Package org.opencv.ximgproc
Class Ximgproc
- java.lang.Object
  - org.opencv.ximgproc.Ximgproc
public class Ximgproc extends java.lang.Object
-
-
Field Summary
Fields
Modifier and Type    Field
static int    AM_FILTER
static int    ARO_0_45
static int    ARO_315_0
static int    ARO_315_135
static int    ARO_315_45
static int    ARO_45_135
static int    ARO_45_90
static int    ARO_90_135
static int    ARO_CTR_HOR
static int    ARO_CTR_VER
static int    BINARIZATION_NIBLACK
static int    BINARIZATION_NICK
static int    BINARIZATION_SAUVOLA
static int    BINARIZATION_WOLF
static int    DTF_IC
static int    DTF_NC
static int    DTF_RF
static int    FHT_ADD
static int    FHT_AVE
static int    FHT_MAX
static int    FHT_MIN
static int    GUIDED_FILTER
static int    HDO_DESKEW
static int    HDO_RAW
static int    MSLIC
static int    SLIC
static int    SLICO
static int    THINNING_GUOHALL
static int    THINNING_ZHANGSUEN
static int    WMF_COS
static int    WMF_EXP
static int    WMF_IV1
static int    WMF_IV2
static int    WMF_JAC
static int    WMF_OFF
-
Constructor Summary
Constructors
Ximgproc()
-
Method Summary
Modifier and Type    Method and Description
static void amFilter(Mat joint, Mat src, Mat dst, double sigma_s, double sigma_r)
    Simple one-line Adaptive Manifold Filter call.
static void amFilter(Mat joint, Mat src, Mat dst, double sigma_s, double sigma_r, boolean adjust_outliers)
    Simple one-line Adaptive Manifold Filter call.
static void anisotropicDiffusion(Mat src, Mat dst, float alpha, float K, int niters)
    Performs anisotropic diffusion on an image.
static void bilateralTextureFilter(Mat src, Mat dst)
    Applies the bilateral texture filter to an image.
static void bilateralTextureFilter(Mat src, Mat dst, int fr)
    Applies the bilateral texture filter to an image.
static void bilateralTextureFilter(Mat src, Mat dst, int fr, int numIter)
    Applies the bilateral texture filter to an image.
static void bilateralTextureFilter(Mat src, Mat dst, int fr, int numIter, double sigmaAlpha)
    Applies the bilateral texture filter to an image.
static void bilateralTextureFilter(Mat src, Mat dst, int fr, int numIter, double sigmaAlpha, double sigmaAvg)
    Applies the bilateral texture filter to an image.
static double computeBadPixelPercent(Mat GT, Mat src, Rect ROI)
    Function for computing the percent of "bad" pixels in the disparity map (pixels where the error is higher than a specified threshold).
static double computeBadPixelPercent(Mat GT, Mat src, Rect ROI, int thresh)
    Function for computing the percent of "bad" pixels in the disparity map (pixels where the error is higher than a specified threshold).
static double computeMSE(Mat GT, Mat src, Rect ROI)
    Function for computing mean square error for disparity maps.
static void contourSampling(Mat src, Mat out, int nbElt)
    Contour sampling.
static void covarianceEstimation(Mat src, Mat dst, int windowRows, int windowCols)
    Computes the estimated covariance matrix of an image using the sliding window formulation.
static AdaptiveManifoldFilter createAMFilter(double sigma_s, double sigma_r)
    Factory method that creates an instance of AdaptiveManifoldFilter and runs some initialization routines.
static AdaptiveManifoldFilter createAMFilter(double sigma_s, double sigma_r, boolean adjust_outliers)
    Factory method that creates an instance of AdaptiveManifoldFilter and runs some initialization routines.
static ContourFitting createContourFitting()
    Create a ContourFitting algorithm object.
static ContourFitting createContourFitting(int ctr)
    Create a ContourFitting algorithm object.
static ContourFitting createContourFitting(int ctr, int fd)
    Create a ContourFitting algorithm object.
static DisparityWLSFilter createDisparityWLSFilter(StereoMatcher matcher_left)
    Convenience factory method that creates an instance of DisparityWLSFilter and sets up all the relevant filter parameters automatically based on the matcher instance.
static DisparityWLSFilter createDisparityWLSFilterGeneric(boolean use_confidence)
    More generic factory method that creates an instance of DisparityWLSFilter and executes basic initialization routines.
static DTFilter createDTFilter(Mat guide, double sigmaSpatial, double sigmaColor)
    Factory method that creates an instance of DTFilter and runs its initialization routines.
static DTFilter createDTFilter(Mat guide, double sigmaSpatial, double sigmaColor, int mode)
    Factory method that creates an instance of DTFilter and runs its initialization routines.
static DTFilter createDTFilter(Mat guide, double sigmaSpatial, double sigmaColor, int mode, int numIters)
    Factory method that creates an instance of DTFilter and runs its initialization routines.
static EdgeAwareInterpolator createEdgeAwareInterpolator()
    Factory method that creates an instance of the EdgeAwareInterpolator.
static EdgeBoxes createEdgeBoxes()
    Creates an EdgeBoxes object.
static EdgeBoxes createEdgeBoxes(float alpha)
    Creates an EdgeBoxes object.
static EdgeBoxes createEdgeBoxes(float alpha, float beta)
    Creates an EdgeBoxes object.
static EdgeBoxes createEdgeBoxes(float alpha, float beta, float eta)
    Creates an EdgeBoxes object.
static EdgeBoxes createEdgeBoxes(float alpha, float beta, float eta, float minScore)
    Creates an EdgeBoxes object.
static EdgeBoxes createEdgeBoxes(float alpha, float beta, float eta, float minScore, int maxBoxes)
    Creates an EdgeBoxes object.
static EdgeBoxes createEdgeBoxes(float alpha, float beta, float eta, float minScore, int maxBoxes, float edgeMinMag)
    Creates an EdgeBoxes object.
static EdgeBoxes createEdgeBoxes(float alpha, float beta, float eta, float minScore, int maxBoxes, float edgeMinMag, float edgeMergeThr)
    Creates an EdgeBoxes object.
static EdgeBoxes createEdgeBoxes(float alpha, float beta, float eta, float minScore, int maxBoxes, float edgeMinMag, float edgeMergeThr, float clusterMinMag)
    Creates an EdgeBoxes object.
static EdgeBoxes createEdgeBoxes(float alpha, float beta, float eta, float minScore, int maxBoxes, float edgeMinMag, float edgeMergeThr, float clusterMinMag, float maxAspectRatio)
    Creates an EdgeBoxes object.
static EdgeBoxes createEdgeBoxes(float alpha, float beta, float eta, float minScore, int maxBoxes, float edgeMinMag, float edgeMergeThr, float clusterMinMag, float maxAspectRatio, float minBoxArea)
    Creates an EdgeBoxes object.
static EdgeBoxes createEdgeBoxes(float alpha, float beta, float eta, float minScore, int maxBoxes, float edgeMinMag, float edgeMergeThr, float clusterMinMag, float maxAspectRatio, float minBoxArea, float gamma)
    Creates an EdgeBoxes object.
static EdgeBoxes createEdgeBoxes(float alpha, float beta, float eta, float minScore, int maxBoxes, float edgeMinMag, float edgeMergeThr, float clusterMinMag, float maxAspectRatio, float minBoxArea, float gamma, float kappa)
    Creates an EdgeBoxes object.
static EdgeDrawing createEdgeDrawing()
    Creates a smart pointer to an EdgeDrawing object and initializes it.
static FastBilateralSolverFilter createFastBilateralSolverFilter(Mat guide, double sigma_spatial, double sigma_luma, double sigma_chroma)
    Factory method that creates an instance of FastBilateralSolverFilter and executes the initialization routines.
static FastBilateralSolverFilter createFastBilateralSolverFilter(Mat guide, double sigma_spatial, double sigma_luma, double sigma_chroma, double lambda)
    Factory method that creates an instance of FastBilateralSolverFilter and executes the initialization routines.
static FastBilateralSolverFilter createFastBilateralSolverFilter(Mat guide, double sigma_spatial, double sigma_luma, double sigma_chroma, double lambda, int num_iter)
    Factory method that creates an instance of FastBilateralSolverFilter and executes the initialization routines.
static FastBilateralSolverFilter createFastBilateralSolverFilter(Mat guide, double sigma_spatial, double sigma_luma, double sigma_chroma, double lambda, int num_iter, double max_tol)
    Factory method that creates an instance of FastBilateralSolverFilter and executes the initialization routines.
static FastGlobalSmootherFilter createFastGlobalSmootherFilter(Mat guide, double lambda, double sigma_color)
    Factory method that creates an instance of FastGlobalSmootherFilter and executes the initialization routines.
static FastGlobalSmootherFilter createFastGlobalSmootherFilter(Mat guide, double lambda, double sigma_color, double lambda_attenuation)
    Factory method that creates an instance of FastGlobalSmootherFilter and executes the initialization routines.
static FastGlobalSmootherFilter createFastGlobalSmootherFilter(Mat guide, double lambda, double sigma_color, double lambda_attenuation, int num_iter)
    Factory method that creates an instance of FastGlobalSmootherFilter and executes the initialization routines.
static FastLineDetector createFastLineDetector()
    Creates a smart pointer to a FastLineDetector object and initializes it.
static FastLineDetector createFastLineDetector(int length_threshold)
    Creates a smart pointer to a FastLineDetector object and initializes it.
static FastLineDetector createFastLineDetector(int length_threshold, float distance_threshold)
    Creates a smart pointer to a FastLineDetector object and initializes it.
static FastLineDetector createFastLineDetector(int length_threshold, float distance_threshold, double canny_th1)
    Creates a smart pointer to a FastLineDetector object and initializes it.
static FastLineDetector createFastLineDetector(int length_threshold, float distance_threshold, double canny_th1, double canny_th2)
    Creates a smart pointer to a FastLineDetector object and initializes it.
static FastLineDetector createFastLineDetector(int length_threshold, float distance_threshold, double canny_th1, double canny_th2, int canny_aperture_size)
    Creates a smart pointer to a FastLineDetector object and initializes it.
static FastLineDetector createFastLineDetector(int length_threshold, float distance_threshold, double canny_th1, double canny_th2, int canny_aperture_size, boolean do_merge)
    Creates a smart pointer to a FastLineDetector object and initializes it.
static GraphSegmentation createGraphSegmentation()
    Creates a graph based segmentor.
static GraphSegmentation createGraphSegmentation(double sigma)
    Creates a graph based segmentor.
static GraphSegmentation createGraphSegmentation(double sigma, float k)
    Creates a graph based segmentor.
static GraphSegmentation createGraphSegmentation(double sigma, float k, int min_size)
    Creates a graph based segmentor.
static GuidedFilter createGuidedFilter(Mat guide, int radius, double eps)
    Factory method that creates an instance of GuidedFilter and runs its initialization routines.
static RFFeatureGetter createRFFeatureGetter()
static StereoMatcher createRightMatcher(StereoMatcher matcher_left)
    Convenience method to set up the matcher for computing the right-view disparity map that is required in case of filtering with confidence.
static SelectiveSearchSegmentation createSelectiveSearchSegmentation()
    Create a new SelectiveSearchSegmentation class.
static SelectiveSearchSegmentationStrategyColor createSelectiveSearchSegmentationStrategyColor()
    Create a new color-based strategy.
static SelectiveSearchSegmentationStrategyFill createSelectiveSearchSegmentationStrategyFill()
    Create a new fill-based strategy.
static SelectiveSearchSegmentationStrategyMultiple createSelectiveSearchSegmentationStrategyMultiple()
    Create a new multiple strategy.
static SelectiveSearchSegmentationStrategyMultiple createSelectiveSearchSegmentationStrategyMultiple(SelectiveSearchSegmentationStrategy s1)
    Create a new multiple strategy and set one substrategy.
static SelectiveSearchSegmentationStrategyMultiple createSelectiveSearchSegmentationStrategyMultiple(SelectiveSearchSegmentationStrategy s1, SelectiveSearchSegmentationStrategy s2)
    Create a new multiple strategy and set two substrategies, with equal weights.
static SelectiveSearchSegmentationStrategyMultiple createSelectiveSearchSegmentationStrategyMultiple(SelectiveSearchSegmentationStrategy s1, SelectiveSearchSegmentationStrategy s2, SelectiveSearchSegmentationStrategy s3)
    Create a new multiple strategy and set three substrategies, with equal weights.
static SelectiveSearchSegmentationStrategyMultiple createSelectiveSearchSegmentationStrategyMultiple(SelectiveSearchSegmentationStrategy s1, SelectiveSearchSegmentationStrategy s2, SelectiveSearchSegmentationStrategy s3, SelectiveSearchSegmentationStrategy s4)
    Create a new multiple strategy and set four substrategies, with equal weights.
static SelectiveSearchSegmentationStrategySize createSelectiveSearchSegmentationStrategySize()
    Create a new size-based strategy.
static SelectiveSearchSegmentationStrategyTexture createSelectiveSearchSegmentationStrategyTexture()
    Create a new texture-based strategy.
static StructuredEdgeDetection createStructuredEdgeDetection(java.lang.String model)
static StructuredEdgeDetection createStructuredEdgeDetection(java.lang.String model, RFFeatureGetter howToGetFeatures)
static SuperpixelLSC createSuperpixelLSC(Mat image)
    Class implementing the LSC (Linear Spectral Clustering) superpixels.
static SuperpixelLSC createSuperpixelLSC(Mat image, int region_size)
    Class implementing the LSC (Linear Spectral Clustering) superpixels.
static SuperpixelLSC createSuperpixelLSC(Mat image, int region_size, float ratio)
    Class implementing the LSC (Linear Spectral Clustering) superpixels.
static SuperpixelSEEDS createSuperpixelSEEDS(int image_width, int image_height, int image_channels, int num_superpixels, int num_levels)
    Initializes a SuperpixelSEEDS object.
static SuperpixelSEEDS createSuperpixelSEEDS(int image_width, int image_height, int image_channels, int num_superpixels, int num_levels, int prior)
    Initializes a SuperpixelSEEDS object.
static SuperpixelSEEDS createSuperpixelSEEDS(int image_width, int image_height, int image_channels, int num_superpixels, int num_levels, int prior, int histogram_bins)
    Initializes a SuperpixelSEEDS object.
static SuperpixelSEEDS createSuperpixelSEEDS(int image_width, int image_height, int image_channels, int num_superpixels, int num_levels, int prior, int histogram_bins, boolean double_step)
    Initializes a SuperpixelSEEDS object.
static SuperpixelSLIC createSuperpixelSLIC(Mat image)
    Initializes a SuperpixelSLIC object.
static SuperpixelSLIC createSuperpixelSLIC(Mat image, int algorithm)
    Initializes a SuperpixelSLIC object.
static SuperpixelSLIC createSuperpixelSLIC(Mat image, int algorithm, int region_size)
    Initializes a SuperpixelSLIC object.
static SuperpixelSLIC createSuperpixelSLIC(Mat image, int algorithm, int region_size, float ruler)
    Initializes a SuperpixelSLIC object.
static void dtFilter(Mat guide, Mat src, Mat dst, double sigmaSpatial, double sigmaColor)
    Simple one-line Domain Transform filter call.
static void dtFilter(Mat guide, Mat src, Mat dst, double sigmaSpatial, double sigmaColor, int mode)
    Simple one-line Domain Transform filter call.
static void dtFilter(Mat guide, Mat src, Mat dst, double sigmaSpatial, double sigmaColor, int mode, int numIters)
    Simple one-line Domain Transform filter call.
static void fastBilateralSolverFilter(Mat guide, Mat src, Mat confidence, Mat dst)
    Simple one-line Fast Bilateral Solver filter call.
static void fastBilateralSolverFilter(Mat guide, Mat src, Mat confidence, Mat dst, double sigma_spatial)
    Simple one-line Fast Bilateral Solver filter call.
static void fastBilateralSolverFilter(Mat guide, Mat src, Mat confidence, Mat dst, double sigma_spatial, double sigma_luma)
    Simple one-line Fast Bilateral Solver filter call.
static void fastBilateralSolverFilter(Mat guide, Mat src, Mat confidence, Mat dst, double sigma_spatial, double sigma_luma, double sigma_chroma)
    Simple one-line Fast Bilateral Solver filter call.
static void fastBilateralSolverFilter(Mat guide, Mat src, Mat confidence, Mat dst, double sigma_spatial, double sigma_luma, double sigma_chroma, double lambda)
    Simple one-line Fast Bilateral Solver filter call.
static void fastBilateralSolverFilter(Mat guide, Mat src, Mat confidence, Mat dst, double sigma_spatial, double sigma_luma, double sigma_chroma, double lambda, int num_iter)
    Simple one-line Fast Bilateral Solver filter call.
static void fastBilateralSolverFilter(Mat guide, Mat src, Mat confidence, Mat dst, double sigma_spatial, double sigma_luma, double sigma_chroma, double lambda, int num_iter, double max_tol)
    Simple one-line Fast Bilateral Solver filter call.
static void fastGlobalSmootherFilter(Mat guide, Mat src, Mat dst, double lambda, double sigma_color)
    Simple one-line Fast Global Smoother filter call.
static void fastGlobalSmootherFilter(Mat guide, Mat src, Mat dst, double lambda, double sigma_color, double lambda_attenuation)
    Simple one-line Fast Global Smoother filter call.
static void fastGlobalSmootherFilter(Mat guide, Mat src, Mat dst, double lambda, double sigma_color, double lambda_attenuation, int num_iter)
    Simple one-line Fast Global Smoother filter call.
static void FastHoughTransform(Mat src, Mat dst, int dstMatDepth)
    Calculates the 2D Fast Hough transform of an image.
static void FastHoughTransform(Mat src, Mat dst, int dstMatDepth, int angleRange)
    Calculates the 2D Fast Hough transform of an image.
static void FastHoughTransform(Mat src, Mat dst, int dstMatDepth, int angleRange, int op)
    Calculates the 2D Fast Hough transform of an image.
static void FastHoughTransform(Mat src, Mat dst, int dstMatDepth, int angleRange, int op, int makeSkew)
    Calculates the 2D Fast Hough transform of an image.
static void fourierDescriptor(Mat src, Mat dst)
    Fourier descriptors for planar closed curves. For more details about this implementation, please see CITE: PersoonFu1977.
static void fourierDescriptor(Mat src, Mat dst, int nbElt)
    Fourier descriptors for planar closed curves. For more details about this implementation, please see CITE: PersoonFu1977.
static void fourierDescriptor(Mat src, Mat dst, int nbElt, int nbFD)
    Fourier descriptors for planar closed curves. For more details about this implementation, please see CITE: PersoonFu1977.
static void getDisparityVis(Mat src, Mat dst)
    Function for creating a disparity map visualization (clamped CV_8U image).
static void getDisparityVis(Mat src, Mat dst, double scale)
    Function for creating a disparity map visualization (clamped CV_8U image).
static void GradientDericheX(Mat op, Mat dst, double alpha, double omega)
    Applies the X Deriche filter to an image.
static void GradientDericheY(Mat op, Mat dst, double alpha, double omega)
    Applies the Y Deriche filter to an image.
static void guidedFilter(Mat guide, Mat src, Mat dst, int radius, double eps)
    Simple one-line Guided Filter call.
static void guidedFilter(Mat guide, Mat src, Mat dst, int radius, double eps, int dDepth)
    Simple one-line Guided Filter call.
static void jointBilateralFilter(Mat joint, Mat src, Mat dst, int d, double sigmaColor, double sigmaSpace)
    Applies the joint bilateral filter to an image.
static void jointBilateralFilter(Mat joint, Mat src, Mat dst, int d, double sigmaColor, double sigmaSpace, int borderType)
    Applies the joint bilateral filter to an image.
static void l0Smooth(Mat src, Mat dst)
    Global image smoothing via L0 gradient minimization.
static void l0Smooth(Mat src, Mat dst, double lambda)
    Global image smoothing via L0 gradient minimization.
static void l0Smooth(Mat src, Mat dst, double lambda, double kappa)
    Global image smoothing via L0 gradient minimization.
static void niBlackThreshold(Mat _src, Mat _dst, double maxValue, int type, int blockSize, double k)
    Performs thresholding on input images using Niblack's technique or some of the popular variations it inspired.
static void niBlackThreshold(Mat _src, Mat _dst, double maxValue, int type, int blockSize, double k, int binarizationMethod)
    Performs thresholding on input images using Niblack's technique or some of the popular variations it inspired.
static void PeiLinNormalization(Mat I, Mat T)
static int readGT(java.lang.String src_path, Mat dst)
    Function for reading ground truth disparity maps.
static void rollingGuidanceFilter(Mat src, Mat dst)
    Applies the rolling guidance filter to an image.
static void rollingGuidanceFilter(Mat src, Mat dst, int d)
    Applies the rolling guidance filter to an image.
static void rollingGuidanceFilter(Mat src, Mat dst, int d, double sigmaColor)
    Applies the rolling guidance filter to an image.
static void rollingGuidanceFilter(Mat src, Mat dst, int d, double sigmaColor, double sigmaSpace)
    Applies the rolling guidance filter to an image.
static void rollingGuidanceFilter(Mat src, Mat dst, int d, double sigmaColor, double sigmaSpace, int numOfIter)
    Applies the rolling guidance filter to an image.
static void rollingGuidanceFilter(Mat src, Mat dst, int d, double sigmaColor, double sigmaSpace, int numOfIter, int borderType)
    Applies the rolling guidance filter to an image.
static void thinning(Mat src, Mat dst)
    Applies a binary blob thinning operation to achieve a skeletonization of the input image.
static void thinning(Mat src, Mat dst, int thinningType)
    Applies a binary blob thinning operation to achieve a skeletonization of the input image.
static void transformFD(Mat src, Mat t, Mat dst)
    Transform a contour.
static void transformFD(Mat src, Mat t, Mat dst, boolean fdContour)
    Transform a contour.
static void weightedMedianFilter(Mat joint, Mat src, Mat dst, int r)
    Applies the weighted median filter to an image.
static void weightedMedianFilter(Mat joint, Mat src, Mat dst, int r, double sigma)
    Applies the weighted median filter to an image.
static void weightedMedianFilter(Mat joint, Mat src, Mat dst, int r, double sigma, int weightType)
    Applies the weighted median filter to an image.
static void weightedMedianFilter(Mat joint, Mat src, Mat dst, int r, double sigma, int weightType, Mat mask)
    Applies the weighted median filter to an image.
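The one-line wrappers above (dtFilter, guidedFilter, jointBilateralFilter, fastGlobalSmootherFilter, and so on) all follow the same guide/src/dst calling pattern. A minimal sketch for guidedFilter is shown below; the image names and parameter values are illustrative assumptions, and the OpenCV native library is assumed to be loaded already.

```java
import org.opencv.core.Mat;
import org.opencv.ximgproc.Ximgproc;

class GuidedFilterSketch {
    // "guide" and "src" are assumed to be 8-bit images of the same size;
    // the guide image steers the edge-preserving smoothing of src.
    static Mat smooth(Mat guide, Mat src) {
        Mat dst = new Mat();
        // radius 8, eps roughly (0.02 * 255)^2 for 8-bit data (illustrative values)
        Ximgproc.guidedFilter(guide, src, dst, 8, 26.0);
        return dst;
    }
}
```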
-
-
-
Field Detail
-
ARO_0_45
public static final int ARO_0_45
- See Also:
- Constant Field Values
-
ARO_45_90
public static final int ARO_45_90
- See Also:
- Constant Field Values
-
ARO_90_135
public static final int ARO_90_135
- See Also:
- Constant Field Values
-
ARO_315_0
public static final int ARO_315_0
- See Also:
- Constant Field Values
-
ARO_315_45
public static final int ARO_315_45
- See Also:
- Constant Field Values
-
ARO_45_135
public static final int ARO_45_135
- See Also:
- Constant Field Values
-
ARO_315_135
public static final int ARO_315_135
- See Also:
- Constant Field Values
-
ARO_CTR_HOR
public static final int ARO_CTR_HOR
- See Also:
- Constant Field Values
-
ARO_CTR_VER
public static final int ARO_CTR_VER
- See Also:
- Constant Field Values
-
DTF_NC
public static final int DTF_NC
- See Also:
- Constant Field Values
-
DTF_IC
public static final int DTF_IC
- See Also:
- Constant Field Values
-
DTF_RF
public static final int DTF_RF
- See Also:
- Constant Field Values
-
GUIDED_FILTER
public static final int GUIDED_FILTER
- See Also:
- Constant Field Values
-
AM_FILTER
public static final int AM_FILTER
- See Also:
- Constant Field Values
-
HDO_RAW
public static final int HDO_RAW
- See Also:
- Constant Field Values
-
HDO_DESKEW
public static final int HDO_DESKEW
- See Also:
- Constant Field Values
-
FHT_MIN
public static final int FHT_MIN
- See Also:
- Constant Field Values
-
FHT_MAX
public static final int FHT_MAX
- See Also:
- Constant Field Values
-
FHT_ADD
public static final int FHT_ADD
- See Also:
- Constant Field Values
-
FHT_AVE
public static final int FHT_AVE
- See Also:
- Constant Field Values
-
BINARIZATION_NIBLACK
public static final int BINARIZATION_NIBLACK
- See Also:
- Constant Field Values
-
BINARIZATION_SAUVOLA
public static final int BINARIZATION_SAUVOLA
- See Also:
- Constant Field Values
-
BINARIZATION_WOLF
public static final int BINARIZATION_WOLF
- See Also:
- Constant Field Values
-
BINARIZATION_NICK
public static final int BINARIZATION_NICK
- See Also:
- Constant Field Values
-
SLIC
public static final int SLIC
- See Also:
- Constant Field Values
-
SLICO
public static final int SLICO
- See Also:
- Constant Field Values
-
MSLIC
public static final int MSLIC
- See Also:
- Constant Field Values
-
THINNING_ZHANGSUEN
public static final int THINNING_ZHANGSUEN
- See Also:
- Constant Field Values
-
THINNING_GUOHALL
public static final int THINNING_GUOHALL
- See Also:
- Constant Field Values
-
WMF_EXP
public static final int WMF_EXP
- See Also:
- Constant Field Values
-
WMF_IV1
public static final int WMF_IV1
- See Also:
- Constant Field Values
-
WMF_IV2
public static final int WMF_IV2
- See Also:
- Constant Field Values
-
WMF_COS
public static final int WMF_COS
- See Also:
- Constant Field Values
-
WMF_JAC
public static final int WMF_JAC
- See Also:
- Constant Field Values
-
WMF_OFF
public static final int WMF_OFF
- See Also:
- Constant Field Values
-
-
Method Detail
-
niBlackThreshold
public static void niBlackThreshold(Mat _src, Mat _dst, double maxValue, int type, int blockSize, double k, int binarizationMethod)
Performs thresholding on input images using Niblack's technique or some of the popular variations it inspired. The function transforms a grayscale image to a binary image according to the formulae:
- THRESH_BINARY: \(dst(x,y) = \begin{cases} \texttt{maxValue} & \text{if } src(x,y) > T(x,y) \\ 0 & \text{otherwise} \end{cases}\)
- THRESH_BINARY_INV: \(dst(x,y) = \begin{cases} 0 & \text{if } src(x,y) > T(x,y) \\ \texttt{maxValue} & \text{otherwise} \end{cases}\)
where \(T(x,y)\) is a threshold calculated individually for each pixel.
- Parameters:
_src - Source 8-bit single-channel image.
_dst - Destination image of the same size and the same type as src.
maxValue - Non-zero value assigned to the pixels for which the condition is satisfied, used with the THRESH_BINARY and THRESH_BINARY_INV thresholding types.
type - Thresholding type, see cv::ThresholdTypes.
blockSize - Size of a pixel neighborhood that is used to calculate a threshold value for the pixel: 3, 5, 7, and so on.
k - The user-adjustable parameter used by Niblack and inspired techniques. For Niblack, this is normally a value between 0 and 1 that is multiplied with the standard deviation and subtracted from the mean.
binarizationMethod - Binarization method to use. By default, Niblack's technique is used. Other techniques can be specified, see cv::ximgproc::LocalBinarizationMethods.
SEE: threshold, adaptiveThreshold
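A minimal usage sketch of this overload. The file name, block size and k value are illustrative assumptions.

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import org.opencv.ximgproc.Ximgproc;

public class NiblackSketch {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);   // load the OpenCV native library

        // "input.png" is a placeholder; any 8-bit grayscale image works
        Mat src = Imgcodecs.imread("input.png", Imgcodecs.IMREAD_GRAYSCALE);
        Mat dst = new Mat();

        // 25x25 neighborhood, k = -0.2 (a common choice for Niblack-style thresholding)
        Ximgproc.niBlackThreshold(src, dst, 255, Imgproc.THRESH_BINARY, 25, -0.2,
                Ximgproc.BINARIZATION_NIBLACK);

        Imgcodecs.imwrite("binarized.png", dst);
    }
}
```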
-
niBlackThreshold
public static void niBlackThreshold(Mat _src, Mat _dst, double maxValue, int type, int blockSize, double k)
Performs thresholding on input images using Niblack's technique or some of the popular variations it inspired. The function transforms a grayscale image to a binary image according to the formulae:
- THRESH_BINARY: \(dst(x,y) = \begin{cases} \texttt{maxValue} & \text{if } src(x,y) > T(x,y) \\ 0 & \text{otherwise} \end{cases}\)
- THRESH_BINARY_INV: \(dst(x,y) = \begin{cases} 0 & \text{if } src(x,y) > T(x,y) \\ \texttt{maxValue} & \text{otherwise} \end{cases}\)
where \(T(x,y)\) is a threshold calculated individually for each pixel.
- Parameters:
_src - Source 8-bit single-channel image.
_dst - Destination image of the same size and the same type as src.
maxValue - Non-zero value assigned to the pixels for which the condition is satisfied, used with the THRESH_BINARY and THRESH_BINARY_INV thresholding types.
type - Thresholding type, see cv::ThresholdTypes.
blockSize - Size of a pixel neighborhood that is used to calculate a threshold value for the pixel: 3, 5, 7, and so on.
k - The user-adjustable parameter used by Niblack and inspired techniques. For Niblack, this is normally a value between 0 and 1 that is multiplied with the standard deviation and subtracted from the mean. Other techniques can be specified, see cv::ximgproc::LocalBinarizationMethods.
SEE: threshold, adaptiveThreshold
-
thinning
public static void thinning(Mat src, Mat dst, int thinningType)
Applies a binary blob thinning operation to achieve a skeletonization of the input image. The function transforms a binary blob image into a skeletonized form using the technique of Zhang-Suen.
- Parameters:
src - Source 8-bit single-channel image, containing binary blobs, with blobs having 255 pixel values.
dst - Destination image of the same size and the same type as src. The function can work in-place.
thinningType - Value that defines which thinning algorithm should be used. See cv::ximgproc::ThinningTypes.
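A short sketch of the call, assuming a binary mask has already been produced (for example by niBlackThreshold above):

```java
import org.opencv.core.Mat;
import org.opencv.ximgproc.Ximgproc;

class ThinningSketch {
    // "mask" is assumed to be an 8-bit single-channel binary image with blobs set to 255.
    static Mat skeletonize(Mat mask) {
        Mat skeleton = new Mat();
        Ximgproc.thinning(mask, skeleton, Ximgproc.THINNING_GUOHALL);   // Guo-Hall variant
        return skeleton;
    }
}
```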
-
thinning
public static void thinning(Mat src, Mat dst)
Applies a binary blob thinning operation to achieve a skeletonization of the input image. The function transforms a binary blob image into a skeletonized form using the technique of Zhang-Suen.
- Parameters:
src - Source 8-bit single-channel image, containing binary blobs, with blobs having 255 pixel values.
dst - Destination image of the same size and the same type as src. The function can work in-place.
-
anisotropicDiffusion
public static void anisotropicDiffusion(Mat src, Mat dst, float alpha, float K, int niters)
Performs anisotropic diffusion on an image. The function applies Perona-Malik anisotropic diffusion to an image. This is the solution to the partial differential equation: \(\frac{\partial I}{\partial t} = \mathrm{div}\left(c(x,y,t)\nabla I\right) = \nabla c \cdot \nabla I + c(x,y,t)\Delta I\) Suggested functions for c(x,y,t) are: \(c\left(\|\nabla I\|\right) = e^{-\left(\|\nabla I\|/K\right)^{2}}\) or \(c\left(\|\nabla I\|\right) = \frac{1}{1+\left(\frac{\|\nabla I\|}{K}\right)^{2}}\)
- Parameters:
src - Source image with 3 channels.
dst - Destination image of the same size and the same number of channels as src.
alpha - The amount of time to step forward by on each iteration (normally, it's between 0 and 1).
K - sensitivity to the edges
niters - The number of iterations
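A minimal sketch; the alpha, K and iteration values are illustrative assumptions only:

```java
import org.opencv.core.Mat;
import org.opencv.ximgproc.Ximgproc;

class DiffusionSketch {
    // "bgr" is assumed to be a 3-channel 8-bit color image.
    static Mat smooth(Mat bgr) {
        Mat dst = new Mat();
        // time step 0.1, edge sensitivity K = 20, 10 iterations
        Ximgproc.anisotropicDiffusion(bgr, dst, 0.1f, 20f, 10);
        return dst;
    }
}
```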
-
createSuperpixelSEEDS
public static SuperpixelSEEDS createSuperpixelSEEDS(int image_width, int image_height, int image_channels, int num_superpixels, int num_levels, int prior, int histogram_bins, boolean double_step)
Initializes a SuperpixelSEEDS object.
- Parameters:
image_width - Image width.
image_height - Image height.
image_channels - Number of channels of the image.
num_superpixels - Desired number of superpixels. Note that the actual number may be smaller due to restrictions (depending on the image size and num_levels). Use getNumberOfSuperpixels() to get the actual number.
num_levels - Number of block levels. The more levels, the more accurate the segmentation, but the more memory and CPU time it needs.
prior - Enables the 3x3 shape smoothing term if > 0. A larger value leads to smoother shapes. prior must be in the range [0, 5].
histogram_bins - Number of histogram bins.
double_step - If true, iterate each block level twice for higher accuracy.
The function initializes a SuperpixelSEEDS object for the input image. It stores the parameters of the image: image_width, image_height and image_channels. It also sets the parameters of the SEEDS superpixel algorithm, which are: num_superpixels, num_levels, use_prior, histogram_bins and double_step. The number of levels in num_levels defines the amount of block levels that the algorithm uses in the optimization. The initialization is a grid in which the superpixels are equally distributed through the width and the height of the image. The larger blocks correspond to the superpixel size, and the levels with smaller blocks are formed by dividing the larger blocks into 2 x 2 blocks of pixels, recursively until the smallest block level. An example of initialization of 4 block levels is illustrated in the following figure. ![image](pics/superpixels_blocks.png)
- Returns:
- automatically generated
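A usage sketch under the assumption that the returned object exposes iterate() and getLabels() (these belong to the SuperpixelSEEDS class, not to this class); the parameter values are illustrative:

```java
import org.opencv.core.Mat;
import org.opencv.ximgproc.SuperpixelSEEDS;
import org.opencv.ximgproc.Ximgproc;

class SeedsSketch {
    // "image" is assumed to be a color image in the desired color space.
    static Mat superpixelLabels(Mat image) {
        SuperpixelSEEDS seeds = Ximgproc.createSuperpixelSEEDS(
                image.cols(), image.rows(), image.channels(),
                400,    // desired number of superpixels
                4,      // block levels
                2,      // prior (shape smoothing term)
                5,      // histogram bins
                false); // double_step
        seeds.iterate(image);      // run the segmentation
        Mat labels = new Mat();
        seeds.getLabels(labels);   // per-pixel superpixel labels (CV_32S)
        return labels;
    }
}
```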
-
createSuperpixelSEEDS
public static SuperpixelSEEDS createSuperpixelSEEDS(int image_width, int image_height, int image_channels, int num_superpixels, int num_levels, int prior, int histogram_bins)
Initializes a SuperpixelSEEDS object.
- Parameters:
image_width - Image width.
image_height - Image height.
image_channels - Number of channels of the image.
num_superpixels - Desired number of superpixels. Note that the actual number may be smaller due to restrictions (depending on the image size and num_levels). Use getNumberOfSuperpixels() to get the actual number.
num_levels - Number of block levels. The more levels, the more accurate the segmentation, but the more memory and CPU time it needs.
prior - Enables the 3x3 shape smoothing term if > 0. A larger value leads to smoother shapes. prior must be in the range [0, 5].
histogram_bins - Number of histogram bins.
The function initializes a SuperpixelSEEDS object for the input image. It stores the parameters of the image: image_width, image_height and image_channels. It also sets the parameters of the SEEDS superpixel algorithm, which are: num_superpixels, num_levels, use_prior, histogram_bins and double_step. The number of levels in num_levels defines the amount of block levels that the algorithm uses in the optimization. The initialization is a grid in which the superpixels are equally distributed through the width and the height of the image. The larger blocks correspond to the superpixel size, and the levels with smaller blocks are formed by dividing the larger blocks into 2 x 2 blocks of pixels, recursively until the smallest block level. An example of initialization of 4 block levels is illustrated in the following figure. ![image](pics/superpixels_blocks.png)
- Returns:
- automatically generated
-
createSuperpixelSEEDS
public static SuperpixelSEEDS createSuperpixelSEEDS(int image_width, int image_height, int image_channels, int num_superpixels, int num_levels, int prior)
Initializes a SuperpixelSEEDS object.
- Parameters:
image_width - Image width.
image_height - Image height.
image_channels - Number of channels of the image.
num_superpixels - Desired number of superpixels. Note that the actual number may be smaller due to restrictions (depending on the image size and num_levels). Use getNumberOfSuperpixels() to get the actual number.
num_levels - Number of block levels. The more levels, the more accurate the segmentation, but the more memory and CPU time it needs.
prior - Enables the 3x3 shape smoothing term if > 0. A larger value leads to smoother shapes. prior must be in the range [0, 5].
The function initializes a SuperpixelSEEDS object for the input image. It stores the parameters of the image: image_width, image_height and image_channels. It also sets the parameters of the SEEDS superpixel algorithm, which are: num_superpixels, num_levels, use_prior, histogram_bins and double_step. The number of levels in num_levels defines the amount of block levels that the algorithm uses in the optimization. The initialization is a grid in which the superpixels are equally distributed through the width and the height of the image. The larger blocks correspond to the superpixel size, and the levels with smaller blocks are formed by dividing the larger blocks into 2 x 2 blocks of pixels, recursively until the smallest block level. An example of initialization of 4 block levels is illustrated in the following figure. ![image](pics/superpixels_blocks.png)
- Returns:
- automatically generated
-
createSuperpixelSEEDS
public static SuperpixelSEEDS createSuperpixelSEEDS(int image_width, int image_height, int image_channels, int num_superpixels, int num_levels)
Initializes a SuperpixelSEEDS object.
- Parameters:
image_width - Image width.
image_height - Image height.
image_channels - Number of channels of the image.
num_superpixels - Desired number of superpixels. Note that the actual number may be smaller due to restrictions (depending on the image size and num_levels). Use getNumberOfSuperpixels() to get the actual number.
num_levels - Number of block levels. The more levels, the more accurate the segmentation, but the more memory and CPU time it needs.
The function initializes a SuperpixelSEEDS object for the input image. It stores the parameters of the image: image_width, image_height and image_channels. It also sets the parameters of the SEEDS superpixel algorithm, which are: num_superpixels, num_levels, use_prior, histogram_bins and double_step. The number of levels in num_levels defines the amount of block levels that the algorithm uses in the optimization. The initialization is a grid in which the superpixels are equally distributed through the width and the height of the image. The larger blocks correspond to the superpixel size, and the levels with smaller blocks are formed by dividing the larger blocks into 2 x 2 blocks of pixels, recursively until the smallest block level. An example of initialization of 4 block levels is illustrated in the following figure. ![image](pics/superpixels_blocks.png)
- Returns:
- automatically generated
-
createFastLineDetector
public static FastLineDetector createFastLineDetector(int length_threshold, float distance_threshold, double canny_th1, double canny_th2, int canny_aperture_size, boolean do_merge)
Creates a smart pointer to a FastLineDetector object and initializes it.
- Parameters:
length_threshold - Segments shorter than this will be discarded.
distance_threshold - A point placed farther from a hypothesis line segment than this will be regarded as an outlier.
canny_th1 - First threshold for the hysteresis procedure in Canny().
canny_th2 - Second threshold for the hysteresis procedure in Canny().
canny_aperture_size - Aperture size for the Sobel operator in Canny(). If zero, Canny() is not applied and the input image is taken as an edge image.
do_merge - If true, incremental merging of segments will be performed.
- Returns:
- automatically generated
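A sketch of typical use, assuming the returned object exposes a detect() method (documented on the FastLineDetector class); the threshold values mirror common defaults but are assumptions here:

```java
import org.opencv.core.Mat;
import org.opencv.ximgproc.FastLineDetector;
import org.opencv.ximgproc.Ximgproc;

class FldSketch {
    // "gray" is assumed to be an 8-bit single-channel image.
    static Mat detectSegments(Mat gray) {
        FastLineDetector fld = Ximgproc.createFastLineDetector(
                10,      // length_threshold
                1.41f,   // distance_threshold
                50.0,    // canny_th1
                50.0,    // canny_th2
                3,       // canny_aperture_size (0 would skip Canny)
                false);  // do_merge
        Mat lines = new Mat();
        fld.detect(gray, lines);   // each row holds x1, y1, x2, y2 of a segment
        return lines;
    }
}
```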
-
createFastLineDetector
public static FastLineDetector createFastLineDetector(int length_threshold, float distance_threshold, double canny_th1, double canny_th2, int canny_aperture_size)
Creates a smart pointer to a FastLineDetector object and initializes it.
- Parameters:
length_threshold - Segments shorter than this will be discarded.
distance_threshold - A point placed farther from a hypothesis line segment than this will be regarded as an outlier.
canny_th1 - First threshold for the hysteresis procedure in Canny().
canny_th2 - Second threshold for the hysteresis procedure in Canny().
canny_aperture_size - Aperture size for the Sobel operator in Canny(). If zero, Canny() is not applied and the input image is taken as an edge image.
- Returns:
- automatically generated
-
createFastLineDetector
public static FastLineDetector createFastLineDetector(int length_threshold, float distance_threshold, double canny_th1, double canny_th2)
Creates a smart pointer to a FastLineDetector object and initializes it.
- Parameters:
length_threshold - Segments shorter than this will be discarded.
distance_threshold - A point placed farther from a hypothesis line segment than this will be regarded as an outlier.
canny_th1 - First threshold for the hysteresis procedure in Canny().
canny_th2 - Second threshold for the hysteresis procedure in Canny().
- Returns:
- automatically generated
-
createFastLineDetector
public static FastLineDetector createFastLineDetector(int length_threshold, float distance_threshold, double canny_th1)
Creates a smart pointer to a FastLineDetector object and initializes it.
- Parameters:
length_threshold - Segments shorter than this will be discarded.
distance_threshold - A point placed farther from a hypothesis line segment than this will be regarded as an outlier.
canny_th1 - First threshold for the hysteresis procedure in Canny().
- Returns:
- automatically generated
-
createFastLineDetector
public static FastLineDetector createFastLineDetector(int length_threshold, float distance_threshold)
Creates a smart pointer to a FastLineDetector object and initializes it.
- Parameters:
length_threshold - Segments shorter than this will be discarded.
distance_threshold - A point placed farther from a hypothesis line segment than this will be regarded as an outlier.
- Returns:
- automatically generated
-
createFastLineDetector
public static FastLineDetector createFastLineDetector(int length_threshold)
Creates a smart pointer to a FastLineDetector object and initializes it.
- Parameters:
length_threshold - Segments shorter than this will be discarded.
- Returns:
- automatically generated
-
createFastLineDetector
public static FastLineDetector createFastLineDetector()
Creates a smart pointer to a FastLineDetector object and initializes it.
- Returns:
- automatically generated
-
createSuperpixelSLIC
public static SuperpixelSLIC createSuperpixelSLIC(Mat image, int algorithm, int region_size, float ruler)
Initializes a SuperpixelSLIC object.
- Parameters:
image - Image to segment.
algorithm - Chooses the algorithm variant to use: SLIC segments the image using a desired region_size; in addition, SLICO will optimize using an adaptive compactness factor, while MSLIC will optimize using manifold methods, resulting in more content-sensitive superpixels.
region_size - Chooses an average superpixel size measured in pixels.
ruler - Chooses the enforcement of the superpixel smoothness factor.
The function initializes a SuperpixelSLIC object for the input image. It sets the parameters of the chosen superpixel algorithm, which are: region_size and ruler. It preallocates some buffers for future computing iterations over the given image. For enhanced results it is recommended for color images to preprocess the image with a little Gaussian blur using a small 3 x 3 kernel and an additional conversion into the CIELAB color space. An example of SLIC versus SLICO and MSLIC is illustrated in the following picture. ![image](pics/superpixels_slic.png)
- Returns:
- automatically generated
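A sketch following the recommendation above (the slight Gaussian blur is omitted for brevity); iterate() and getLabels() are assumed from the SuperpixelSLIC class and the parameter values are illustrative:

```java
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;
import org.opencv.ximgproc.SuperpixelSLIC;
import org.opencv.ximgproc.Ximgproc;

class SlicSketch {
    // "bgr" is assumed to be an 8-bit BGR color image.
    static Mat superpixelLabels(Mat bgr) {
        Mat lab = new Mat();
        Imgproc.cvtColor(bgr, lab, Imgproc.COLOR_BGR2Lab);   // recommended CIELAB conversion
        SuperpixelSLIC slic = Ximgproc.createSuperpixelSLIC(lab, Ximgproc.SLICO, 30, 10.0f);
        slic.iterate(10);            // optimization iterations
        Mat labels = new Mat();
        slic.getLabels(labels);      // per-pixel superpixel labels (CV_32S)
        return labels;
    }
}
```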
-
createSuperpixelSLIC
public static SuperpixelSLIC createSuperpixelSLIC(Mat image, int algorithm, int region_size)
Initializes a SuperpixelSLIC object.
- Parameters:
image - Image to segment.
algorithm - Chooses the algorithm variant to use: SLIC segments the image using a desired region_size; in addition, SLICO will optimize using an adaptive compactness factor, while MSLIC will optimize using manifold methods, resulting in more content-sensitive superpixels.
region_size - Chooses an average superpixel size measured in pixels.
The function initializes a SuperpixelSLIC object for the input image. It sets the parameters of the chosen superpixel algorithm, which are: region_size and ruler. It preallocates some buffers for future computing iterations over the given image. For enhanced results it is recommended for color images to preprocess the image with a little Gaussian blur using a small 3 x 3 kernel and an additional conversion into the CIELAB color space. An example of SLIC versus SLICO and MSLIC is illustrated in the following picture. ![image](pics/superpixels_slic.png)
- Returns:
- automatically generated
-
createSuperpixelSLIC
public static SuperpixelSLIC createSuperpixelSLIC(Mat image, int algorithm)
Initializes a SuperpixelSLIC object.
- Parameters:
image - Image to segment.
algorithm - Chooses the algorithm variant to use: SLIC segments the image using a desired region_size; in addition, SLICO will optimize using an adaptive compactness factor, while MSLIC will optimize using manifold methods, resulting in more content-sensitive superpixels.
The function initializes a SuperpixelSLIC object for the input image. It sets the parameters of the chosen superpixel algorithm, which are: region_size and ruler. It preallocates some buffers for future computing iterations over the given image. For enhanced results it is recommended for color images to preprocess the image with a little Gaussian blur using a small 3 x 3 kernel and an additional conversion into the CIELAB color space. An example of SLIC versus SLICO and MSLIC is illustrated in the following picture. ![image](pics/superpixels_slic.png)
- Returns:
- automatically generated
-
createSuperpixelSLIC
public static SuperpixelSLIC createSuperpixelSLIC(Mat image)
Initializes a SuperpixelSLIC object.
- Parameters:
image - Image to segment.
SLIC segments the image using a desired region_size; in addition, SLICO will optimize using an adaptive compactness factor, while MSLIC will optimize using manifold methods, resulting in more content-sensitive superpixels. The function initializes a SuperpixelSLIC object for the input image. It sets the parameters of the chosen superpixel algorithm, which are: region_size and ruler. It preallocates some buffers for future computing iterations over the given image. For enhanced results it is recommended for color images to preprocess the image with a little Gaussian blur using a small 3 x 3 kernel and an additional conversion into the CIELAB color space. An example of SLIC versus SLICO and MSLIC is illustrated in the following picture. ![image](pics/superpixels_slic.png)
- Returns:
- automatically generated
-
createEdgeBoxes
public static EdgeBoxes createEdgeBoxes(float alpha, float beta, float eta, float minScore, int maxBoxes, float edgeMinMag, float edgeMergeThr, float clusterMinMag, float maxAspectRatio, float minBoxArea, float gamma, float kappa)
Creates an EdgeBoxes object.
- Parameters:
alpha - step size of sliding window search.
beta - NMS threshold for object proposals.
eta - adaptation rate for NMS threshold.
minScore - min score of boxes to detect.
maxBoxes - max number of boxes to detect.
edgeMinMag - edge min magnitude. Increase to trade off accuracy for speed.
edgeMergeThr - edge merge threshold. Increase to trade off accuracy for speed.
clusterMinMag - cluster min magnitude. Increase to trade off accuracy for speed.
maxAspectRatio - max aspect ratio of boxes.
minBoxArea - minimum area of boxes.
gamma - affinity sensitivity.
kappa - scale sensitivity.
- Returns:
- automatically generated
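A sketch of a typical EdgeBoxes pipeline. It assumes a pre-trained structured edge model file (placeholder path "model.yml.gz") and relies on detectEdges(), computeOrientation() and getBoundingBoxes() from the StructuredEdgeDetection and EdgeBoxes classes, which are documented there rather than here:

```java
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.imgproc.Imgproc;
import org.opencv.ximgproc.EdgeBoxes;
import org.opencv.ximgproc.StructuredEdgeDetection;
import org.opencv.ximgproc.Ximgproc;

class EdgeBoxesSketch {
    static MatOfRect propose(Mat bgr) {
        // StructuredEdgeDetection expects a float RGB image in [0, 1]
        Mat rgb = new Mat();
        Imgproc.cvtColor(bgr, rgb, Imgproc.COLOR_BGR2RGB);
        Mat rgbFloat = new Mat();
        rgb.convertTo(rgbFloat, CvType.CV_32FC3, 1.0 / 255.0);

        StructuredEdgeDetection sed = Ximgproc.createStructuredEdgeDetection("model.yml.gz");
        Mat edges = new Mat();
        Mat orientation = new Mat();
        sed.detectEdges(rgbFloat, edges);
        sed.computeOrientation(edges, orientation);

        EdgeBoxes eb = Ximgproc.createEdgeBoxes();   // default parameters
        MatOfRect boxes = new MatOfRect();
        eb.getBoundingBoxes(edges, orientation, boxes);
        return boxes;
    }
}
```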
-
createEdgeBoxes
public static EdgeBoxes createEdgeBoxes(float alpha, float beta, float eta, float minScore, int maxBoxes, float edgeMinMag, float edgeMergeThr, float clusterMinMag, float maxAspectRatio, float minBoxArea, float gamma)
Creates an EdgeBoxes object.
- Parameters:
alpha - step size of sliding window search.
beta - NMS threshold for object proposals.
eta - adaptation rate for NMS threshold.
minScore - min score of boxes to detect.
maxBoxes - max number of boxes to detect.
edgeMinMag - edge min magnitude. Increase to trade off accuracy for speed.
edgeMergeThr - edge merge threshold. Increase to trade off accuracy for speed.
clusterMinMag - cluster min magnitude. Increase to trade off accuracy for speed.
maxAspectRatio - max aspect ratio of boxes.
minBoxArea - minimum area of boxes.
gamma - affinity sensitivity.
- Returns:
- automatically generated
-
createEdgeBoxes
public static EdgeBoxes createEdgeBoxes(float alpha, float beta, float eta, float minScore, int maxBoxes, float edgeMinMag, float edgeMergeThr, float clusterMinMag, float maxAspectRatio, float minBoxArea)
Creates an EdgeBoxes object.
- Parameters:
alpha - step size of sliding window search.
beta - NMS threshold for object proposals.
eta - adaptation rate for NMS threshold.
minScore - min score of boxes to detect.
maxBoxes - max number of boxes to detect.
edgeMinMag - edge min magnitude. Increase to trade off accuracy for speed.
edgeMergeThr - edge merge threshold. Increase to trade off accuracy for speed.
clusterMinMag - cluster min magnitude. Increase to trade off accuracy for speed.
maxAspectRatio - max aspect ratio of boxes.
minBoxArea - minimum area of boxes.
- Returns:
- automatically generated
-
createEdgeBoxes
public static EdgeBoxes createEdgeBoxes(float alpha, float beta, float eta, float minScore, int maxBoxes, float edgeMinMag, float edgeMergeThr, float clusterMinMag, float maxAspectRatio)
Creates an EdgeBoxes object.
- Parameters:
alpha - step size of sliding window search.
beta - NMS threshold for object proposals.
eta - adaptation rate for NMS threshold.
minScore - min score of boxes to detect.
maxBoxes - max number of boxes to detect.
edgeMinMag - edge min magnitude. Increase to trade off accuracy for speed.
edgeMergeThr - edge merge threshold. Increase to trade off accuracy for speed.
clusterMinMag - cluster min magnitude. Increase to trade off accuracy for speed.
maxAspectRatio - max aspect ratio of boxes.
- Returns:
- automatically generated
-
createEdgeBoxes
public static EdgeBoxes createEdgeBoxes(float alpha, float beta, float eta, float minScore, int maxBoxes, float edgeMinMag, float edgeMergeThr, float clusterMinMag)
Creates an EdgeBoxes object.
- Parameters:
alpha - step size of sliding window search.
beta - NMS threshold for object proposals.
eta - adaptation rate for NMS threshold.
minScore - min score of boxes to detect.
maxBoxes - max number of boxes to detect.
edgeMinMag - edge min magnitude. Increase to trade off accuracy for speed.
edgeMergeThr - edge merge threshold. Increase to trade off accuracy for speed.
clusterMinMag - cluster min magnitude. Increase to trade off accuracy for speed.
- Returns:
- automatically generated
-
createEdgeBoxes
public static EdgeBoxes createEdgeBoxes(float alpha, float beta, float eta, float minScore, int maxBoxes, float edgeMinMag, float edgeMergeThr)
Creates an EdgeBoxes object.
- Parameters:
alpha - step size of sliding window search.
beta - NMS threshold for object proposals.
eta - adaptation rate for NMS threshold.
minScore - min score of boxes to detect.
maxBoxes - max number of boxes to detect.
edgeMinMag - edge min magnitude. Increase to trade off accuracy for speed.
edgeMergeThr - edge merge threshold. Increase to trade off accuracy for speed.
- Returns:
- automatically generated
-
createEdgeBoxes
public static EdgeBoxes createEdgeBoxes(float alpha, float beta, float eta, float minScore, int maxBoxes, float edgeMinMag)
Creates an EdgeBoxes object.
- Parameters:
alpha - step size of sliding window search.
beta - NMS threshold for object proposals.
eta - adaptation rate for NMS threshold.
minScore - min score of boxes to detect.
maxBoxes - max number of boxes to detect.
edgeMinMag - edge min magnitude. Increase to trade off accuracy for speed.
- Returns:
- automatically generated
-
createEdgeBoxes
public static EdgeBoxes createEdgeBoxes(float alpha, float beta, float eta, float minScore, int maxBoxes)
Creates an EdgeBoxes object.
- Parameters:
alpha - step size of sliding window search.
beta - NMS threshold for object proposals.
eta - adaptation rate for NMS threshold.
minScore - min score of boxes to detect.
maxBoxes - max number of boxes to detect.
- Returns:
- automatically generated
-
createEdgeBoxes
public static EdgeBoxes createEdgeBoxes(float alpha, float beta, float eta, float minScore)
Creates an EdgeBoxes object.
- Parameters:
alpha - step size of sliding window search.
beta - NMS threshold for object proposals.
eta - adaptation rate for NMS threshold.
minScore - min score of boxes to detect.
- Returns:
- automatically generated
-
createEdgeBoxes
public static EdgeBoxes createEdgeBoxes(float alpha, float beta, float eta)
Creates an EdgeBoxes object.
- Parameters:
alpha - step size of sliding window search.
beta - NMS threshold for object proposals.
eta - adaptation rate for NMS threshold.
- Returns:
- automatically generated
-
createEdgeBoxes
public static EdgeBoxes createEdgeBoxes(float alpha, float beta)
Creates an EdgeBoxes object.
- Parameters:
alpha - step size of sliding window search.
beta - NMS threshold for object proposals.
- Returns:
- automatically generated
-
createEdgeBoxes
public static EdgeBoxes createEdgeBoxes(float alpha)
Creates an EdgeBoxes object.
- Parameters:
alpha - step size of sliding window search.
- Returns:
- automatically generated
-
createEdgeBoxes
public static EdgeBoxes createEdgeBoxes()
Creates an EdgeBoxes object.
- Returns:
- automatically generated
-
covarianceEstimation
public static void covarianceEstimation(Mat src, Mat dst, int windowRows, int windowCols)
Computes the estimated covariance matrix of an image using the sliding window formulation.
- Parameters:
src - The source image. Input image must be of a complex type.
dst - The destination estimated covariance matrix. Output matrix will be of size (windowRows*windowCols, windowRows*windowCols).
windowRows - The number of rows in the window.
windowCols - The number of cols in the window.
The window size parameters control the accuracy of the estimation. The sliding window moves over the entire image from the top-left corner to the bottom-right corner. Each location of the window represents a sample. If the window is the size of the image, then this gives the exact covariance matrix. For all other cases, the sizes of the window will impact the number of samples and the number of elements in the estimated covariance matrix.
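A sketch showing how a single-channel image can be packed into the complex (two-channel float) input the function expects; the 7x7 window is an illustrative choice:

```java
import java.util.Arrays;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.ximgproc.Ximgproc;

class CovarianceSketch {
    static Mat estimate(Mat gray) {
        Mat real = new Mat();
        gray.convertTo(real, CvType.CV_32F);
        Mat imag = Mat.zeros(real.size(), CvType.CV_32F);   // zero imaginary plane
        Mat complexImage = new Mat();
        Core.merge(Arrays.asList(real, imag), complexImage);

        Mat cov = new Mat();
        Ximgproc.covarianceEstimation(complexImage, cov, 7, 7);   // 7x7 sliding window
        return cov;   // (49 x 49) estimated covariance matrix
    }
}
```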
-
createEdgeAwareInterpolator
public static EdgeAwareInterpolator createEdgeAwareInterpolator()
Factory method that creates an instance of the EdgeAwareInterpolator.
- Returns:
- automatically generated
-
createDisparityWLSFilter
public static DisparityWLSFilter createDisparityWLSFilter(StereoMatcher matcher_left)
Convenience factory method that creates an instance of DisparityWLSFilter and sets up all the relevant filter parameters automatically based on the matcher instance. Currently supports only StereoBM and StereoSGBM.
- Parameters:
matcher_left - stereo matcher instance that will be used with the filter
- Returns:
- automatically generated
-
createRightMatcher
public static StereoMatcher createRightMatcher(StereoMatcher matcher_left)
Convenience method to set up the matcher for computing the right-view disparity map that is required in case of filtering with confidence.
- Parameters:
matcher_left - main stereo matcher instance that will be used with the filter
- Returns:
- automatically generated
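A sketch of the confidence-filtering pipeline built from createDisparityWLSFilter and createRightMatcher. The StereoBM settings and the lambda/sigma values are illustrative, and the filter(), setLambda() and setSigmaColor() calls are assumed from the DisparityWLSFilter class:

```java
import org.opencv.calib3d.StereoBM;
import org.opencv.calib3d.StereoMatcher;
import org.opencv.core.Mat;
import org.opencv.ximgproc.DisparityWLSFilter;
import org.opencv.ximgproc.Ximgproc;

class WlsSketch {
    // "left" and "right" are assumed to be rectified 8-bit grayscale views.
    static Mat filteredDisparity(Mat left, Mat right) {
        StereoBM matcherLeft = StereoBM.create(64, 15);
        DisparityWLSFilter wls = Ximgproc.createDisparityWLSFilter(matcherLeft);
        StereoMatcher matcherRight = Ximgproc.createRightMatcher(matcherLeft);

        Mat dispLeft = new Mat();
        Mat dispRight = new Mat();
        matcherLeft.compute(left, right, dispLeft);
        matcherRight.compute(right, left, dispRight);

        wls.setLambda(8000.0);     // smoothing strength
        wls.setSigmaColor(1.5);    // edge sensitivity
        Mat filtered = new Mat();
        wls.filter(dispLeft, left, filtered, dispRight);
        return filtered;
    }
}
```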
-
createDisparityWLSFilterGeneric
public static DisparityWLSFilter createDisparityWLSFilterGeneric(boolean use_confidence)
More generic factory method that creates an instance of DisparityWLSFilter and executes basic initialization routines. When using this method you will need to set up the ROI, matchers and other parameters by yourself.
- Parameters:
use_confidence - filtering with confidence requires two disparity maps (for the left and right views) and is approximately two times slower. However, quality is typically significantly better.
- Returns:
- automatically generated
-
readGT
public static int readGT(java.lang.String src_path, Mat dst)
Function for reading ground truth disparity maps. Supports basic Middlebury and MPI-Sintel formats. Note that the resulting disparity map is scaled by 16.
- Parameters:
src_path - path to the image, containing ground-truth disparity map
dst - output disparity map, CV_16S depth
- Returns:
- returns zero if successfully read the ground truth
-
computeMSE
public static double computeMSE(Mat GT, Mat src, Rect ROI)
Function for computing mean square error for disparity maps.
- Parameters:
GT - ground truth disparity map
src - disparity map to evaluate
ROI - region of interest
- Returns:
- returns mean square error between GT and src
-
computeBadPixelPercent
public static double computeBadPixelPercent(Mat GT, Mat src, Rect ROI, int thresh)
Function for computing the percent of "bad" pixels in the disparity map (pixels where the error is higher than a specified threshold).
- Parameters:
GT - ground truth disparity map
src - disparity map to evaluate
ROI - region of interest
thresh - threshold used to determine "bad" pixels
- Returns:
- returns the percent of "bad" pixels between GT and src
-
computeBadPixelPercent
public static double computeBadPixelPercent(Mat GT, Mat src, Rect ROI)
Function for computing the percent of "bad" pixels in the disparity map (pixels where the error is higher than a specified threshold).
- Parameters:
GT - ground truth disparity map
src - disparity map to evaluate
ROI - region of interest
- Returns:
- returns the percent of "bad" pixels between GT and src
-
getDisparityVis
public static void getDisparityVis(Mat src, Mat dst, double scale)
Function for creating a disparity map visualization (clamped CV_8U image).
- Parameters:
src - input disparity map (CV_16S depth)
dst - output visualization
scale - disparity map will be multiplied by this value for visualization
-
getDisparityVis
public static void getDisparityVis(Mat src, Mat dst)
Function for creating a disparity map visualization (clamped CV_8U image).
- Parameters:
src - input disparity map (CV_16S depth)
dst - output visualization
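A sketch combining readGT, computeMSE, computeBadPixelPercent and getDisparityVis; the ground-truth path and the threshold of 24 (1.5 pixels times the scale factor of 16) are assumptions:

```java
import org.opencv.core.Mat;
import org.opencv.core.Rect;
import org.opencv.ximgproc.Ximgproc;

class DisparityEvalSketch {
    // "gtPath" is a placeholder path; "disparity" is a CV_16S map from a stereo matcher.
    static void evaluate(String gtPath, Mat disparity) {
        Mat gt = new Mat();
        if (Ximgproc.readGT(gtPath, gt) != 0) {
            throw new IllegalStateException("could not read ground truth: " + gtPath);
        }
        Rect roi = new Rect(0, 0, disparity.cols(), disparity.rows());
        double mse = Ximgproc.computeMSE(gt, disparity, roi);
        double badPercent = Ximgproc.computeBadPixelPercent(gt, disparity, roi, 24);
        System.out.println("MSE = " + mse + ", bad pixels = " + badPercent + "%");

        Mat vis = new Mat();
        Ximgproc.getDisparityVis(disparity, vis, 1.0);   // clamped CV_8U visualization
    }
}
```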
-
createSuperpixelLSC
public static SuperpixelLSC createSuperpixelLSC(Mat image, int region_size, float ratio)
Class implementing the LSC (Linear Spectral Clustering) superpixels.
- Parameters:
image - Image to segment.
region_size - Chooses an average superpixel size measured in pixels.
ratio - Chooses the enforcement of the superpixel compactness factor.
The function initializes a SuperpixelLSC object for the input image. It sets the parameters of the superpixel algorithm, which are: region_size and ratio. It preallocates some buffers for future computing iterations over the given image. An example of LSC is illustrated in the following picture. For enhanced results it is recommended for color images to preprocess the image with a little Gaussian blur with a small 3 x 3 kernel and an additional conversion into the CIELAB color space. ![image](pics/superpixels_lsc.png)
- Returns:
- automatically generated
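A sketch analogous to the SLIC example; iterate() and getLabels() are assumed from the SuperpixelLSC class and the parameter values are illustrative:

```java
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;
import org.opencv.ximgproc.SuperpixelLSC;
import org.opencv.ximgproc.Ximgproc;

class LscSketch {
    // "bgr" is assumed to be an 8-bit BGR color image.
    static Mat superpixelLabels(Mat bgr) {
        Mat lab = new Mat();
        Imgproc.cvtColor(bgr, lab, Imgproc.COLOR_BGR2Lab);   // recommended CIELAB conversion
        SuperpixelLSC lsc = Ximgproc.createSuperpixelLSC(lab, 20, 0.075f);
        lsc.iterate(10);            // optimization iterations
        Mat labels = new Mat();
        lsc.getLabels(labels);      // per-pixel superpixel labels (CV_32S)
        return labels;
    }
}
```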
-
createSuperpixelLSC
public static SuperpixelLSC createSuperpixelLSC(Mat image, int region_size)
Class implementing the LSC (Linear Spectral Clustering) superpixels.
- Parameters:
image - Image to segment.
region_size - Chooses an average superpixel size measured in pixels.
The function initializes a SuperpixelLSC object for the input image. It sets the parameters of the superpixel algorithm, which are: region_size and ratio. It preallocates some buffers for future computing iterations over the given image. An example of LSC is illustrated in the following picture. For enhanced results it is recommended for color images to preprocess the image with a little Gaussian blur with a small 3 x 3 kernel and an additional conversion into the CIELAB color space. ![image](pics/superpixels_lsc.png)
- Returns:
- automatically generated
-
createSuperpixelLSC
public static SuperpixelLSC createSuperpixelLSC(Mat image)
Class implementing the LSC (Linear Spectral Clustering) superpixels- Parameters:
image
- Image to segment. The function initializes a SuperpixelLSC object for the input image. It sets the parameters of the superpixel algorithm, which are: region_size and ratio. It preallocates some buffers for future computing iterations over the given image. An example of LSC is illustrated in the following picture. For enhanced results on color images it is recommended to preprocess the image with a little Gaussian blur using a small 3 x 3 kernel and to additionally convert it into CieLAB color space. ![image](pics/superpixels_lsc.png)- Returns:
- automatically generated
-
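The sketch below shows one plausible way to use the returned object; the iterate, getLabels, getLabelContourMask and getNumberOfSuperpixels calls belong to the SuperpixelLSC class (documented separately), and the file name and parameter values are illustrative.
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import org.opencv.ximgproc.SuperpixelLSC;
import org.opencv.ximgproc.Ximgproc;

public class SuperpixelLSCExample {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat image = Imgcodecs.imread("input.png");  // hypothetical file name
        // Recommended preprocessing: small Gaussian blur and conversion to CIE Lab.
        Imgproc.GaussianBlur(image, image, new Size(3, 3), 0);
        Mat lab = new Mat();
        Imgproc.cvtColor(image, lab, Imgproc.COLOR_BGR2Lab);
        SuperpixelLSC lsc = Ximgproc.createSuperpixelLSC(lab, 20, 0.075f);
        lsc.iterate(10);                       // run 10 iterations of the algorithm
        Mat labels = new Mat();
        lsc.getLabels(labels);                 // CV_32S label per pixel
        Mat contourMask = new Mat();
        lsc.getLabelContourMask(contourMask, true);
        System.out.println("superpixels: " + lsc.getNumberOfSuperpixels());
    }
}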
fourierDescriptor
public static void fourierDescriptor(Mat src, Mat dst, int nbElt, int nbFD)
Fourier descriptors for planar closed curves. For more details about this implementation, please see CITE: PersoonFu1977- Parameters:
src
- automatically generateddst
- automatically generatednbElt
- automatically generatednbFD
- automatically generated
-
fourierDescriptor
public static void fourierDescriptor(Mat src, Mat dst, int nbElt)
Fourier descriptors for planar closed curves. For more details about this implementation, please see CITE: PersoonFu1977- Parameters:
src
- automatically generateddst
- automatically generatednbElt
- automatically generated
-
fourierDescriptor
public static void fourierDescriptor(Mat src, Mat dst)
Fourier descriptors for planar closed curves. For more details about this implementation, please see CITE: PersoonFu1977- Parameters:
src
- automatically generateddst
- automatically generated
-
transformFD
public static void transformFD(Mat src, Mat t, Mat dst, boolean fdContour)
transform a contour- Parameters:
src
- automatically generatedt
- automatically generateddst
- automatically generatedfdContour
- automatically generated
-
transformFD
public static void transformFD(Mat src, Mat t, Mat dst)
transform a contour- Parameters:
src
- automatically generatedt
- automatically generateddst
- automatically generated
-
contourSampling
public static void contourSampling(Mat src, Mat out, int nbElt)
Contour sampling.- Parameters:
src
- automatically generatedout
- automatically generatednbElt
- automatically generated
-
createContourFitting
public static ContourFitting createContourFitting(int ctr, int fd)
create ContourFitting algorithm object- Parameters:
ctr
- number of contour points after resampling.fd
- number of Fourier descriptors.- Returns:
- automatically generated
-
createContourFitting
public static ContourFitting createContourFitting(int ctr)
create ContourFitting algorithm object- Parameters:
ctr
- number of contour points after resampling.- Returns:
- automatically generated
-
createContourFitting
public static ContourFitting createContourFitting()
create ContourFitting algorithm object- Returns:
- automatically generated
-
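A hedged sketch of the contour pipeline around these functions: a contour is extracted with Imgproc.findContours, resampled with contourSampling and converted to Fourier descriptors with fourierDescriptor. The file name, threshold and descriptor counts are assumptions, and the binary image is assumed to contain at least one closed contour.
import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import org.opencv.ximgproc.Ximgproc;

public class FourierDescriptorExample {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat gray = Imgcodecs.imread("shape.png", Imgcodecs.IMREAD_GRAYSCALE);  // hypothetical file
        Mat bin = new Mat();
        Imgproc.threshold(gray, bin, 128, 255, Imgproc.THRESH_BINARY);
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(bin, contours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_NONE);
        Mat contour = contours.get(0);   // first closed contour (2-channel point Mat)
        Mat sampled = new Mat();
        Ximgproc.contourSampling(contour, sampled, 256);   // resample the contour to 256 points
        Mat fd = new Mat();
        Ximgproc.fourierDescriptor(sampled, fd, 256, 16);  // keep 16 Fourier descriptors
        System.out.println("descriptor size: " + fd.size());
    }
}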
createGraphSegmentation
public static GraphSegmentation createGraphSegmentation(double sigma, float k, int min_size)
Creates a graph based segmentor- Parameters:
sigma
- The sigma parameter, used to smooth imagek
- The k parameter of the algorithmmin_size
- The minimum size of segments- Returns:
- automatically generated
-
createGraphSegmentation
public static GraphSegmentation createGraphSegmentation(double sigma, float k)
Creates a graph based segmentor- Parameters:
sigma
- The sigma parameter, used to smooth imagek
- The k parameter of the algorithm- Returns:
- automatically generated
-
createGraphSegmentation
public static GraphSegmentation createGraphSegmentation(double sigma)
Creates a graph based segmentor- Parameters:
sigma
- The sigma parameter, used to smooth image- Returns:
- automatically generated
-
createGraphSegmentation
public static GraphSegmentation createGraphSegmentation()
Creates a graph based segmentor- Returns:
- automatically generated
-
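A short sketch of using the returned segmenter; processImage is a method of the GraphSegmentation class (documented separately) that writes one integer segment id per pixel, and the file name and parameter values are illustrative.
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.ximgproc.GraphSegmentation;
import org.opencv.ximgproc.Ximgproc;

public class GraphSegmentationExample {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat image = Imgcodecs.imread("input.png");  // hypothetical file name
        GraphSegmentation gs = Ximgproc.createGraphSegmentation(0.5, 300f, 100);
        Mat labels = new Mat();
        gs.processImage(image, labels);             // one integer segment id per pixel
        Core.MinMaxLocResult mm = Core.minMaxLoc(labels);
        System.out.println("segments (approx.): " + (int) (mm.maxVal + 1));
    }
}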
createSelectiveSearchSegmentationStrategyColor
public static SelectiveSearchSegmentationStrategyColor createSelectiveSearchSegmentationStrategyColor()
Create a new color-based strategy- Returns:
- automatically generated
-
createSelectiveSearchSegmentationStrategySize
public static SelectiveSearchSegmentationStrategySize createSelectiveSearchSegmentationStrategySize()
Create a new size-based strategy- Returns:
- automatically generated
-
createSelectiveSearchSegmentationStrategyTexture
public static SelectiveSearchSegmentationStrategyTexture createSelectiveSearchSegmentationStrategyTexture()
Create a new texture-based strategy- Returns:
- automatically generated
-
createSelectiveSearchSegmentationStrategyFill
public static SelectiveSearchSegmentationStrategyFill createSelectiveSearchSegmentationStrategyFill()
Create a new fill-based strategy- Returns:
- automatically generated
-
createSelectiveSearchSegmentationStrategyMultiple
public static SelectiveSearchSegmentationStrategyMultiple createSelectiveSearchSegmentationStrategyMultiple()
Create a new multiple strategy- Returns:
- automatically generated
-
createSelectiveSearchSegmentationStrategyMultiple
public static SelectiveSearchSegmentationStrategyMultiple createSelectiveSearchSegmentationStrategyMultiple(SelectiveSearchSegmentationStrategy s1)
Create a new multiple strategy and set one substrategy- Parameters:
s1
- The first strategy- Returns:
- automatically generated
-
createSelectiveSearchSegmentationStrategyMultiple
public static SelectiveSearchSegmentationStrategyMultiple createSelectiveSearchSegmentationStrategyMultiple(SelectiveSearchSegmentationStrategy s1, SelectiveSearchSegmentationStrategy s2)
Create a new multiple strategy and set two substrategies, with equal weights- Parameters:
s1
- The first strategys2
- The second strategy- Returns:
- automatically generated
-
createSelectiveSearchSegmentationStrategyMultiple
public static SelectiveSearchSegmentationStrategyMultiple createSelectiveSearchSegmentationStrategyMultiple(SelectiveSearchSegmentationStrategy s1, SelectiveSearchSegmentationStrategy s2, SelectiveSearchSegmentationStrategy s3)
Create a new multiple strategy and set three substrategies, with equal weights- Parameters:
s1
- The first strategys2
- The second strategys3
- The third strategy- Returns:
- automatically generated
-
createSelectiveSearchSegmentationStrategyMultiple
public static SelectiveSearchSegmentationStrategyMultiple createSelectiveSearchSegmentationStrategyMultiple(SelectiveSearchSegmentationStrategy s1, SelectiveSearchSegmentationStrategy s2, SelectiveSearchSegmentationStrategy s3, SelectiveSearchSegmentationStrategy s4)
Create a new multiple strategy and set four substrategies, with equal weights- Parameters:
s1
- The first strategys2
- The second strategys3
- The third strategys4
- The fourth strategy- Returns:
- automatically generated
-
createSelectiveSearchSegmentation
public static SelectiveSearchSegmentation createSelectiveSearchSegmentation()
Create a new SelectiveSearchSegmentation class.- Returns:
- automatically generated
-
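A sketch of a typical selective-search run; setBaseImage, switchToSelectiveSearchFast and process are methods of the SelectiveSearchSegmentation class (documented separately), and the file name is a placeholder.
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.ximgproc.SelectiveSearchSegmentation;
import org.opencv.ximgproc.Ximgproc;

public class SelectiveSearchExample {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat image = Imgcodecs.imread("input.png");  // hypothetical file name
        SelectiveSearchSegmentation ss = Ximgproc.createSelectiveSearchSegmentation();
        ss.setBaseImage(image);
        ss.switchToSelectiveSearchFast();            // preset combining several strategies
        MatOfRect proposals = new MatOfRect();
        ss.process(proposals);                       // candidate object rectangles
        Rect[] rects = proposals.toArray();
        System.out.println("proposals: " + rects.length);
    }
}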
createDTFilter
public static DTFilter createDTFilter(Mat guide, double sigmaSpatial, double sigmaColor, int mode, int numIters)
Factory method, create instance of DTFilter and produce initialization routines.- Parameters:
guide
- guided image (used to build transformed distance, which describes edge structure of guided image).sigmaSpatial
- \({\sigma}_H\) parameter in the original article, it's similar to the sigma in the coordinate space into bilateralFilter.sigmaColor
- \({\sigma}_r\) parameter in the original article, it's similar to the sigma in the color space into bilateralFilter.mode
- one of three modes DTF_NC, DTF_RF and DTF_IC, which correspond to the three modes for filtering 2D signals in the article.numIters
- optional number of iterations used for filtering, 3 is quite enough. For more details about Domain Transform filter parameters, see the original article CITE: Gastal11 and [Domain Transform filter homepage](http://www.inf.ufrgs.br/~eslgastal/DomainTransform/).- Returns:
- automatically generated
-
createDTFilter
public static DTFilter createDTFilter(Mat guide, double sigmaSpatial, double sigmaColor, int mode)
Factory method, create instance of DTFilter and produce initialization routines.- Parameters:
guide
- guided image (used to build transformed distance, which describes edge structure of guided image).sigmaSpatial
- \({\sigma}_H\) parameter in the original article, it's similar to the sigma in the coordinate space into bilateralFilter.sigmaColor
- \({\sigma}_r\) parameter in the original article, it's similar to the sigma in the color space into bilateralFilter.mode
- one of three modes DTF_NC, DTF_RF and DTF_IC, which correspond to the three modes for filtering 2D signals in the article. For more details about Domain Transform filter parameters, see the original article CITE: Gastal11 and [Domain Transform filter homepage](http://www.inf.ufrgs.br/~eslgastal/DomainTransform/).- Returns:
- automatically generated
-
createDTFilter
public static DTFilter createDTFilter(Mat guide, double sigmaSpatial, double sigmaColor)
Factory method, create instance of DTFilter and produce initialization routines.- Parameters:
guide
- guided image (used to build transformed distance, which describes edge structure of guided image).sigmaSpatial
- \({\sigma}_H\) parameter in the original article, it's similar to the sigma in the coordinate space into bilateralFilter.sigmaColor
- \({\sigma}_r\) parameter in the original article, it's similar to the sigma in the color space into bilateralFilter. For more details about Domain Transform filter parameters, see the original article CITE: Gastal11 and [Domain Transform filter homepage](http://www.inf.ufrgs.br/~eslgastal/DomainTransform/).- Returns:
- automatically generated
-
dtFilter
public static void dtFilter(Mat guide, Mat src, Mat dst, double sigmaSpatial, double sigmaColor, int mode, int numIters)
Simple one-line Domain Transform filter call. If you have multiple images to filter with the same guided image then use DTFilter interface to avoid extra computations on initialization stage.- Parameters:
guide
- guided image (also called joint image) with unsigned 8-bit or floating-point 32-bit depth and up to 4 channels.src
- filtering image with unsigned 8-bit or floating-point 32-bit depth and up to 4 channels.dst
- destination imagesigmaSpatial
- \({\sigma}_H\) parameter in the original article, it's similar to the sigma in the coordinate space into bilateralFilter.sigmaColor
- \({\sigma}_r\) parameter in the original article, it's similar to the sigma in the color space into bilateralFilter.mode
- one of three modes DTF_NC, DTF_RF and DTF_IC, which correspond to the three modes for filtering 2D signals in the article.numIters
- optional number of iterations used for filtering, 3 is quite enough. SEE: bilateralFilter, guidedFilter, amFilter
-
dtFilter
public static void dtFilter(Mat guide, Mat src, Mat dst, double sigmaSpatial, double sigmaColor, int mode)
Simple one-line Domain Transform filter call. If you have multiple images to filter with the same guided image then use DTFilter interface to avoid extra computations on initialization stage.- Parameters:
guide
- guided image (also called joint image) with unsigned 8-bit or floating-point 32-bit depth and up to 4 channels.src
- filtering image with unsigned 8-bit or floating-point 32-bit depth and up to 4 channels.dst
- destination imagesigmaSpatial
- \({\sigma}_H\) parameter in the original article, it's similar to the sigma in the coordinate space into bilateralFilter.sigmaColor
- \({\sigma}_r\) parameter in the original article, it's similar to the sigma in the color space into bilateralFilter.mode
- one of three modes DTF_NC, DTF_RF and DTF_IC, which correspond to the three modes for filtering 2D signals in the article. SEE: bilateralFilter, guidedFilter, amFilter
-
dtFilter
public static void dtFilter(Mat guide, Mat src, Mat dst, double sigmaSpatial, double sigmaColor)
Simple one-line Domain Transform filter call. If you have multiple images to filter with the same guided image then use DTFilter interface to avoid extra computations on initialization stage.- Parameters:
guide
- guided image (also called joint image) with unsigned 8-bit or floating-point 32-bit depth and up to 4 channels.src
- filtering image with unsigned 8-bit or floating-point 32-bit depth and up to 4 channels.dst
- destination imagesigmaSpatial
- \({\sigma}_H\) parameter in the original article, it's similar to the sigma in the coordinate space into bilateralFilter.sigmaColor
- \({\sigma}_r\) parameter in the original article, it's similar to the sigma in the color space into bilateralFilter. SEE: bilateralFilter, guidedFilter, amFilter
-
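A one-call sketch of dtFilter used for self-guided edge-preserving smoothing; the file names and sigma values are illustrative.
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.ximgproc.Ximgproc;

public class DtFilterExample {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat image = Imgcodecs.imread("input.png");  // hypothetical file name
        Mat filtered = new Mat();
        // Guide and source are the same image here: plain edge-preserving smoothing.
        Ximgproc.dtFilter(image, image, filtered, 10.0, 30.0, Ximgproc.DTF_NC, 3);
        Imgcodecs.imwrite("dtfilter_out.png", filtered);
    }
}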
createGuidedFilter
public static GuidedFilter createGuidedFilter(Mat guide, int radius, double eps)
Factory method, create instance of GuidedFilter and produce initialization routines.- Parameters:
guide
- guided image (or array of images) with up to 3 channels; if it has more than 3 channels, only the first 3 channels will be used.radius
- radius of Guided Filter.eps
- regularization term of Guided Filter. \({eps}^2\) is similar to the sigma in the color space into bilateralFilter. For more details about Guided Filter parameters, see the original article CITE: Kaiming10 .- Returns:
- automatically generated
-
guidedFilter
public static void guidedFilter(Mat guide, Mat src, Mat dst, int radius, double eps, int dDepth)
Simple one-line Guided Filter call. If you have multiple images to filter with the same guided image then use GuidedFilter interface to avoid extra computations on initialization stage.- Parameters:
guide
- guided image (or array of images) with up to 3 channels; if it has more than 3 channels, only the first 3 channels will be used.src
- filtering image with any number of channels.dst
- output image.radius
- radius of Guided Filter.eps
- regularization term of Guided Filter. \({eps}^2\) is similar to the sigma in the color space into bilateralFilter.dDepth
- optional depth of the output image. SEE: bilateralFilter, dtFilter, amFilter
-
guidedFilter
public static void guidedFilter(Mat guide, Mat src, Mat dst, int radius, double eps)
Simple one-line Guided Filter call. If you have multiple images to filter with the same guided image then use GuidedFilter interface to avoid extra computations on initialization stage.- Parameters:
guide
- guided image (or array of images) with up to 3 channels; if it has more than 3 channels, only the first 3 channels will be used.src
- filtering image with any number of channels.dst
- output image.radius
- radius of Guided Filter.eps
- regularization term of Guided Filter. \({eps}^2\) is similar to the sigma in the color space into bilateralFilter. SEE: bilateralFilter, dtFilter, amFilter
-
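A one-call sketch of guidedFilter in self-guided mode; the file names, radius and eps value are illustrative choices, not prescribed defaults.
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.ximgproc.Ximgproc;

public class GuidedFilterExample {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat image = Imgcodecs.imread("input.png");  // hypothetical file name
        Mat smoothed = new Mat();
        // Self-guided filtering; radius 8, regularization eps chosen empirically.
        Ximgproc.guidedFilter(image, image, smoothed, 8, 100.0);
        Imgcodecs.imwrite("guided_out.png", smoothed);
    }
}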
createAMFilter
public static AdaptiveManifoldFilter createAMFilter(double sigma_s, double sigma_r, boolean adjust_outliers)
Factory method, create instance of AdaptiveManifoldFilter and produce some initialization routines.- Parameters:
sigma_s
- spatial standard deviation.sigma_r
- color space standard deviation, it is similar to the sigma in the color space into bilateralFilter.adjust_outliers
- optional flag that specifies whether to perform the outlier adjustment operation (Eq. 9 in the original paper). For more details about Adaptive Manifold Filter parameters, see the original article CITE: Gastal12 . Note: Joint images with CV_8U and CV_16U depth are converted to images with CV_32F depth and [0; 1] color range before processing. Hence the color space sigma sigma_r must be in the [0; 1] range, unlike the same sigmas in bilateralFilter and dtFilter functions.- Returns:
- automatically generated
-
createAMFilter
public static AdaptiveManifoldFilter createAMFilter(double sigma_s, double sigma_r)
Factory method, create instance of AdaptiveManifoldFilter and produce some initialization routines.- Parameters:
sigma_s
- spatial standard deviation.sigma_r
- color space standard deviation, it is similar to the sigma in the color space into bilateralFilter. For more details about Adaptive Manifold Filter parameters, see the original article CITE: Gastal12 . Note: Joint images with CV_8U and CV_16U depth are converted to images with CV_32F depth and [0; 1] color range before processing. Hence the color space sigma sigma_r must be in the [0; 1] range, unlike the same sigmas in bilateralFilter and dtFilter functions.- Returns:
- automatically generated
-
amFilter
public static void amFilter(Mat joint, Mat src, Mat dst, double sigma_s, double sigma_r, boolean adjust_outliers)
Simple one-line Adaptive Manifold Filter call.- Parameters:
joint
- joint (also called guided) image or array of images with any number of channels.src
- filtering image with any number of channels.dst
- output image.sigma_s
- spatial standard deviation.sigma_r
- color space standard deviation, it is similar to the sigma in the color space into bilateralFilter.adjust_outliers
- optional flag that specifies whether to perform the outlier adjustment operation (Eq. 9 in the original paper). Note: Joint images with CV_8U and CV_16U depth are converted to images with CV_32F depth and [0; 1] color range before processing. Hence the color space sigma sigma_r must be in the [0; 1] range, unlike the same sigmas in bilateralFilter and dtFilter functions. SEE: bilateralFilter, dtFilter, guidedFilter
-
amFilter
public static void amFilter(Mat joint, Mat src, Mat dst, double sigma_s, double sigma_r)
Simple one-line Adaptive Manifold Filter call.- Parameters:
joint
- joint (also called guided) image or array of images with any number of channels.src
- filtering image with any number of channels.dst
- output image.sigma_s
- spatial standard deviation.sigma_r
- color space standard deviation, it is similar to the sigma in the color space into bilateralFilter. Note: Joint images with CV_8U and CV_16U depth are converted to images with CV_32F depth and [0; 1] color range before processing. Hence the color space sigma sigma_r must be in the [0; 1] range, unlike the same sigmas in bilateralFilter and dtFilter functions. SEE: bilateralFilter, dtFilter, guidedFilter
-
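A one-call sketch of amFilter; note that sigma_r is chosen inside [0; 1] as required, and the file names and sigma values are illustrative.
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.ximgproc.Ximgproc;

public class AmFilterExample {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat image = Imgcodecs.imread("input.png");  // hypothetical file name
        Mat filtered = new Mat();
        // sigma_r must lie in [0; 1] because 8-bit inputs are internally rescaled to [0; 1].
        Ximgproc.amFilter(image, image, filtered, 16.0, 0.2);
        Imgcodecs.imwrite("amfilter_out.png", filtered);
    }
}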
jointBilateralFilter
public static void jointBilateralFilter(Mat joint, Mat src, Mat dst, int d, double sigmaColor, double sigmaSpace, int borderType)
Applies the joint bilateral filter to an image.- Parameters:
joint
- Joint 8-bit or floating-point, 1-channel or 3-channel image.src
- Source 8-bit or floating-point, 1-channel or 3-channel image with the same depth as joint image.dst
- Destination image of the same size and type as src .d
- Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace .sigmaColor
- Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace ) will be mixed together, resulting in larger areas of semi-equal color.sigmaSpace
- Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor ). When d>0 , it specifies the neighborhood size regardless of sigmaSpace . Otherwise, d is proportional to sigmaSpace .borderType
- Note: bilateralFilter and jointBilateralFilter use L1 norm to compute difference between colors. SEE: bilateralFilter, amFilter
-
jointBilateralFilter
public static void jointBilateralFilter(Mat joint, Mat src, Mat dst, int d, double sigmaColor, double sigmaSpace)
Applies the joint bilateral filter to an image.- Parameters:
joint
- Joint 8-bit or floating-point, 1-channel or 3-channel image.src
- Source 8-bit or floating-point, 1-channel or 3-channel image with the same depth as joint image.dst
- Destination image of the same size and type as src .d
- Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace .sigmaColor
- Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace ) will be mixed together, resulting in larger areas of semi-equal color.sigmaSpace
- Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor ). When d>0 , it specifies the neighborhood size regardless of sigmaSpace . Otherwise, d is proportional to sigmaSpace . Note: bilateralFilter and jointBilateralFilter use L1 norm to compute difference between colors. SEE: bilateralFilter, amFilter
-
bilateralTextureFilter
public static void bilateralTextureFilter(Mat src, Mat dst, int fr, int numIter, double sigmaAlpha, double sigmaAvg)
Applies the bilateral texture filter to an image. It performs structure-preserving texture filtering. For more details about this filter see CITE: Cho2014.- Parameters:
src
- Source image whose depth is 8-bit UINT or 32-bit FLOATdst
- Destination image of the same size and type as src.fr
- Radius of kernel to be used for filtering. It should be a positive integernumIter
- Number of iterations of the algorithm. It should be a positive integersigmaAlpha
- Controls the sharpness of the weight transition from edges to smooth/texture regions, where a bigger value means sharper transition. When the value is negative, it is automatically calculated.sigmaAvg
- Range blur parameter for texture blurring. A larger value makes the result more blurred. When the value is negative, it is automatically calculated as described in the paper. SEE: rollingGuidanceFilter, bilateralFilter
-
bilateralTextureFilter
public static void bilateralTextureFilter(Mat src, Mat dst, int fr, int numIter, double sigmaAlpha)
Applies the bilateral texture filter to an image. It performs structure-preserving texture filtering. For more details about this filter see CITE: Cho2014.- Parameters:
src
- Source image whose depth is 8-bit UINT or 32-bit FLOATdst
- Destination image of the same size and type as src.fr
- Radius of kernel to be used for filtering. It should be a positive integernumIter
- Number of iterations of the algorithm. It should be a positive integersigmaAlpha
- Controls the sharpness of the weight transition from edges to smooth/texture regions, where a bigger value means sharper transition. When the value is negative, it is automatically calculated. The omitted sigmaAvg is also calculated automatically, as described in the paper. SEE: rollingGuidanceFilter, bilateralFilter
-
bilateralTextureFilter
public static void bilateralTextureFilter(Mat src, Mat dst, int fr, int numIter)
Applies the bilateral texture filter to an image. It performs structure-preserving texture filtering. For more details about this filter see CITE: Cho2014.- Parameters:
src
- Source image whose depth is 8-bit UINT or 32-bit FLOATdst
- Destination image of the same size and type as src.fr
- Radius of kernel to be used for filtering. It should be a positive integernumIter
- Number of iterations of the algorithm. It should be a positive integer. The omitted sigmaAlpha and sigmaAvg are calculated automatically, as described in the paper. SEE: rollingGuidanceFilter, bilateralFilter
-
bilateralTextureFilter
public static void bilateralTextureFilter(Mat src, Mat dst, int fr)
Applies the bilateral texture filter to an image. It performs structure-preserving texture filtering. For more details about this filter see CITE: Cho2014.- Parameters:
src
- Source image whose depth is 8-bit UINT or 32-bit FLOATdst
- Destination image of the same size and type as src.fr
- Radius of kernel to be used for filtering. It should be a positive integer. The omitted sigmaAlpha and sigmaAvg are calculated automatically, as described in the paper. SEE: rollingGuidanceFilter, bilateralFilter
-
bilateralTextureFilter
public static void bilateralTextureFilter(Mat src, Mat dst)
Applies the bilateral texture filter to an image. It performs structure-preserving texture filtering. For more details about this filter see CITE: Cho2014.- Parameters:
src
- Source image whose depth is 8-bit UINT or 32-bit FLOATdst
- Destination image of the same size and type as src. The omitted sigmaAlpha and sigmaAvg are calculated automatically, as described in the paper. SEE: rollingGuidanceFilter, bilateralFilter
-
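A one-call sketch of bilateralTextureFilter with an explicit kernel radius and iteration count; the remaining sigmas are left to their automatic defaults, and the file names are placeholders.
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.ximgproc.Ximgproc;

public class BilateralTextureFilterExample {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat image = Imgcodecs.imread("input.png");  // hypothetical file name
        Mat filtered = new Mat();
        // fr = 3 (kernel radius), numIter = 1; sigmaAlpha and sigmaAvg keep their automatic values.
        Ximgproc.bilateralTextureFilter(image, filtered, 3, 1);
        Imgcodecs.imwrite("texture_filtered.png", filtered);
    }
}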
rollingGuidanceFilter
public static void rollingGuidanceFilter(Mat src, Mat dst, int d, double sigmaColor, double sigmaSpace, int numOfIter, int borderType)
Applies the rolling guidance filter to an image. For more details, please see CITE: zhang2014rolling- Parameters:
src
- Source 8-bit or floating-point, 1-channel or 3-channel image.dst
- Destination image of the same size and type as src.d
- Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace .sigmaColor
- Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace ) will be mixed together, resulting in larger areas of semi-equal color.sigmaSpace
- Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor ). When d>0 , it specifies the neighborhood size regardless of sigmaSpace . Otherwise, d is proportional to sigmaSpace .numOfIter
- Number of iterations of joint edge-preserving filtering applied on the source image.borderType
- Note: rollingGuidanceFilter uses jointBilateralFilter as the edge-preserving filter. SEE: jointBilateralFilter, bilateralFilter, amFilter
-
rollingGuidanceFilter
public static void rollingGuidanceFilter(Mat src, Mat dst, int d, double sigmaColor, double sigmaSpace, int numOfIter)
Applies the rolling guidance filter to an image. For more details, please see CITE: zhang2014rolling- Parameters:
src
- Source 8-bit or floating-point, 1-channel or 3-channel image.dst
- Destination image of the same size and type as src.d
- Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace .sigmaColor
- Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace ) will be mixed together, resulting in larger areas of semi-equal color.sigmaSpace
- Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor ). When d>0 , it specifies the neighborhood size regardless of sigmaSpace . Otherwise, d is proportional to sigmaSpace .numOfIter
- Number of iterations of joint edge-preserving filtering applied on the source image. Note: rollingGuidanceFilter uses jointBilateralFilter as the edge-preserving filter. SEE: jointBilateralFilter, bilateralFilter, amFilter
-
rollingGuidanceFilter
public static void rollingGuidanceFilter(Mat src, Mat dst, int d, double sigmaColor, double sigmaSpace)
Applies the rolling guidance filter to an image. For more details, please see CITE: zhang2014rolling- Parameters:
src
- Source 8-bit or floating-point, 1-channel or 3-channel image.dst
- Destination image of the same size and type as src.d
- Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace .sigmaColor
- Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace ) will be mixed together, resulting in larger areas of semi-equal color.sigmaSpace
- Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor ). When d>0 , it specifies the neighborhood size regardless of sigmaSpace . Otherwise, d is proportional to sigmaSpace . Note: rollingGuidanceFilter uses jointBilateralFilter as the edge-preserving filter. SEE: jointBilateralFilter, bilateralFilter, amFilter
-
rollingGuidanceFilter
public static void rollingGuidanceFilter(Mat src, Mat dst, int d, double sigmaColor)
Applies the rolling guidance filter to an image. For more details, please see CITE: zhang2014rolling- Parameters:
src
- Source 8-bit or floating-point, 1-channel or 3-channel image.dst
- Destination image of the same size and type as src.d
- Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace .sigmaColor
- Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace ) will be mixed together, resulting in larger areas of semi-equal color. Note: rollingGuidanceFilter uses jointBilateralFilter as the edge-preserving filter. SEE: jointBilateralFilter, bilateralFilter, amFilter
-
rollingGuidanceFilter
public static void rollingGuidanceFilter(Mat src, Mat dst, int d)
Applies the rolling guidance filter to an image. For more details, please see CITE: zhang2014rolling- Parameters:
src
- Source 8-bit or floating-point, 1-channel or 3-channel image.dst
- Destination image of the same size and type as src.d
- Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace . Note: rollingGuidanceFilter uses jointBilateralFilter as the edge-preserving filter. SEE: jointBilateralFilter, bilateralFilter, amFilter
-
rollingGuidanceFilter
public static void rollingGuidanceFilter(Mat src, Mat dst)
Applies the rolling guidance filter to an image. For more details, please see CITE: zhang2014rolling- Parameters:
src
- Source 8-bit or floating-point, 1-channel or 3-channel image.dst
- Destination image of the same size and type as src. Note: rollingGuidanceFilter uses jointBilateralFilter as the edge-preserving filter. SEE: jointBilateralFilter, bilateralFilter, amFilter
-
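A one-call sketch of rollingGuidanceFilter; the parameter values mirror commonly used ones (d = -1 derives the kernel size from sigmaSpace) and the file names are placeholders.
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.ximgproc.Ximgproc;

public class RollingGuidanceExample {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat image = Imgcodecs.imread("input.png");  // hypothetical file name
        Mat filtered = new Mat();
        // d = -1 lets the kernel size be derived from sigmaSpace; 4 iterations of joint filtering.
        Ximgproc.rollingGuidanceFilter(image, filtered, -1, 25.0, 3.0, 4);
        Imgcodecs.imwrite("rolling_guidance_out.png", filtered);
    }
}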
createFastBilateralSolverFilter
public static FastBilateralSolverFilter createFastBilateralSolverFilter(Mat guide, double sigma_spatial, double sigma_luma, double sigma_chroma, double lambda, int num_iter, double max_tol)
Factory method, create instance of FastBilateralSolverFilter and execute the initialization routines.- Parameters:
guide
- image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.sigma_spatial
- parameter, that is similar to spatial space sigma (bandwidth) in bilateralFilter.sigma_luma
- parameter, that is similar to luma space sigma (bandwidth) in bilateralFilter.sigma_chroma
- parameter, that is similar to chroma space sigma (bandwidth) in bilateralFilter.lambda
- smoothness strength parameter for solver.num_iter
- number of iterations used for solver, 25 is usually enough.max_tol
- convergence tolerance used for solver. For more details about the Fast Bilateral Solver parameters, see the original paper CITE: BarronPoole2016.- Returns:
- automatically generated
-
createFastBilateralSolverFilter
public static FastBilateralSolverFilter createFastBilateralSolverFilter(Mat guide, double sigma_spatial, double sigma_luma, double sigma_chroma, double lambda, int num_iter)
Factory method, create instance of FastBilateralSolverFilter and execute the initialization routines.- Parameters:
guide
- image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.sigma_spatial
- parameter, that is similar to spatial space sigma (bandwidth) in bilateralFilter.sigma_luma
- parameter, that is similar to luma space sigma (bandwidth) in bilateralFilter.sigma_chroma
- parameter, that is similar to chroma space sigma (bandwidth) in bilateralFilter.lambda
- smoothness strength parameter for solver.num_iter
- number of iterations used for solver, 25 is usually enough. For more details about the Fast Bilateral Solver parameters, see the original paper CITE: BarronPoole2016.- Returns:
- automatically generated
-
createFastBilateralSolverFilter
public static FastBilateralSolverFilter createFastBilateralSolverFilter(Mat guide, double sigma_spatial, double sigma_luma, double sigma_chroma, double lambda)
Factory method, create instance of FastBilateralSolverFilter and execute the initialization routines.- Parameters:
guide
- image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.sigma_spatial
- parameter, that is similar to spatial space sigma (bandwidth) in bilateralFilter.sigma_luma
- parameter, that is similar to luma space sigma (bandwidth) in bilateralFilter.sigma_chroma
- parameter, that is similar to chroma space sigma (bandwidth) in bilateralFilter.lambda
- smoothness strength parameter for solver. For more details about the Fast Bilateral Solver parameters, see the original paper CITE: BarronPoole2016.- Returns:
- automatically generated
-
createFastBilateralSolverFilter
public static FastBilateralSolverFilter createFastBilateralSolverFilter(Mat guide, double sigma_spatial, double sigma_luma, double sigma_chroma)
Factory method, create instance of FastBilateralSolverFilter and execute the initialization routines.- Parameters:
guide
- image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.sigma_spatial
- parameter, that is similar to spatial space sigma (bandwidth) in bilateralFilter.sigma_luma
- parameter, that is similar to luma space sigma (bandwidth) in bilateralFilter.sigma_chroma
- parameter, that is similar to chroma space sigma (bandwidth) in bilateralFilter. For more details about the Fast Bilateral Solver parameters, see the original paper CITE: BarronPoole2016.- Returns:
- automatically generated
-
fastBilateralSolverFilter
public static void fastBilateralSolverFilter(Mat guide, Mat src, Mat confidence, Mat dst, double sigma_spatial, double sigma_luma, double sigma_chroma, double lambda, int num_iter, double max_tol)
Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide then use FastBilateralSolverFilter interface to avoid extra computations.- Parameters:
guide
- image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.src
- source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels.confidence
- confidence image with unsigned 8-bit or floating-point 32-bit confidence and 1 channel.dst
- destination image.sigma_spatial
- parameter, that is similar to spatial space sigma (bandwidth) in bilateralFilter.sigma_luma
- parameter, that is similar to luma space sigma (bandwidth) in bilateralFilter.sigma_chroma
- parameter, that is similar to chroma space sigma (bandwidth) in bilateralFilter.lambda
- smoothness strength parameter for solver.num_iter
- number of iterations used for solver, 25 is usually enough.max_tol
- convergence tolerance used for solver. For more details about the Fast Bilateral Solver parameters, see the original paper CITE: BarronPoole2016. Note: Confidence images with CV_8U depth are expected to be in [0, 255] and CV_32F in [0, 1] range.
-
fastBilateralSolverFilter
public static void fastBilateralSolverFilter(Mat guide, Mat src, Mat confidence, Mat dst, double sigma_spatial, double sigma_luma, double sigma_chroma, double lambda, int num_iter)
Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide then use FastBilateralSolverFilter interface to avoid extra computations.- Parameters:
guide
- image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.src
- source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels.confidence
- confidence image with unsigned 8-bit or floating-point 32-bit confidence and 1 channel.dst
- destination image.sigma_spatial
- parameter, that is similar to spatial space sigma (bandwidth) in bilateralFilter.sigma_luma
- parameter, that is similar to luma space sigma (bandwidth) in bilateralFilter.sigma_chroma
- parameter, that is similar to chroma space sigma (bandwidth) in bilateralFilter.lambda
- smoothness strength parameter for solver.num_iter
- number of iterations used for solver, 25 is usually enough. For more details about the Fast Bilateral Solver parameters, see the original paper CITE: BarronPoole2016. Note: Confidence images with CV_8U depth are expected to be in [0, 255] and CV_32F in [0, 1] range.
-
fastBilateralSolverFilter
public static void fastBilateralSolverFilter(Mat guide, Mat src, Mat confidence, Mat dst, double sigma_spatial, double sigma_luma, double sigma_chroma, double lambda)
Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide then use FastBilateralSolverFilter interface to avoid extra computations.- Parameters:
guide
- image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.src
- source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels.confidence
- confidence image with unsigned 8-bit or floating-point 32-bit confidence and 1 channel.dst
- destination image.sigma_spatial
- parameter, that is similar to spatial space sigma (bandwidth) in bilateralFilter.sigma_luma
- parameter, that is similar to luma space sigma (bandwidth) in bilateralFilter.sigma_chroma
- parameter, that is similar to chroma space sigma (bandwidth) in bilateralFilter.lambda
- smoothness strength parameter for solver. For more details about the Fast Bilateral Solver parameters, see the original paper CITE: BarronPoole2016. Note: Confidence images with CV_8U depth are expected to be in [0, 255] and CV_32F in [0, 1] range.
-
fastBilateralSolverFilter
public static void fastBilateralSolverFilter(Mat guide, Mat src, Mat confidence, Mat dst, double sigma_spatial, double sigma_luma, double sigma_chroma)
Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide then use FastBilateralSolverFilter interface to avoid extra computations.- Parameters:
guide
- image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.src
- source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels.confidence
- confidence image with unsigned 8-bit or floating-point 32-bit confidence and 1 channel.dst
- destination image.sigma_spatial
- parameter, that is similar to spatial space sigma (bandwidth) in bilateralFilter.sigma_luma
- parameter, that is similar to luma space sigma (bandwidth) in bilateralFilter.sigma_chroma
- parameter, that is similar to chroma space sigma (bandwidth) in bilateralFilter. For more details about the Fast Bilateral Solver parameters, see the original paper CITE: BarronPoole2016. Note: Confidence images with CV_8U depth are expected to be in [0, 255] and CV_32F in [0, 1] range.
-
fastBilateralSolverFilter
public static void fastBilateralSolverFilter(Mat guide, Mat src, Mat confidence, Mat dst, double sigma_spatial, double sigma_luma)
Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide then use FastBilateralSolverFilter interface to avoid extra computations.- Parameters:
guide
- image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.src
- source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels.confidence
- confidence image with unsigned 8-bit or floating-point 32-bit confidence and 1 channel.dst
- destination image.sigma_spatial
- parameter, that is similar to spatial space sigma (bandwidth) in bilateralFilter.sigma_luma
- parameter, that is similar to luma space sigma (bandwidth) in bilateralFilter. For more details about the Fast Bilateral Solver parameters, see the original paper CITE: BarronPoole2016. Note: Confidence images with CV_8U depth are expected to be in [0, 255] and CV_32F in [0, 1] range.
-
fastBilateralSolverFilter
public static void fastBilateralSolverFilter(Mat guide, Mat src, Mat confidence, Mat dst, double sigma_spatial)
Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide then use FastBilateralSolverFilter interface to avoid extra computations.- Parameters:
guide
- image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.src
- source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels.confidence
- confidence image with unsigned 8-bit or floating-point 32-bit confidence and 1 channel.dst
- destination image.sigma_spatial
- parameter, that is similar to spatial space sigma (bandwidth) in bilateralFilter. For more details about the Fast Bilateral Solver parameters, see the original paper CITE: BarronPoole2016. Note: Confidence images with CV_8U depth are expected to be in [0, 255] and CV_32F in [0, 1] range.
-
fastBilateralSolverFilter
public static void fastBilateralSolverFilter(Mat guide, Mat src, Mat confidence, Mat dst)
Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide then use FastBilateralSolverFilter interface to avoid extra computations.- Parameters:
guide
- image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.src
- source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels.confidence
- confidence image with unsigned 8-bit or floating-point 32-bit confidence and 1 channel.dst
- destination image. For more details about the Fast Bilateral Solver parameters, see the original paper CITE: BarronPoole2016. Note: Confidence images with CV_8U depth are expected to be in [0, 255] and CV_32F in [0, 1] range.
-
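A sketch of the one-line solver call with a uniform confidence map; the file names and sigma values are illustrative, and the CV_8U confidence is filled with 255 as the note above suggests.
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.ximgproc.Ximgproc;

public class FastBilateralSolverExample {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat guide = Imgcodecs.imread("guide.png");  // hypothetical 8-bit guide image
        Mat target = Imgcodecs.imread("target.png", Imgcodecs.IMREAD_GRAYSCALE);  // signal to be smoothed
        // Uniform confidence: CV_8U confidences are expected in [0, 255].
        Mat confidence = new Mat(target.size(), CvType.CV_8UC1, new Scalar(255));
        Mat result = new Mat();
        Ximgproc.fastBilateralSolverFilter(guide, target, confidence, result, 8.0, 8.0, 8.0);
        Imgcodecs.imwrite("fbs_out.png", result);
    }
}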
createFastGlobalSmootherFilter
public static FastGlobalSmootherFilter createFastGlobalSmootherFilter(Mat guide, double lambda, double sigma_color, double lambda_attenuation, int num_iter)
Factory method, create instance of FastGlobalSmootherFilter and execute the initialization routines.- Parameters:
guide
- image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.lambda
- parameter defining the amount of regularizationsigma_color
- parameter, that is similar to color space sigma in bilateralFilter.lambda_attenuation
- internal parameter, defining how much lambda decreases after each iteration. Normally, it should be 0.25. Setting it to 1.0 may lead to streaking artifacts.num_iter
- number of iterations used for filtering, 3 is usually enough. For more details about Fast Global Smoother parameters, see the original paper CITE: Min2014. However, please note that there are several differences. Lambda attenuation described in the paper is implemented a bit differently so do not expect the results to be identical to those from the paper; sigma_color values from the paper should be multiplied by 255.0 to achieve the same effect. Also, in case of image filtering where source and guide image are the same, authors propose to dynamically update the guide image after each iteration. To maximize the performance this feature was not implemented here.- Returns:
- automatically generated
-
createFastGlobalSmootherFilter
public static FastGlobalSmootherFilter createFastGlobalSmootherFilter(Mat guide, double lambda, double sigma_color, double lambda_attenuation)
Factory method, create instance of FastGlobalSmootherFilter and execute the initialization routines.- Parameters:
guide
- image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.lambda
- parameter defining the amount of regularizationsigma_color
- parameter, that is similar to color space sigma in bilateralFilter.lambda_attenuation
- internal parameter, defining how much lambda decreases after each iteration. Normally, it should be 0.25. Setting it to 1.0 may lead to streaking artifacts. For more details about Fast Global Smoother parameters, see the original paper CITE: Min2014. However, please note that there are several differences. Lambda attenuation described in the paper is implemented a bit differently so do not expect the results to be identical to those from the paper; sigma_color values from the paper should be multiplied by 255.0 to achieve the same effect. Also, in case of image filtering where source and guide image are the same, authors propose to dynamically update the guide image after each iteration. To maximize the performance this feature was not implemented here.- Returns:
- automatically generated
-
createFastGlobalSmootherFilter
public static FastGlobalSmootherFilter createFastGlobalSmootherFilter(Mat guide, double lambda, double sigma_color)
Factory method, create instance of FastGlobalSmootherFilter and execute the initialization routines.- Parameters:
guide
- image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.lambda
- parameter defining the amount of regularizationsigma_color
- parameter, that is similar to color space sigma in bilateralFilter. For more details about Fast Global Smoother parameters, see the original paper CITE: Min2014. However, please note that there are several differences. Lambda attenuation described in the paper is implemented a bit differently so do not expect the results to be identical to those from the paper; sigma_color values from the paper should be multiplied by 255.0 to achieve the same effect. Also, in case of image filtering where source and guide image are the same, authors propose to dynamically update the guide image after each iteration. To maximize the performance this feature was not implemented here.- Returns:
- automatically generated
-
fastGlobalSmootherFilter
public static void fastGlobalSmootherFilter(Mat guide, Mat src, Mat dst, double lambda, double sigma_color, double lambda_attenuation, int num_iter)
Simple one-line Fast Global Smoother filter call. If you have multiple images to filter with the same guide then use FastGlobalSmootherFilter interface to avoid extra computations.- Parameters:
guide
- image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.src
- source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels.dst
- destination image.lambda
- parameter defining the amount of regularizationsigma_color
- parameter, that is similar to color space sigma in bilateralFilter.lambda_attenuation
- internal parameter, defining how much lambda decreases after each iteration. Normally, it should be 0.25. Setting it to 1.0 may lead to streaking artifacts.num_iter
- number of iterations used for filtering, 3 is usually enough.
-
fastGlobalSmootherFilter
public static void fastGlobalSmootherFilter(Mat guide, Mat src, Mat dst, double lambda, double sigma_color, double lambda_attenuation)
Simple one-line Fast Global Smoother filter call. If you have multiple images to filter with the same guide then use FastGlobalSmootherFilter interface to avoid extra computations.- Parameters:
guide
- image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.src
- source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels.dst
- destination image.lambda
- parameter defining the amount of regularizationsigma_color
- parameter, that is similar to color space sigma in bilateralFilter.lambda_attenuation
- internal parameter, defining how much lambda decreases after each iteration. Normally, it should be 0.25. Setting it to 1.0 may lead to streaking artifacts.
-
fastGlobalSmootherFilter
public static void fastGlobalSmootherFilter(Mat guide, Mat src, Mat dst, double lambda, double sigma_color)
Simple one-line Fast Global Smoother filter call. If you have multiple images to filter with the same guide then use FastGlobalSmootherFilter interface to avoid extra computations.- Parameters:
guide
- image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.src
- source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels.dst
- destination image.lambda
- parameter defining the amount of regularizationsigma_color
- parameter, that is similar to color space sigma in bilateralFilter.
-
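A one-call sketch of fastGlobalSmootherFilter in self-guided mode; lambda and sigma_color are illustrative values, and the file names are placeholders.
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.ximgproc.Ximgproc;

public class FastGlobalSmootherExample {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat image = Imgcodecs.imread("input.png");  // hypothetical file name
        Mat smoothed = new Mat();
        // lambda controls regularization strength; sigma_color is in 8-bit intensity units.
        Ximgproc.fastGlobalSmootherFilter(image, image, smoothed, 100.0, 5.0);
        Imgcodecs.imwrite("fgs_out.png", smoothed);
    }
}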
l0Smooth
public static void l0Smooth(Mat src, Mat dst, double lambda, double kappa)
Global image smoothing via L0 gradient minimization.- Parameters:
src
- source image for filtering with unsigned 8-bit or signed 16-bit or floating-point depth.dst
- destination image.lambda
- parameter defining the smooth term weight.kappa
- parameter defining the increasing factor of the weight of the gradient data term. For more details about L0 Smoother, see the original paper CITE: xu2011image.
-
l0Smooth
public static void l0Smooth(Mat src, Mat dst, double lambda)
Global image smoothing via L0 gradient minimization.- Parameters:
src
- source image for filtering with unsigned 8-bit or signed 16-bit or floating-point depth.dst
- destination image.lambda
- parameter defining the smooth term weight. For more details about L0 Smoother, see the original paper CITE: xu2011image.
-
l0Smooth
public static void l0Smooth(Mat src, Mat dst)
Global image smoothing via L0 gradient minimization.- Parameters:
src
- source image for filtering with unsigned 8-bit or signed 16-bit or floating-point depth.dst
- destination image. For more details about L0 Smoother, see the original paper CITE: xu2011image.
-
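A one-call sketch of l0Smooth; lambda = 0.02 and kappa = 2.0 follow the values suggested in the cited paper, and the file names are placeholders.
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.ximgproc.Ximgproc;

public class L0SmoothExample {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat image = Imgcodecs.imread("input.png");  // hypothetical file name
        Mat smoothed = new Mat();
        // lambda = 0.02 and kappa = 2.0 are commonly suggested settings for L0 gradient minimization.
        Ximgproc.l0Smooth(image, smoothed, 0.02, 2.0);
        Imgcodecs.imwrite("l0_out.png", smoothed);
    }
}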
FastHoughTransform
public static void FastHoughTransform(Mat src, Mat dst, int dstMatDepth, int angleRange, int op, int makeSkew)
Calculates 2D Fast Hough transform of an image. The function calculates the fast Hough transform for full, half or quarter range of angles.- Parameters:
src
- automatically generateddst
- automatically generateddstMatDepth
- automatically generatedangleRange
- automatically generatedop
- automatically generatedmakeSkew
- automatically generated
-
FastHoughTransform
public static void FastHoughTransform(Mat src, Mat dst, int dstMatDepth, int angleRange, int op)
Calculates 2D Fast Hough transform of an image. The function calculates the fast Hough transform for full, half or quarter range of angles.- Parameters:
src
- automatically generateddst
- automatically generateddstMatDepth
- automatically generatedangleRange
- automatically generatedop
- automatically generated
-
FastHoughTransform
public static void FastHoughTransform(Mat src, Mat dst, int dstMatDepth, int angleRange)
Calculates 2D Fast Hough transform of an image. The function calculates the fast Hough transform for full, half or quarter range of angles.- Parameters:
src
- automatically generateddst
- automatically generateddstMatDepth
- automatically generatedangleRange
- automatically generated
-
FastHoughTransform
public static void FastHoughTransform(Mat src, Mat dst, int dstMatDepth)
Calculates 2D Fast Hough transform of an image. The function calculates the fast Hough transform for full, half or quarter range of angles.- Parameters:
src
- automatically generateddst
- automatically generateddstMatDepth
- automatically generated
-
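A hedged sketch of FastHoughTransform applied to a Canny edge map; since the parameter descriptions above are only auto-generated, the constants used here (CvType.CV_32S output depth, ARO_315_135 angle range, FHT_ADD operation, HDO_DESKEW skew handling) are assumptions based on the library defaults.
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import org.opencv.ximgproc.Ximgproc;

public class FastHoughTransformExample {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat gray = Imgcodecs.imread("input.png", Imgcodecs.IMREAD_GRAYSCALE);  // hypothetical file
        Mat edges = new Mat();
        Imgproc.Canny(gray, edges, 50, 150);
        Mat hough = new Mat();
        // Assumed defaults: half-turn angle range, additive accumulation, deskewed output.
        Ximgproc.FastHoughTransform(edges, hough, CvType.CV_32S,
                Ximgproc.ARO_315_135, Ximgproc.FHT_ADD, Ximgproc.HDO_DESKEW);
        System.out.println("Hough image size: " + hough.size());
    }
}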
weightedMedianFilter
public static void weightedMedianFilter(Mat joint, Mat src, Mat dst, int r, double sigma, int weightType, Mat mask)
Applies the weighted median filter to an image. For more details about this implementation, please see CITE: zhang2014100+ If the mask value of a pixel is 0, the pixel will be ignored when maintaining the joint-histogram; this is useful for applications like optical flow occlusion handling. SEE: medianBlur, jointBilateralFilter- Parameters:
joint
- automatically generatedsrc
- automatically generateddst
- automatically generatedr
- automatically generatedsigma
- automatically generatedweightType
- automatically generatedmask
- automatically generated
-
weightedMedianFilter
public static void weightedMedianFilter(Mat joint, Mat src, Mat dst, int r, double sigma, int weightType)
Applies the weighted median filter to an image. For more details about this implementation, please see CITE: zhang2014100+ SEE: medianBlur, jointBilateralFilter- Parameters:
joint
- automatically generatedsrc
- automatically generateddst
- automatically generatedr
- automatically generatedsigma
- automatically generatedweightType
- automatically generated
-
weightedMedianFilter
public static void weightedMedianFilter(Mat joint, Mat src, Mat dst, int r, double sigma)
Applies the weighted median filter to an image. For more details about this implementation, please see CITE: zhang2014100+ SEE: medianBlur, jointBilateralFilter- Parameters:
joint
- automatically generatedsrc
- automatically generateddst
- automatically generatedr
- automatically generatedsigma
- automatically generated
-
weightedMedianFilter
public static void weightedMedianFilter(Mat joint, Mat src, Mat dst, int r)
Applies the weighted median filter to an image. For more details about this implementation, please see CITE: zhang2014100+ SEE: medianBlur, jointBilateralFilter- Parameters:
joint
- automatically generatedsrc
- automatically generateddst
- automatically generatedr
- automatically generated
-
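A one-call sketch of weightedMedianFilter in self-guided mode with radius 7; sigma and the weight type keep their defaults, and the file names are placeholders.
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.ximgproc.Ximgproc;

public class WeightedMedianExample {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat image = Imgcodecs.imread("input.png");  // hypothetical file name
        Mat filtered = new Mat();
        // Self-guided weighted median with radius 7; sigma and weight type keep their defaults.
        Ximgproc.weightedMedianFilter(image, image, filtered, 7);
        Imgcodecs.imwrite("wmf_out.png", filtered);
    }
}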
createRFFeatureGetter
public static RFFeatureGetter createRFFeatureGetter()
-
createStructuredEdgeDetection
public static StructuredEdgeDetection createStructuredEdgeDetection(java.lang.String model, RFFeatureGetter howToGetFeatures)
-
createStructuredEdgeDetection
public static StructuredEdgeDetection createStructuredEdgeDetection(java.lang.String model)
-
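A sketch of running the returned detector; detectEdges is a method of the StructuredEdgeDetection class (documented separately) and expects a floating-point RGB image in [0, 1], while the model file path and image file names are assumptions.
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import org.opencv.ximgproc.StructuredEdgeDetection;
import org.opencv.ximgproc.Ximgproc;

public class StructuredEdgesExample {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat image = Imgcodecs.imread("input.png");              // hypothetical file name
        Imgproc.cvtColor(image, image, Imgproc.COLOR_BGR2RGB);  // the pretrained model expects RGB order
        Mat imageFloat = new Mat();
        image.convertTo(imageFloat, CvType.CV_32FC3, 1.0 / 255.0);  // detectEdges expects float values in [0, 1]
        StructuredEdgeDetection sed =
                Ximgproc.createStructuredEdgeDetection("model.yml.gz");  // path to a pretrained model (assumed)
        Mat edges = new Mat();
        sed.detectEdges(imageFloat, edges);                     // CV_32F edge probability map
        edges.convertTo(edges, CvType.CV_8U, 255.0);
        Imgcodecs.imwrite("structured_edges.png", edges);
    }
}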
GradientDericheY
public static void GradientDericheY(Mat op, Mat dst, double alpha, double omega)
Applies Y Deriche filter to an image. For more details about this implementation, please see http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.476.5736&rep=rep1&type=pdf- Parameters:
op
- automatically generateddst
- automatically generatedalpha
- automatically generatedomega
- automatically generated
-
GradientDericheX
public static void GradientDericheX(Mat op, Mat dst, double alpha, double omega)
Applies X Deriche filter to an image. For more details about this implementation, please see http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.476.5736&rep=rep1&type=pdf- Parameters:
op
- automatically generateddst
- automatically generatedalpha
- automatically generatedomega
- automatically generated
-
createEdgeDrawing
public static EdgeDrawing createEdgeDrawing()
Creates a smart pointer to an EdgeDrawing object and initializes it- Returns:
- automatically generated
-
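A sketch of using the returned object; detectEdges, getEdgeImage and detectLines are methods of the EdgeDrawing class (documented separately), and the file names are placeholders.
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.ximgproc.EdgeDrawing;
import org.opencv.ximgproc.Ximgproc;

public class EdgeDrawingExample {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat gray = Imgcodecs.imread("input.png", Imgcodecs.IMREAD_GRAYSCALE);  // hypothetical file
        EdgeDrawing ed = Ximgproc.createEdgeDrawing();
        ed.detectEdges(gray);        // run the edge drawing algorithm on the grayscale image
        Mat edgeImage = new Mat();
        ed.getEdgeImage(edgeImage);  // binary edge map
        Mat lines = new Mat();
        ed.detectLines(lines);       // line segments fitted to the detected edges
        Imgcodecs.imwrite("ed_edges.png", edgeImage);
        System.out.println("lines: " + lines.rows());
    }
}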
-