Fields

- org.opencv.core.CvType.CV_USRTYPE1
  Please use CvType.CV_16F instead.
Methods

- org.opencv.aruco.Aruco.detectCharucoDiamond(Mat, List<Mat>, Mat, float, List<Mat>, Mat, Mat, Mat, Dictionary)
  Use CharucoDetector::detectDiamonds instead.
- org.opencv.aruco.Aruco.detectMarkers(Mat, Dictionary, List<Mat>, Mat, DetectorParameters, List<Mat>)
  Use class ArucoDetector::detectMarkers instead.
- org.opencv.aruco.Aruco.drawPlanarBoard(Board, Size, Mat, int, int)
  Use Board::generateImage instead.
- org.opencv.aruco.Aruco.estimatePoseBoard(List<Mat>, Mat, Board, Mat, Mat, Mat, Mat, boolean)
  Use cv::solvePnP instead.
- org.opencv.aruco.Aruco.estimatePoseSingleMarkers(List<Mat>, float, Mat, Mat, Mat, Mat, Mat, EstimateParameters)
  Use cv::solvePnP instead.
- org.opencv.aruco.Aruco.getBoardObjectAndImagePoints(Board, List<Mat>, Mat, Mat, Mat)
  Use Board::matchImagePoints instead.
- org.opencv.aruco.Aruco.interpolateCornersCharuco(List<Mat>, Mat, Mat, CharucoBoard, Mat, Mat, Mat, Mat, int)
  Use CharucoDetector::detectBoard instead.
- org.opencv.aruco.Aruco.refineDetectedMarkers(Mat, Board, List<Mat>, Mat, List<Mat>, Mat, Mat, float, float, boolean, Mat, DetectorParameters)
  Use class ArucoDetector::refineDetectedMarkers instead.
- org.opencv.aruco.Aruco.testCharucoCornersCollinear(CharucoBoard, Mat)
  Use CharucoBoard::checkCharucoCornersCollinear instead.
- org.opencv.core.Core.getThreadNum()
  The current implementation does not correspond to this documentation. The exact meaning of the return value depends on the threading framework used by the OpenCV library:
  - TBB - Unsupported with the current 4.1 TBB release; may be supported in the future.
  - OpenMP - The thread number, within the current team, of the calling thread.
  - Concurrency - An ID for the virtual processor that the current context is executing on (0 for the master thread and a unique number for the others, but not necessarily 1, 2, 3, ...).
  - GCD - The system calling thread's ID. Never returns 0 inside a parallel region.
  - C= - The index of the current parallel task.

  See also: setNumThreads, getNumThreads.
- org.opencv.dnn.Dnn.getInferenceEngineBackendType()
- org.opencv.dnn.Dnn.setInferenceEngineBackendType(String)
- org.opencv.dnn.Layer.run(List<Mat>, List<Mat>, List<Mat>)
  This method will be removed in a future release.
- org.opencv.dnn.Net.getLayer(String)
  Use int getLayerId(const String &layer) instead.
- org.opencv.imgproc.Imgproc.linearPolar(Mat, Mat, Point, double, int)
  This function produces the same result as cv::warpPolar(src, dst, src.size(), center, maxRadius, flags). It transforms the source image using the following transformation (see REF: polar_remaps_reference_image "Polar remaps reference image c)"):
  \(\begin{array}{l} dst( \rho , \phi ) = src(x,y) \\ dst.size() \leftarrow src.size() \end{array}\)
  where
  \(\begin{array}{l} I = (dx,dy) = (x - center.x, y - center.y) \\ \rho = Kmag \cdot \texttt{magnitude}(I) \\ \phi = Kangle \cdot \texttt{angle}(I) \end{array}\)
  and
  \(\begin{array}{l} Kmag = src.cols / maxRadius \\ Kangle = src.rows / 2\pi \end{array}\)
- org.opencv.imgproc.Imgproc.logPolar(Mat, Mat, Point, double, int)
  This function produces the same result as cv::warpPolar(src, dst, src.size(), center, maxRadius, flags + WARP_POLAR_LOG). It transforms the source image using the following transformation (see REF: polar_remaps_reference_image "Polar remaps reference image d)"):
  \(\begin{array}{l} dst( \rho , \phi ) = src(x,y) \\ dst.size() \leftarrow src.size() \end{array}\)
  where
  \(\begin{array}{l} I = (dx,dy) = (x - center.x, y - center.y) \\ \rho = M \cdot log_e(\texttt{magnitude}(I)) \\ \phi = Kangle \cdot \texttt{angle}(I) \end{array}\)
  and
  \(\begin{array}{l} M = src.cols / log_e(maxRadius) \\ Kangle = src.rows / 2\pi \end{array}\)
  The function emulates human "foveal" vision and can be used for fast scale- and rotation-invariant template matching, object tracking, and so forth.
- org.opencv.text.Text.loadOCRHMMClassifierCNN(String)
  Use loadOCRHMMClassifier instead.
- org.opencv.text.Text.loadOCRHMMClassifierNM(String)
  Use loadOCRHMMClassifier instead.
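As a worked illustration of the polar-remap formulas above, the sketch below computes, in plain Java with no OpenCV dependency, the destination coordinates (rho, phi) that the deprecated linearPolar/logPolar mappings (and their warpPolar replacement) assign to a source pixel. The class and method names (PolarMap, toLinearPolar, toLogPolar) are illustrative, not OpenCV API; this is a minimal sketch of the coordinate math only, not of the image interpolation the real functions perform.

```java
/**
 * Sketch of the coordinate mapping behind linearPolar/logPolar (and
 * warpPolar). Illustrative only; names are not part of OpenCV.
 */
public class PolarMap {

    /** Linear variant: rho = Kmag * magnitude(I), phi = Kangle * angle(I). */
    static double[] toLinearPolar(double x, double y, double cx, double cy,
                                  int cols, int rows, double maxRadius) {
        double dx = x - cx, dy = y - cy;
        double kMag = cols / maxRadius;          // Kmag = src.cols / maxRadius
        double kAngle = rows / (2 * Math.PI);    // Kangle = src.rows / 2*pi
        double rho = kMag * Math.hypot(dx, dy);
        // Normalize the angle to [0, 2*pi) before scaling to a row index.
        double phi = kAngle * ((Math.atan2(dy, dx) + 2 * Math.PI) % (2 * Math.PI));
        return new double[]{rho, phi};
    }

    /** Semilog variant: rho = M * ln(magnitude(I)), same phi as above. */
    static double[] toLogPolar(double x, double y, double cx, double cy,
                               int cols, int rows, double maxRadius) {
        double dx = x - cx, dy = y - cy;
        double m = cols / Math.log(maxRadius);   // M = src.cols / ln(maxRadius)
        double kAngle = rows / (2 * Math.PI);
        double rho = m * Math.log(Math.hypot(dx, dy));
        double phi = kAngle * ((Math.atan2(dy, dx) + 2 * Math.PI) % (2 * Math.PI));
        return new double[]{rho, phi};
    }

    public static void main(String[] args) {
        // A point exactly maxRadius to the right of the center lands on the
        // last column (rho = cols) at angle 0 (phi = 0) in both variants.
        double[] lin = toLinearPolar(164.0, 100.0, 100.0, 100.0, 256, 256, 64.0);
        double[] lg  = toLogPolar(164.0, 100.0, 100.0, 100.0, 256, 256, 64.0);
        System.out.printf("linear: rho=%.2f phi=%.2f%n", lin[0], lin[1]);
        System.out.printf("log:    rho=%.2f phi=%.2f%n", lg[0], lg[1]);
    }
}
```

The same normalization of atan2's (-pi, pi] range into [0, 2*pi) is what makes phi a usable row index: rows 0 through rows-1 sweep one full turn around the center.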