Posts

Use OpenCV to extract the main lines of the palm! How to do scientific fortune telling?

In this article we will use Python and the OpenCV library to find the main lines in our palms. First, let's read the original image:

    import cv2

    image = cv2.imread(r'G:\PARAS\palm.jpeg')
    cv2.imshow("palm", image)   # view the palm image
    cv2.waitKey(0)

Next we use the Canny edge detector to find the palm lines; for different images, the parameters need to be adjusted accordingly. Convert the image to grayscale first, then detect the edges:

    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 60, 65, apertureSize=3)
    cv2.imshow("edges", edges)
    cv2.waitKey(0)

Now we invert the colours so that the detected lines are black:

    edges = cv2.bitwise_not(edges)
    cv2.imshow("change black and white", edges)
    cv2.waitKey(0)

Now we blend the image above with the original image:

    cv2.imwrite("palmlines.jpg"…
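The excerpt cuts off at the blending step. As a minimal sketch of one way to finish it, the saved edge image can be blended back onto the original palm photo with cv2.addWeighted; the 0.7/0.3 weights are illustrative assumptions, not the post's exact values:

    import cv2

    image = cv2.imread(r'G:\PARAS\palm.jpeg')        # original palm image from the excerpt
    lines = cv2.imread("palmlines.jpg")              # inverted edge image saved in the last step
    blended = cv2.addWeighted(image, 0.7, lines, 0.3, 0)  # weights are illustrative
    cv2.imshow("palm with main lines", blended)
    cv2.waitKey(0)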

Python – Face detection and sending notification

A simple Python project that detects a human face and, after detection, sends a notification to the user. If no face is recognised, no notification is sent to the owner.

WE WILL USE:

OpenCV: OpenCV is a huge open-source library for computer vision, machine learning, and image processing. OpenCV supports a wide variety of programming languages like Python, C++, Java, etc. It can process images and videos to identify objects, faces, or even the handwriting of a human. When it is integrated with other libraries, such as NumPy (a highly optimized library for numerical operations), the number of weapons in your arsenal increases, i.e. whatever operations one can do in NumPy can be combined with OpenCV. This OpenCV tutorial will help you learn image processing from basics to advanced, covering operations on images and videos through a large set of OpenCV programs and projects.

Sinch: Sinch is used to send messages…
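As a rough sketch of the flow described above (not the post's exact code): detect a face with OpenCV's bundled Haar cascade and hand off to a notification helper. The send_notification function below is a hypothetical stand-in for the Sinch messaging call, which the excerpt is cut off before.

    import cv2

    def send_notification(message):
        # hypothetical stand-in for the Sinch call that would message the owner
        print("NOTIFY:", message)

    # Haar cascade bundled with the opencv-python package
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

    cap = cv2.VideoCapture(0)            # default webcam
    ret, frame = cap.read()              # grab a single frame
    cap.release()

    if ret:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
        if len(faces) > 0:
            send_notification("Face detected!")   # notify only when a face is found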

Face Detection using Python and OpenCV with webcam

How to use:

1. Create a directory on your PC and name it (say, project).
2. Create two Python files named create_data.py and face_recognize.py, and copy the first and second source code into them respectively.
3. Copy haarcascade_frontalface_default.xml to the project directory; you can get it from OpenCV or from here.
4. You are now ready to run the following code.

create_data.py

    # Creating the database:
    # it captures images and stores them in the datasets
    # folder, under the sub-folder named by sub_data
    import cv2, sys, numpy, os

    haar_file = 'haarcascade_frontalface_default.xml'

    # All the face data will be stored in this folder;
    # create the datasets folder at this path as I did, then the sub-folder
    datasets = r'G:\PARAS\datasets'

    # These are sub-datasets of the folder; for my faces I've used my name,
    # but you can change the label here. 'akhi' is a sub-folder inside
    # datasets where the webcam images will be stored.
    sub_data = 'akhi'

    path = os.path…
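The create_data.py listing is cut off above. A minimal sketch of how the rest of the capture loop could look, assuming the usual Haar-cascade approach; the 30-sample count and the 130x100 crop size are illustrative choices, not necessarily the post's:

    import cv2, os

    haar_file = 'haarcascade_frontalface_default.xml'
    datasets = 'datasets'                 # illustrative; the post uses G:\PARAS\datasets
    sub_data = 'akhi'                     # sub-folder that will hold the webcam images

    path = os.path.join(datasets, sub_data)
    if not os.path.isdir(path):
        os.makedirs(path)

    (width, height) = (130, 100)          # illustrative crop size
    face_cascade = cv2.CascadeClassifier(haar_file)
    webcam = cv2.VideoCapture(0)

    count = 1
    while count <= 30:                    # capture 30 face samples (illustrative)
        ret, frame = webcam.read()
        if not ret:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 4):
            cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
            face_resize = cv2.resize(gray[y:y + h, x:x + w], (width, height))
            cv2.imwrite('%s/%s.png' % (path, count), face_resize)
            count += 1
        cv2.imshow('OpenCV', frame)
        if cv2.waitKey(10) == 27:         # Esc stops the capture early
            break

    webcam.release()
    cv2.destroyAllWindows()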

How to Detect Shapes in Images in Python using OpenCV

    import numpy as np
    import matplotlib.pyplot as plt
    import cv2
    import sys

    # read the image
    image = cv2.imread(r'G:\PARAS\anuradha.png')

    # convert to grayscale
    grayscale = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # perform edge detection
    edges = cv2.Canny(grayscale, 30, 100)

    # detect lines in the image using the Hough lines technique
    lines = cv2.HoughLinesP(edges, 1, np.pi/180, 60, np.array([]), 50, 5)

    # iterate over the output lines and draw them
    for line in lines:
        for x1, y1, x2, y2 in line:
            cv2.line(image, (x1, y1), (x2, y2), color=(20, 220, 20), thickness=3)

    # show the image
    plt.imshow(image)
    plt.show()
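Two side notes on the snippet above, sketched rather than taken from the post: matplotlib expects RGB while OpenCV loads images in BGR, so the colours look swapped unless the image is converted before plt.imshow; and the same Hough-transform idea extends to circles via cv2.HoughCircles (all parameters below are illustrative and image-dependent).

    import numpy as np
    import matplotlib.pyplot as plt
    import cv2

    image = cv2.imread(r'G:\PARAS\anuradha.png')
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # detect circles with the Hough transform (parameters are illustrative)
    blurred = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                               param1=50, param2=30, minRadius=0, maxRadius=0)
    if circles is not None:
        for x, y, r in np.uint16(np.around(circles))[0, :]:
            cv2.circle(image, (int(x), int(y)), int(r), (20, 220, 20), 3)

    # convert BGR -> RGB so matplotlib shows the true colours
    plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    plt.show()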

Text Detection and Extraction using OpenCV and OCR

Applying OCR: Loop through each contour and take the x and y coordinates and the width and height using the function cv2.boundingRect(). Then draw a rectangle on the image using the function cv2.rectangle() with the obtained x and y coordinates and the width and height. cv2.rectangle() takes five parameters: the input image, the x and y coordinates (the starting corner of the rectangle), the ending corner of the rectangle, which is (x+w, y+h), the boundary colour of the rectangle as a BGR value, and the thickness of the boundary. Now crop the rectangular region and pass it to Tesseract to extract the text from the image. Then we open the created text file in append mode to append the obtained text, and close the file.

Finding Contours: cv2.findContours() is used to find contours in the dilated image. There are three arguments in cv2.findContours(): the source image, the…
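A minimal sketch of the loop described above, assuming OpenCV 4.x (where findContours returns two values); the file names, threshold flags, and 18x18 kernel are illustrative choices rather than the post's exact ones:

    import cv2
    import pytesseract

    image = cv2.imread('sample.png')                       # illustrative input path
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # threshold and dilate so nearby characters merge into word/line blobs
    _, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_OTSU | cv2.THRESH_BINARY_INV)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (18, 18))
    dilated = cv2.dilate(thresh, kernel, iterations=1)

    # OpenCV 4.x: findContours returns (contours, hierarchy)
    contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

    with open('recognized.txt', 'a') as f:                 # append mode, as in the post
        for cnt in contours:
            x, y, w, h = cv2.boundingRect(cnt)
            cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)  # BGR colour
            cropped = image[y:y + h, x:x + w]
            f.write(pytesseract.image_to_string(cropped))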

Extracting text from images with Tesseract OCR, OpenCV, and Python

In the end, it can be concluded that Tesseract is perfect for scanning clean documents, and you can easily convert the text recognised by OCR to Word, PDF, or any other required format. It has pretty high accuracy and handles a good deal of font variability. This is very useful for institutions where a lot of documentation is involved, such as government offices, hospitals, educational institutes, etc. In the current release, 4.0, Tesseract supports deep-learning-based OCR that is significantly more accurate. You can access the code file and input image here to create your own OCR task. Try replicating this task and achieve the desired results. Happy coding!

Coding

Here, I will use the following sample receipt image. The first part is image thresholding. Following is the code that you can use for thresholding:

    pytesseract.pytesseract.tesseract_cmd = 'C:/Program Files/Tesseract-OCR/tesseract.exe'  # your path may be different

For Windows only:

1 - You need to have Tesseract…
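The excerpt stops right after setting the Tesseract path. A minimal sketch of the thresholding step followed by OCR, assuming a receipt image named receipt.jpg and Otsu thresholding as one reasonable choice:

    import cv2
    import pytesseract

    # Windows only: point pytesseract at the Tesseract binary; your path may differ
    pytesseract.pytesseract.tesseract_cmd = 'C:/Program Files/Tesseract-OCR/tesseract.exe'

    image = cv2.imread('receipt.jpg')                      # illustrative file name
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # binarise the receipt; Otsu picks the threshold value automatically
    _, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    text = pytesseract.image_to_string(thresh)
    print(text)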