ATTENDANCE DETECTION BY USING FACE RECOGNITION
ABSTRACT:-
The human face is a critical biometric object in the image and video databases of surveillance systems. Recording the attendance of students in a large classroom is difficult to manage with a conventional system, since it is time-consuming and carries a high likelihood of errors while entering data into the computer. This paper presents an automated attendance-monitoring system based on face recognition: a student's attendance is updated in an Excel sheet once his or her face has been recognized. A GUI-based face detection and recognition system was developed for this project. It can also serve as an access-control point by enrolling the staff or students of an organization with their faces and later identifying them by capturing their faces as they enter or leave the classroom. The system is implemented on a desktop with a graphical user interface; it first detects the faces within the images obtained from a web camera. This real-time GUI-based face detection and recognition system was developed using open-source tools, namely OpenCV with Python.
PROPOSED SYSTEM:-
Many factors related to image quality affect the accuracy of such a system, so it is extremely important to apply image pre-processing techniques that standardize the images fed to the facial recognition system. Most face recognition algorithms are extremely sensitive to lighting conditions: a system trained to recognize a person in a dark room will probably fail to recognize them in a bright room. This problem is referred to as illumination dependence, and there are related problems caused by variations in pose and viewing angle. For simplicity, this paper presents a face recognition system that operates on grayscale images.

Face classification assumes a fixed face scale, say 50x50 pixels. Since faces in an image can be smaller or larger, the classifier scans the image multiple times to find faces at different scales. Color images are first converted to grayscale, and histogram equalization is then applied as a very simple method of automatically standardizing the brightness and contrast of the face images. For better results, one could use color face recognition, ideally with a color histogram computed in HSV or another color space instead of RGB, or add further processing stages such as edge enhancement, contour detection, or motion detection. The images are resized to a standard size, but naive resizing can change the aspect ratio of the face, so we describe a method to resize an image while maintaining its aspect ratio.

OpenCV uses a type of face detector called a Haar cascade classifier. Given an image, which can come from a file or live video, the face detector examines each image location and classifies it as face or non-face. After face detection, the user uploads the required documents.
Face Detection:
Facial analysis is the most modern, state-of-the-art authentication and recognition technology. The proposed system uses FaceNet, which directly learns a mapping from face images into a compact Euclidean space in which distances directly correspond to a measure of face similarity. Once this space is created, tasks such as face detection, verification, and clustering can be carried out easily by using standard FaceNet embeddings as feature vectors. The system uses a deep convolutional network trained to optimize the embedding directly, rather than an intermediate bottleneck layer as in previous deep learning approaches. For training, it uses roughly aligned matched/mismatched face-patch triplets generated by a new online triplet-mining method. The advantage of this method is much greater representational efficiency. On Labeled Faces in the Wild (LFW), the system achieves a record accuracy of 99.63%, and it scores 95.12% on the YouTube Faces DB. This approach is based on learning a Euclidean embedding per image using a deep convolutional network. Limitation: the system requires a large model and a high-end CPU, and it also has a long training time.
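The triplet-based training objective mentioned above can be illustrated with a small NumPy sketch. This is not FaceNet's implementation, only the loss it optimizes: for an anchor embedding, the distance to a positive (same identity) should be smaller than the distance to a negative (different identity) by at least a margin; the margin value 0.2 is an illustrative assumption.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, alpha=0.2):
    """Triplet loss on face embeddings: pull the anchor toward the
    positive (same person) and push it away from the negative
    (different person) by at least a margin alpha, measured in
    squared Euclidean distance. Returns 0 when the constraint holds."""
    pos_dist = np.sum((anchor - positive) ** 2)
    neg_dist = np.sum((anchor - negative) ** 2)
    return max(pos_dist - neg_dist + alpha, 0.0)
```

During training, the network's weights are adjusted so that this loss, summed over mined triplets, decreases; well-separated triplets contribute zero loss, which is why online mining of hard triplets matters.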
Face recognition:
Face recognition is the most popular biometric solution for online authentication systems. OpenCV is a well-known computer vision library started by Intel in 1999. OpenCV implements three face recognition algorithms: Eigenfaces, Fisherfaces, and LBPH (Local Binary Pattern Histograms). To detect faces, these algorithms use the Haar cascade classification technique introduced by Paul Viola and Michael Jones. In our proposed methodology, we take a photo of the students as input and use HOG techniques to detect the faces in the image. We then compute the 68 reference landmarks for each detected face. Faces that are oriented or appear different to a computer can still belong to the same person, and these landmarks make them easy to identify. Finally, the detected faces are compared directly to previously learned faces stored in our database using a deep neural network. We train a classifier to determine which known student is most similar, based on measurements of a new test image; the output of the classifier is the name of a student. The number of faces in the photo is also counted. The system extracts the selected facial features from the image and compares them to already known faces. The OpenFace package uses a deep neural network to represent (or embed) a face on a 128-dimensional unit hypersphere. In contrast to other face representations, this embedding has the nice property that a greater distance between two face embeddings means the faces are unlikely to be from the same person. This property makes clustering, similarity detection, and classification tasks easier than with other face recognition techniques, where the Euclidean distance between features is not meaningful. During the training part of the OpenFace pipeline, 500,000 images are passed through the neural network. OpenFace trains on these images to generate 128-dimensional face embeddings that represent a generic face.
OpenFace uses Google's FaceNet architecture for feature extraction and a triplet loss function to measure how accurately the neural network classifies a face.
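The matching step described above, comparing a new face embedding to the stored student embeddings, can be sketched as a nearest-neighbour search over Euclidean distances. The function name `identify` and the distance threshold 0.6 are illustrative assumptions; the embeddings themselves would come from a network such as OpenFace.

```python
import numpy as np

def identify(test_embedding, known_embeddings, names, threshold=0.6):
    """Match a 128-d face embedding against stored student embeddings.
    Returns the name of the closest student, or None when no stored
    embedding is within the distance threshold (an unknown face)."""
    dists = [np.linalg.norm(test_embedding - e) for e in known_embeddings]
    best = int(np.argmin(dists))
    return names[best] if dists[best] < threshold else None
```

Because the embeddings live on a unit hypersphere where distance tracks identity, this simple thresholded nearest-neighbour rule stands in for the trained classifier mentioned in the text.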
MODULES:-
We use the following parameters of OpenCV's detectMultiScale function:
- detectMultiScale: OpenCV function that returns a rectangle with coordinates (x, y, w, h) around each face detected in the image.
- scaleFactor: Specifies how much the image size is reduced at each image scale. A value close to 1 uses smaller downscaling steps, which gives the algorithm a better chance of finding faces, at the cost of speed.
- minNeighbors: Specifies how many neighboring candidate rectangles each detection should have. A larger value results in fewer detections, but of higher quality.
- minSize: The minimum object size; by default it is (30, 30). To find smaller faces in the image, lower the minSize value.
APPLICATION:-
The purpose of these programs is to help users perform these exercises independently. Feedback can be improved by designing targeted actions and specific suggestions regarding the body part used and the weight of the equipment. Exercises performed with proper form can be visualized using a graphical simulation to highlight the user's errors and how to correct them.
The system is a web application that runs on a laptop. The user needs enough space to perform the exercise and must position the camera so that the user's entire body fits within the camera's field of view.
There are hundreds of short exercise videos on the internet covering different exercises.
The purpose of these programs is to help users perform these exercises independently. We present the quantitative and qualitative results of the pose trainer on four different free-weight dumbbell exercises: biceps curl, front raise, shoulder shrug, and standing shoulder press. For each exercise we use both a geometric heuristic approach and a machine learning approach based on dynamic time warping.
Such a system can replace personal trainers for correcting body posture, removing the need for their constant attention while exercises or exercise positions are performed.
The constant attention that personal trainers must give in gyms and exercise classes motivates the AI fitness tracker. Continuous monitoring of the user's body and joint movement helps users maintain proper posture, which is one of the most fundamental parts of exercise. Current methodologies primarily focus on how long the user exercises rather than on whether the user's posture is correct.
HARDWARE AND SOFTWARE REQUIREMENTS:-
HARDWARE:-
- Linux: GNOME or KDE desktop, GNU C Library (glibc) 2.15 or later, 2 GB RAM minimum (4 GB recommended), 1280 x 800 minimum screen resolution.
- Windows: Microsoft Windows 8/7/Vista (32- or 64-bit), 2 GB RAM minimum (4 GB recommended), 1280 x 800 minimum screen resolution, Intel processor with support for Intel VT-x, Intel EM64T (Intel 64), and Execute Disable (XD) Bit functionality.
SOFTWARE:-
- Windows Operating System.
- MySQL
- Python
- Anaconda
- Spyder, Jupyter notebook, Flask