Advances in Science and Technology Vol. 124
Title:
Proceedings: IoT, Cloud and Data Science
Subtitle:
Selected peer-reviewed full text papers from the International Research Conference on IoT, Cloud and Data Science (IRCICD'22)
Edited by:
Dr. S. Prasanna Devi, Dr. G. Paavai Anand, Dr. M. Durgadevi, Dr. Golda Dilip and Dr. S. Kannadhasan
ToC:
Paper Title | Page
Abstract: Recently, much attention has been paid to the design of intelligent and natural user-computer interfaces. Hand gesture recognition systems have been developed continuously because of their ability to support interaction with machines, and the rise of the metaverse ecosystem has increased the number of systems using gesture recognition. Gestures are used to communicate with PCs in a virtual environment. In this project, hand gestures are used to communicate information non-verbally, without spoken expression, to perform a particular task. The hand gestures are recognized through hand-skeleton recognition using the MediaPipe library in Python. The PC camera records live video, the system recognizes the hand gestures, and a particular functionality is triggered accordingly. This project presents a virtual keyboard, a calculator, and control of the system's volume through hand gesture recognition, implemented in Python with the OpenCV library.
3
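A minimal sketch of the hand-skeleton step this abstract describes, assuming MediaPipe's Hands solution and an OpenCV capture loop; the gesture-to-action mapping (keyboard, calculator, volume) is application logic and is only indicated by a comment:

```python
import cv2
import mediapipe as mp

# MediaPipe detects 21 hand landmarks per frame from the PC camera.
hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
drawer = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)  # default PC camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV captures BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        for hand in results.multi_hand_landmarks:
            drawer.draw_landmarks(frame, hand, mp.solutions.hands.HAND_CONNECTIONS)
            tip = hand.landmark[8]  # index fingertip, normalized (x, y, z)
            # A full system would map landmark positions like `tip` to
            # virtual key presses, calculator buttons, or a volume level.
    cv2.imshow("hand skeleton", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```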
Abstract: Deaf or mute persons frequently use sign language to communicate, but it takes a lot of practice to learn. Sign language is the principal mode of communication of the Hard-of-Hearing and Deaf community. Autism, apraxia of speech, cerebral palsy, and Down syndrome are just a few of the conditions that may benefit from sign language. We use ASL (American Sign Language) for this project. Although ASL uses the same alphabet as English, it is not a dialect of English: American Sign Language is a separate language with its own linguistic framework, and signs are not expressed in the same order as words are in English, owing to sign language's distinct grammar and visual character. In the United States, around half a million people use ASL. In this project, we develop and implement a mobile application that serves as a translation system, enabling people to communicate more efficiently using signs. We demonstrate a real-time sign language recognition system that uses existing datasets to transform a video of a user's signs into text.
11
Abstract: There is a communication gap between deaf-mute and hearing people. To overcome that, we provide information access and services to deaf-mute people in Indian Sign Language (ISL) and develop a flexible project that can be extended to capture the entire lexicon of Indian Sign Language, covering manual gestures such as hand expressions and non-manual signs such as facial expressions, by building a training model using machine learning algorithms. Sign language recognition uses image-based manual and non-manual gestures. Here we use gesture recognition to identify manual and non-manual gestures, locating expression gestures and analyzing finger movements to determine what the deaf-mute individual is saying. In Python, MediaPipe recognizes a person's hand signs and facial gestures. These modules were developed to assist people whose gestures vary from person to person. This paper presents recognition of Indian Sign Language via hand and facial gestures, as well as its integration with a chatbot that produces transcript output.
20
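A minimal sketch of how manual and non-manual gestures could be captured together, assuming MediaPipe's Holistic solution; the trained sign classifier and the chatbot integration the abstract mentions are not shown:

```python
import cv2
import mediapipe as mp

# Holistic tracks hands and face together, covering manual and
# non-manual (facial) gestures in a single pass.
holistic = mp.solutions.holistic.Holistic(min_detection_confidence=0.6)

cap = cv2.VideoCapture(0)
for _ in range(300):  # a real app would loop until the user exits
    ok, frame = cap.read()
    if not ok:
        break
    results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    # Flatten hand and face landmarks into one feature vector per frame.
    features = []
    for group in (results.left_hand_landmarks, results.right_hand_landmarks,
                  results.face_landmarks):
        if group:
            features.extend((p.x, p.y, p.z) for p in group.landmark)
    # A trained model would map `features` to an ISL sign here, and the
    # predicted sign would be forwarded to the chatbot as transcript text.
cap.release()
```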
Abstract: Robotics is a field that has actively been working to reduce the involvement of humans in dangerous environments by automating tasks. In this paper, we propose a method to remotely control a humanoid torso through teleoperation. The humanoid torso replicates the pose of the person controlling it remotely by detecting the pose of their arms from an RGB camera input in real time, using computer vision techniques based on machine learning algorithms. From the detected pose, the humanoid's joints (shoulders and elbows) are positioned to replicate the pose of the person controlling it. This is achieved by mapping the positions of the operator's joints to a set of equations using vector algebra. Such a system ensures not only that movements are oriented toward the end-effector reaching the desired location, but also that the position of every part of the robot can be controlled to move in the required manner. This level of control eliminates the complexities of collision detection in teleoperated robotic systems and broadens the range of applications in which such a system can be used efficiently.
28
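The joint-mapping idea can be illustrated with a small vector-algebra sketch; the helper below is hypothetical, not the paper's actual equations, and computes the angle at a joint from three 3-D pose landmarks via the dot product:

```python
import numpy as np

def joint_angle(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> float:
    """Angle at point b (radians) between segments b->a and b->c."""
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# Example: shoulder, elbow, and wrist positions from a pose detector give
# the elbow joint angle to command on the humanoid torso.
shoulder = np.array([0.0, 0.0, 0.0])
elbow = np.array([0.0, -0.3, 0.0])
wrist = np.array([0.25, -0.3, 0.0])
print(np.degrees(joint_angle(shoulder, elbow, wrist)))  # 90.0
```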
Abstract: Brain tumors develop as a result of unregulated and rapid cell proliferation. Untreated in the early stages, they may result in death. The imaging technology used to diagnose brain tumors is magnetic resonance imaging (MRI). Early detection of brain tumors is critical in medical practice in order to determine whether a tumor will progress to malignancy. Deep learning is a useful and effective method for image classification. It has been widely used in a variety of sectors, including medical imaging, because its application does not require subject-matter expertise, although it does require a large amount and variety of data to produce accurate classification results. The deep learning technique for image classification is the convolutional neural network (CNN). In this research work, two different models are used to categorize brain tumors, and their results were evaluated using performance metrics such as accuracy and precision; the results were impressive.
37
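A minimal sketch of a CNN classifier of the kind the abstract describes, assuming 128x128 grayscale MRI slices and a binary tumor/no-tumor label; the paper's two actual models and their hyperparameters are not specified here:

```python
import tensorflow as tf

# Small convolutional stack: feature extraction, then a dense classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 1)),     # grayscale MRI slice
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # tumor vs. no tumor
])
# Accuracy and precision, the metrics the paper reports on.
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Precision()])
```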
Abstract: Emotions play an important role in human life. Extracting human emotions matters because they convey nonverbal communication cues that play an important role in interpersonal relations. In recent years, facial emotion detection has received massive attention, and many businesses already utilize this technology to get real-time analytics and feedback from customers to help their business grow. Currently, we have to find playlists manually according to our mood, which is time-consuming and stressful. This project therefore automates and simplifies the process by proposing a recommendation system for emotion recognition that detects users' emotions and suggests playlists that can improve their mood. The proposed recommender system is implemented using a Caffe model to detect faces and an MLP classifier to detect facial emotions, trained on the KDEF dataset.
44
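A minimal sketch of the face-detection stage, assuming OpenCV's DNN module with the common Caffe SSD face detector (the file names below are placeholders); the MLP emotion classifier trained on KDEF and the playlist lookup are left as comments:

```python
import cv2
import numpy as np

# Placeholder file names for the Caffe face-detection model.
net = cv2.dnn.readNetFromCaffe("deploy.prototxt", "res10_300x300_ssd.caffemodel")

def detect_face(image: np.ndarray):
    """Return the highest-confidence face crop, or None if no face is found."""
    h, w = image.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)), 1.0,
                                 (300, 300), (104.0, 177.0, 123.0))
    net.setInput(blob)
    detections = net.forward()
    best = detections[0, 0, 0]  # first (highest-confidence) detection
    if best[2] < 0.5:           # confidence threshold
        return None
    x1, y1, x2, y2 = (best[3:7] * np.array([w, h, w, h])).astype(int)
    return image[y1:y2, x1:x2]

# face crop -> feature vector -> mlp.predict(...) -> emotion label
# -> playlist suggestion for that emotion (recommendation step omitted).
```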
Abstract: As a biosafety precaution, the World Health Organization (WHO) introduced the wearing of face masks after the COVID-19 epidemic. This posed challenges to existing facial recognition systems, from which this study was born. In this publication, we describe how to create a system that identifies people from images even when they wear a mask. The face detector in OpenCV is used in conjunction with a classification model based on the MobileNetV2 architecture; in this way, it is possible to determine whether a face is wearing a mask and where it is situated. To conduct face recognition, a FaceNet model is used as a feature extractor, and a multilayer feedforward perceptron is trained for facial recognition on a collection of more than 4,000 photographs, of which 52.9 percent showed a face mask and 47.1 percent were without a mask. The test outcomes demonstrate that determining whether or not someone is wearing a mask is 99.65% accurate. Face recognition accuracy for ten people wearing masks is 99.52 percent, whereas face recognition accuracy without masks is 99.96 percent.
53
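A minimal sketch of the mask-classification half of the system: a MobileNetV2 base with a small classification head, as the abstract outlines. The FaceNet embedding and MLP recognition stage, and all trained weights, are omitted; head sizes are assumptions:

```python
import tensorflow as tf

# Pretrained MobileNetV2 backbone; only the new head is trained.
base = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3),
                                         include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # mask vs. no mask
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```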
Abstract: Pharmaceutical industries remain very profitable, but defects in medicine vials cause losses and add extra overhead in quality management. To minimize these losses and overheads, companies need new ways of performing quality management for every vial produced. This paper presents a method for finding cracks on vials using an agglomerative clustering technique. The technique successfully detects all types of cracks on the vials. The algorithm achieved 100% accuracy in detecting cracks on pharmaceutical vials and has potential application in quality control in the pharmaceutical industry.
60
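A minimal sketch of how agglomerative clustering could group candidate crack pixels on a vial image; the paper's preprocessing and parameters are not given, so the Canny thresholds, linkage distance, and segment-length cutoff below are assumptions:

```python
import cv2
import numpy as np
from sklearn.cluster import AgglomerativeClustering

image = cv2.imread("vial.png", cv2.IMREAD_GRAYSCALE)  # assumed file name
edges = cv2.Canny(image, 50, 150)
points = np.column_stack(np.nonzero(edges))  # (row, col) of edge pixels

if len(points) > 1:
    # Single-linkage merging chains nearby edge pixels into connected
    # segments without fixing the number of clusters in advance.
    clustering = AgglomerativeClustering(n_clusters=None,
                                         distance_threshold=5.0,
                                         linkage="single").fit(points)
    for label in range(clustering.n_clusters_):
        segment = points[clustering.labels_ == label]
        if len(segment) > 30:  # long connected segments flagged as cracks
            print(f"possible crack segment: {len(segment)} pixels")
```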
Abstract: As a potential medium for informing blind people, the project transforms the visual world into an audio world. Obstacle detection is among the most widely used fields in computer vision, with incredible achievements. Obstacles and text can be conveyed to visually impaired people: obstacles detected in the frame are identified by name and converted to speech. The image-to-text framework is an advanced innovation used to extract the text in an image, which is extremely helpful for reading any content. The system produces the voice output of the input text by evaluating the adjacency and the frequency of occurrence of the words. It is also used to find the distance to an object the user wants. This application is compact, helpful to an economical society, and an efficient assistant for visually impaired people, which could ultimately increase users' confidence and make them feel secure.
72
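A minimal sketch of the text-reading path, assuming pytesseract for OCR and pyttsx3 for speech output as stand-ins for the paper's unnamed components; the obstacle-detection and distance-estimation stages are omitted:

```python
import cv2
import pytesseract
import pyttsx3

# In the real system this would be a live camera frame; here, a saved image.
frame = cv2.imread("scene.jpg")
# Tesseract OCR extracts any text visible in the (grayscale) frame.
text = pytesseract.image_to_string(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))

if text.strip():
    engine = pyttsx3.init()
    engine.say(text)       # read the recognized text aloud to the user
    engine.runAndWait()
```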