Advances in Science and Technology Vol. 124
Title:
Proceedings: IoT, Cloud and Data Science
Subtitle:
Selected peer-reviewed full text papers from the International Research Conference on IoT, Cloud and Data Science (IRCICD'22)
Edited by:
Dr. S. Prasanna Devi, Dr. G. Paavai Anand, Dr. M. Durgadevi, Dr. Golda Dilip and Dr. S. Kannadhasan
ToC:
Paper Title Page
Abstract: Access to medical attention is critical to living a healthy life, yet seeking help for a health concern can be difficult. The idea is to build a medical chatbot that uses AI and other biometric parameters to assess symptoms and return a list of illnesses the user might have. In medical diagnosis, artificial intelligence supports decision making, management, automation, administration, and workflows. It can be used to diagnose cancer, triage critical findings in medical imaging, flag acute abnormalities, help radiologists prioritize life-threatening cases, diagnose cardiac arrhythmias, predict stroke outcomes, and aid in chronic disease management. Medical chatbots were created to lower medical costs and widen access to medical information. Some act as medical guides, helping patients become more aware of their ailment and improve their overall health. Users benefit from chatbots that can identify a variety of illnesses and provide the information needed to understand the predicament they might be facing. The main idea is to create a preliminary-diagnosis chatbot that allows patients to participate in medical research and provides a customized analysis report based on their symptoms.
335
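The abstract does not specify how symptoms are matched to illnesses; as a minimal, purely illustrative sketch of symptom-based preliminary triage, the snippet below scores a hypothetical condition/symptom table against user-reported symptoms. The table and scoring rule are assumptions, not the authors' model.

```python
# Hypothetical symptom-to-condition table and overlap score;
# illustrative only, not the paper's diagnostic model.
CONDITIONS = {
    "common cold": {"cough", "sneezing", "runny nose", "sore throat"},
    "migraine": {"headache", "nausea", "light sensitivity"},
    "gastritis": {"stomach pain", "nausea", "bloating"},
}

def rank_conditions(reported_symptoms):
    """Return conditions ranked by overlap with the reported symptoms."""
    reported = {s.strip().lower() for s in reported_symptoms}
    scores = {
        name: len(reported & symptoms) / len(symptoms)
        for name, symptoms in CONDITIONS.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    print(rank_conditions(["headache", "nausea"]))
```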
Abstract: People around the world use social media to communicate and share their perceptions of a variety of topics, and social media analysis is crucial to understanding the opinions people state there. Investigating such textual data helps governments and organizations act on alarming issues more quickly. The key purpose of this work is to perform sentiment analysis of textual data regarding the National Eligibility-cum-Entrance Test (NEET), classify the posts, and determine how people feel about NEET. In this study, 11 different machine learning classifiers were used to analyse tweet sentiment, together with natural language processing (NLP). Tweepy, a Python library, is used to collect user opinions about the NEET exam, and the data are annotated with TextBlob and VADER. The text is pre-processed with the Natural Language Toolkit. On the dataset downloaded from Twitter, unigram models perform better than bigram and trigram models, and TF-IDF features are more accurate than the frequency-based count vectorizer. The best classifier achieves an average accuracy of 92%, and the Perceptron also reaches a high average accuracy of 91%. According to the experimental data, most people hold a neutral opinion of NEET.
344
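The pipeline described above (Tweepy collection, VADER/TextBlob annotation, TF-IDF n-gram features, a bank of classifiers) can be sketched roughly as follows; the placeholder tweets and the choice of logistic regression are illustrative assumptions, not the paper's exact configuration.

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Placeholder tweets; in the study they are collected with Tweepy.
tweets = [
    "NEET results were announced today and the process felt fair",
    "The NEET exam schedule keeps changing, very frustrating",
    "Attended the NEET counselling session in Chennai",
    "Proud of my sister for clearing NEET, great news",
    "Another postponement of NEET is really disappointing",
    "NEET application forms are now available online",
]

# Label each tweet from VADER's compound score (TextBlob could be used similarly).
analyzer = SentimentIntensityAnalyzer()
def label(text):
    c = analyzer.polarity_scores(text)["compound"]
    return "positive" if c >= 0.05 else "negative" if c <= -0.05 else "neutral"
labels = [label(t) for t in tweets]

# Unigram TF-IDF features + one of the candidate classifiers.
X = TfidfVectorizer(ngram_range=(1, 1)).fit_transform(tweets)
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X[:2]))  # a real evaluation would use a held-out test set
```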
Abstract: With the explosion of unstructured textual data circulating in the digital space, there is a growing need for tools that perform automatic text summarization, allowing people to easily gain insights and extract significant, essential information. Text summarization tools improve the readability of documents and reduce the time spent searching for information. In this project, extractive summarization is performed on text recognized from scanned documents via Optical Character Recognition (OCR), using the TextRank algorithm, an unsupervised extractive text summarization technique.
355
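As a rough illustration of the TextRank step (the OCR front end, for example pytesseract, is assumed and omitted here), sentences can be ranked with PageRank over a TF-IDF cosine-similarity graph:

```python
# Minimal TextRank-style extractive summarizer: sentences are graph nodes,
# TF-IDF cosine similarity gives edge weights, PageRank picks top sentences.
import re
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def textrank_summary(text, n_sentences=2):
    # Naive sentence split; a real pipeline would use nltk.sent_tokenize.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    if len(sentences) <= n_sentences:
        return text
    tfidf = TfidfVectorizer().fit_transform(sentences)
    sim = cosine_similarity(tfidf)
    scores = nx.pagerank(nx.from_numpy_array(sim))
    ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
    chosen = sorted(ranked[:n_sentences])  # keep original sentence order
    return " ".join(sentences[i] for i in chosen)
```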
Abstract: In this project, we build a fraud-checking tool that detects fake job postings using NLP (Natural Language Processing) and ML approaches (Random Forest, Logistic Regression, Support Vector Machine, and XGBoost classifiers). These approaches are compared and then combined into an ensemble model that powers the job detector. The aim is to predict whether a posting is real or fake with the highest possible accuracy. The dataset is analysed with supervised machine learning techniques (SMLT), covering variable identification, missing-value handling, and data validation. Data cleaning, preparation, and visualization are performed on the entire dataset. The final ensemble model combines XGBoost, SVM, Logistic Regression, and Random Forest classifiers using the four best contributing features, and is deployed in a Flask application for demonstration.
362
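A minimal sketch of such an ensemble over TF-IDF features is shown below. Soft voting is assumed as the combination rule, and the tiny placeholder postings stand in for the cleaned dataset and selected features described in the paper.

```python
# Soft-voting ensemble of the four classifiers named above, on TF-IDF text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from xgboost import XGBClassifier  # external dependency

texts = [
    "Earn $5000 per week from home, no experience needed!!!",
    "Pay a small registration fee and start immediately",
    "Work 1 hour a day, guaranteed income, send your bank details",
    "Limited slots! Instant hiring, wire transfer required",
    "Backend engineer, 3+ years Python, on-site interviews in Bangalore",
    "Registered nurse position at city hospital, full benefits",
    "Accountant needed, CPA preferred, standard application process",
    "Graduate trainee program, campus placement via HR department",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = fraudulent, 0 = genuine

ensemble = make_pipeline(
    TfidfVectorizer(),
    VotingClassifier(
        estimators=[
            ("lr", LogisticRegression(max_iter=1000)),
            ("rf", RandomForestClassifier(n_estimators=100)),
            ("svm", SVC(probability=True)),
            ("xgb", XGBClassifier(eval_metric="logloss")),
        ],
        voting="soft",
    ),
)
ensemble.fit(texts, labels)
print(ensemble.predict(["Immediate joining, pay processing fee first"]))
```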
Abstract: During the epidemic, managing the flow of large numbers of patients seeking consultation has been a major challenge for hospitals and healthcare workers, and contacting a doctor has become harder, especially in rural areas. Well-designed and well-operated chatbots can genuinely help patients by recommending precautionary measures and remedies and by preventing the harm caused by worry. This paper describes the development of an artificial intelligence (AI) chatbot that advises prompt action when users need to see a doctor; acting as a virtual assistant, it can also suggest which kind of doctor to consult.
370
Abstract: Scientific data available on the internet is rarely labelled; most popular research-paper repositories contain papers without any annotation for grouping them. Classification of text via words, sentences, and even paragraphs has become a key resource for many industries seeking to help their computers understand human language, the next stage in artificial intelligence. Using computational linguistics techniques, some industrial applications have streamlined their processes to interpret language data effectively and efficiently. Continuing this trend, this paper aims to cluster scientific research papers into topic-based groups as efficiently as possible. Using several algorithms that have transformed the field in recent years, we process over 800,000 scientific research articles spanning more than 200 domains and predict a domain for each article. We use clustering techniques such as the K-Means algorithm to derive topics for these papers with an accuracy of nearly 80%. We also use BERT to create topic clusters that generate topics based on frequently occurring contexts within the text, along with derivative algorithms that address specific, niche issues BERT does not account for. Finally, we fine-tune the algorithms' parameters to generate over 50 stronger topics that more accurately characterize the articles.
378
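The K-Means-over-BERT-embeddings idea can be sketched as below. The specific encoder name, cluster count, and sample abstracts are illustrative assumptions rather than the paper's configuration.

```python
# Cluster paper abstracts: BERT-style sentence embeddings + K-Means.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

abstracts = [
    "A convolutional network for chest X-ray classification.",
    "Transformer models for low-resource machine translation.",
    "Graph neural networks for molecular property prediction.",
    "Fine-tuning BERT for legal document retrieval.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder choice
embeddings = model.encode(abstracts)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=42).fit(embeddings)
for text, label in zip(abstracts, kmeans.labels_):
    print(label, text)
```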
Abstract: The speech accent archive demonstrates that accents are systematic rather than merely mistaken speech. This project detects the demographic and linguistic backgrounds of speakers by comparing different speech samples with the speech accent archive dataset to work out which variables are key predictors of each accent. Given a recording of a speaker reading a known script of English words, the project predicts the speaker's native language. The aim is to classify various kinds of accents, specifically foreign accents, by the language of the speaker, thereby revealing each individual's background from their speech.
392
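The abstract does not name the features or model used; one common baseline for this kind of accent classification, sketched below under that assumption, is to summarize each clip as a mean MFCC vector and feed the vectors to a standard classifier. File paths and labels are placeholders.

```python
# Fixed-length MFCC features per recording, then any standard classifier.
import numpy as np
import librosa
from sklearn.svm import SVC

def mfcc_features(path, n_mfcc=13):
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)  # average over time -> shape (n_mfcc,)

# Hypothetical usage with speech accent archive style recordings:
# clips = [("rec_hindi_01.wav", "hindi"), ("rec_french_01.wav", "french")]
# X = np.vstack([mfcc_features(p) for p, _ in clips])
# y = [lang for _, lang in clips]
# SVC(kernel="rbf").fit(X, y)
```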
Abstract: With the large number of movies being released, users get confused about which movie suits them and find it difficult to choose. A recommendation system makes this easier: when the user searches for a movie, it returns accurate results together with several similar suggestions. The suggestions are driven by the user's search, for example by genre (action, romance, crime, drama) or by director, so that similar movies are proposed. Recommendation systems suggest movies based on the user's previous choices, and sentiment analysis helps analyse users' sentiments about those choices. In recent years, sentiment analysis has become one of the most important components of many recommendation systems.
398
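The genre/director-based suggestion described above can be sketched as simple content-based similarity; the titles and metadata below are placeholders for a real movie dataset, and cosine similarity over TF-IDF is an assumed, not stated, similarity measure.

```python
# Content-based suggestions: movies nearest to the searched title by
# genre/director text similarity.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

movies = pd.DataFrame({
    "title":    ["Heat", "Se7en", "Notting Hill", "Collateral"],
    "features": ["action crime drama michael mann",
                 "crime drama thriller david fincher",
                 "romance comedy roger michell",
                 "action crime thriller michael mann"],
})

tfidf = TfidfVectorizer().fit_transform(movies["features"])
similarity = cosine_similarity(tfidf)

def recommend(title, top_n=2):
    idx = movies.index[movies["title"] == title][0]
    ranked = similarity[idx].argsort()[::-1]
    ranked = [i for i in ranked if i != idx][:top_n]
    return movies["title"].iloc[ranked].tolist()

print(recommend("Heat"))  # nearest movies by genre/director overlap
```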
Abstract: Stock analysis and forecasting is a very challenging task owing to the unpredictable and volatile market environment. Price patterns are often unique because they are influenced by many uncertainties, such as companies' financial results (earnings per share), risky transactions, market sentiment, government policies, and conditions such as epidemics. Despite these challenges, our goal is to predict accurate values over a short span of the dataset. In this paper we compare and analyse ML models for predicting the closing price over the next few days, using three to four months of NIFTY 50 Indian stock data from Yahoo Finance. Five models are involved in this analysis: Linear Regression (LR), Decision Tree (DT), Support Vector Regression (SVR), SARIMAX (Seasonal AutoRegressive Integrated Moving Average with eXogenous factors), and the Gated Recurrent Unit (GRU, a deep learning model). Performance is measured with RMSE (Root Mean Squared Error), MAE (Mean Absolute Error), and MAPE (Mean Absolute Percentage Error). On the basis of this comparison, we conclude that GRU yields the lowest error on all three metrics and gives more accurate predictions than the other four models.
409
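A minimal sketch of the GRU forecaster and the three error metrics is given below. Synthetic prices stand in for the NIFTY 50 closes that the paper pulls from Yahoo Finance, and the window size, layer width, and epoch count are illustrative choices.

```python
# GRU on sliding windows of closing prices, evaluated with RMSE/MAE/MAPE.
import numpy as np
from sklearn.metrics import (mean_absolute_error,
                             mean_absolute_percentage_error,
                             mean_squared_error)
from tensorflow.keras.layers import GRU, Dense, Input
from tensorflow.keras.models import Sequential

prices = 100 + np.cumsum(np.random.randn(300))   # placeholder close prices
window = 10

X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]
X = X[..., None]                                  # (samples, window, 1)
split = int(0.8 * len(X))                         # chronological split

model = Sequential([Input(shape=(window, 1)), GRU(32), Dense(1)])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:split], y[:split], epochs=5, verbose=0)

pred = model.predict(X[split:], verbose=0).ravel()
print("RMSE:", np.sqrt(mean_squared_error(y[split:], pred)))
print("MAE :", mean_absolute_error(y[split:], pred))
print("MAPE:", mean_absolute_percentage_error(y[split:], pred))
```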
Abstract: Time series data and its practical applications span diverse domains: finance, medicine, environment, education, and more. Comprehensive analysis and optimized forecasting help us understand the nature of the data and better prepare for the future. Financial time series have been heavily researched in recent decades, with statistical, machine learning (ML), and deep learning (DL) models implemented to forecast the stock market and support data-informed decisions. However, these methods have not been thoroughly explored and analysed in the context of the Indian stock market. In this paper we implement and evaluate state-of-the-art statistical and machine learning methods for financial time series analysis and forecasting on Indian stock market data.
418