Department-wise Listing | NUML Online Research Repository
DEEP LEARNING FOR INTRUSION DETECTION IN IOT BASED SMART HOMES The rapid growth of the Internet of Things (IoT) has benefited fields such as healthcare units, industrial units, smart homes, and the military, making it a trending research topic. However, the emergence of IoT also brings a high risk of security violations: breaches involve different categories of attacks, illegitimate access, and other privacy risks in IoT systems. Various studies have therefore sought to mitigate cyber-attacks by configuring intrusion detection in different scenarios, but because attacks are growing at the same rate, more work is still needed. In the proposed study, a comparative analysis of anomaly-based intrusion detection systems is conducted against existing state-of-the-art studies with respect to datasets, machine learning models, and deep learning models. To overcome the limitations highlighted in existing work, the research proposes a novel solution for anomaly-based intrusion detection in IoT that increases performance, reduces overfitting/underfitting, and generalizes well. To ensure high performance across different evaluation metrics, a hybrid of machine learning and deep learning models (LSTM, KNN, and DT) was implemented on the real-time dataset CIC-IDS-IOT2022. To avoid underfitting/overfitting, feature selection and hyperparameter tuning were applied, and the same solution was also tested on the benchmark dataset UNSW-NB15 to check its impact. Google Colab and Python were used as the platform and language. Experimental results showed a significant increase in performance while minimizing misclassification and other limitations in comparison with state-of-the-art solutions. Incorporating more datasets and hybridizing other ML/DL algorithms inspired by the proposed solution in a real-time IoT IDS network is a future research goal.
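The abstract does not give implementation details; the following is a minimal sketch of one way such an LSTM + KNN + DT hybrid with feature selection and hyperparameter tuning could be assembled, assuming a preprocessed numeric feature matrix and binary labels exported from the CIC-IDS-IOT2022 traffic (file names, layer sizes, and parameter grids are illustrative, not the thesis' configuration).

```python
# Hedged sketch: one possible LSTM + KNN + DT hybrid for anomaly-based IDS,
# assuming preprocessed features X and binary labels y (placeholder file names).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from tensorflow.keras import layers, models

X, y = np.load("features.npy"), np.load("labels.npy")        # placeholder inputs

# 1) Feature selection to curb over/underfitting
X_sel = SelectKBest(f_classif, k=20).fit_transform(X, y)
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.2, stratify=y)

# 2) LSTM branch, also reused as a deep feature extractor
inp = layers.Input(shape=(X_tr.shape[1],))
h = layers.Reshape((X_tr.shape[1], 1))(inp)
h = layers.LSTM(32)(h)
feat = layers.Dense(16, activation="relu")(h)
out = layers.Dense(1, activation="sigmoid")(feat)
lstm = models.Model(inp, out)
extractor = models.Model(inp, feat)
lstm.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
lstm.fit(X_tr, y_tr, epochs=5, batch_size=256, verbose=0)

# 3) Hyperparameter-tuned KNN and DT on the LSTM-derived representation
deep_tr, deep_te = extractor.predict(X_tr), extractor.predict(X_te)
knn = GridSearchCV(KNeighborsClassifier(), {"n_neighbors": [3, 5, 7]}).fit(deep_tr, y_tr)
dt = GridSearchCV(DecisionTreeClassifier(), {"max_depth": [5, 10, None]}).fit(deep_tr, y_tr)

# 4) Simple majority vote across the three learners
votes = (lstm.predict(X_te).ravel() > 0.5).astype(int) + knn.predict(deep_te) + dt.predict(deep_te)
print("hybrid accuracy:", np.mean((votes >= 2) == y_te))
```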
DETECTION OF LUNG NODULES AND CLASSIFICATION USING DEEP LEARNING NETWORK Lung cancer is one of the leading causes of cancer-related deaths worldwide, and the presence of lung nodules helps to detect it. Lung nodules are mostly small, rounded, spherical masses of vessels or tissue in the lung region. Accurate detection and classification of pulmonary nodules in computed tomography (CT) scan images is a major and complex problem: the nodules vary in size and shape, are often interlinked, and, because of their size and location, are difficult to spot with the naked eye in CT images. To address this problem, many researchers have applied machine learning and computer vision techniques, but these studies mostly overlook small nodules and also suffer from a high false-positive ratio, which greatly affects system accuracy. In this study, a deep learning model based on VGG16 is developed for accurate detection and classification of pulmonary nodules, considering nodules of all sizes from small to large. The false-positive ratio is also improved by using unbiased data. VGG16 is a widely used convolutional architecture with strong performance on image classification tasks. The technique involves lung nodule image acquisition, image preprocessing, data augmentation, and feature extraction using a deep convolutional neural network, after which the deep CNN model is trained and pulmonary nodules are classified as cancerous or non-cancerous. The work is tested and evaluated on the openly available LIDC-IDRI dataset. Experiments show that the VGG16-based DCNN achieves an accuracy of 93.55%, recall of 93.54%, and precision of 87.15%, better than the 91.60% accuracy reported by a previous study in this domain.
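As a rough illustration of the pipeline described above (preprocessing, augmentation, VGG16 feature extraction, binary classification), here is a minimal transfer-learning sketch, assuming CT nodule patches have already been exported as 224x224 images under directories named data/train and data/val (the directory layout and head architecture are assumptions, not the thesis' setup).

```python
# Hedged sketch of VGG16-based transfer learning for cancerous vs. non-cancerous
# nodule patches; directory names and hyperparameters are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(224, 224), batch_size=32, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=(224, 224), batch_size=32, label_mode="binary")

# Frozen ImageNet backbone; only the classification head is trained.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    layers.Rescaling(1.0 / 255),
    layers.RandomFlip("horizontal"),          # light data augmentation
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),    # cancerous vs. non-cancerous
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```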
ANONYMITY ASSURANCE USING EFFICIENT PSEUDONYM CONSUMPTION IN INTERNET OF VEHICLES The Internet of Vehicles (IoV) is an emerging technology that allows vehicles to communicate with other vehicles, pedestrians, on-board units (OBUs), the cloud, and roadside units (RSUs) while travelling. This intelligent transport system (ITS) has increased road safety, and road accidents have decreased to a large extent. Through vehicle communication, surrounding vehicles obtain each other's location, position, and velocity to avoid accidents and road congestion. As vehicles travel, they share their position, speed, and acceleration with neighbouring vehicles through beacon messages. As this technology spreads, there is a substantial security risk that makes vehicle information vulnerable: an adversary who learns a vehicle's information can harm it or misuse that information, so vehicle security cannot be neglected. For security, vehicles change their pseudonyms, but frequent pseudonym changes alone are not enough, because an attacker can link a previous pseudonym with a new one to recover the vehicle's information. Many techniques change pseudonyms efficiently to resolve these issues and enhance vehicle anonymity, but some, such as the Cooperative Pseudonym change scheme based on the Number of neighbours (CPN) and WHISPER, have high pseudonym consumption. High pseudonym consumption degrades system performance and disturbs QoS, because pseudonyms require high system overhead and memory and must be authenticated by a certificate authority (CA). The proposed solution, EPCP, has low pseudonym consumption and reduces the traceability ratio to maintain vehicle anonymity. To check the effectiveness of the proposed scheme EPCP, OMNeT++ 5.0, SUMO 0.25.0, and PREXT built on Veins 4.4 are used. The results show that EPCP consumes fewer resources and provides better protection against adversary attacks.
EFFICIENT INCENTIVE MANAGEMENT IN REPUTATION-AWARE MOBILE CROWD SENSING The revolution in Internet of Things (IoT) technology has made crowdsourcing-based content sharing possible, such as mobile crowd sensing (MCS), which aims to collect content from mass users and share it with participants. Content sharing is especially attractive because users act as both content providers and consumers, while the shared content supports service provisioning. The main problem identified in previous research is that mobile workers (MWs) may give false reports by sharing low-quality data to reduce the effort required while still gaining reputation. Task-related false reporting can be mitigated by hiring enough MWs per task to evaluate the trustworthiness and acceptance of the information, but budget constraints limit this. Monetary rewards are used to motivate data collectors and encourage participants to take part in network activities. Since mobile workers are the main entities providing services and rewards are given based on a reputation system, MW work efficiency becomes even more important in MCS. Reputation-based incentives for mobile workers lead to a dramatic increase in service usage, motivate mobile workers, and build trust in using the service. In the underlying research, we identified that existing schemes do not consider the difficulty level of a task, so a worker can build a good reputation by performing many easy tasks, while a worker performing difficult tasks may gain a lower reputation score. We therefore propose four difficulty levels of tasks (DLT) for reputation evaluation in a crowd-sensing network, on which MW reputation is evaluated.
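The abstract names four difficulty levels but not the update rule; the sketch below shows one way difficulty-weighted reputation could behave, with weights and the update formula chosen purely for illustration (they are assumptions, not the thesis' DLT formula).

```python
# Hedged sketch of a difficulty-weighted reputation update; the weights and
# update rule are illustrative assumptions, not the thesis' actual formula.
DIFFICULTY_WEIGHT = {1: 1.0, 2: 1.5, 3: 2.0, 4: 3.0}   # DLT level -> weight

def update_reputation(reputation: float, level: int, quality: float) -> float:
    """Scale the reputation gain (or penalty) by task difficulty.

    quality in [0, 1] is the evaluated quality of the submitted report;
    values below 0.5 penalize the worker, scaled by the same weight so that
    low-quality reporting on hard tasks costs more than on easy ones.
    """
    w = DIFFICULTY_WEIGHT[level]
    return max(0.0, reputation + w * (quality - 0.5))

# Example: many easy tasks vs. a few hard ones
rep_easy = 0.0
for _ in range(6):
    rep_easy = update_reputation(rep_easy, level=1, quality=0.8)   # 6 easy tasks
rep_hard = 0.0
for _ in range(2):
    rep_hard = update_reputation(rep_hard, level=4, quality=0.8)   # 2 hard tasks
print(rep_easy, rep_hard)   # the two hard tasks match the six easy ones
```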
Efficient Management of Excessive Messaging for Emergency Scenarios in Internet of Vehicles Due to its vast range of applications in numerous fields, the Internet of Vehicles (IoV) has recently attracted a lot of attention. To supply various services, these applications rely on up-to-date vehicle information. Constant message broadcasts by many vehicles, however, may not only overwhelm a centralized server but also produce traffic that is incompatible with continuous service, particularly in emergencies. Vehicles communicate and send messages for better conveyance; some messages inform other vehicles about accidents or other dangerous situations. In prior work, vehicles relayed messages only to one-hop neighbours. There is a high probability that such a vehicle is not connected to an RSU, so emergency messages cannot be forwarded to other vehicles, and vehicles in that area remain unaware of an emergency. Efficient Management of Excessive Messaging (EEMS) for emergency scenarios in the Internet of Vehicles, a fog-assisted congestion avoidance strategy for IoV, is presented in this study. Previous work used only one-hop neighbours to find the closest RSU for relaying an emergency message; if no one-hop neighbour discovers the RSU, the message is likely to be lost and not delivered to other vehicles, making an emergency extremely dangerous. By extending the search from one hop to two or three hops, the RSU can be located more easily and emergency signals can be sent to the whole network (a sketch follows this entry). Unlike most previous systems, EEMS exploits fog computing to reduce communication cost, congestion, and message delay, and to improve control services. Every vehicle communicates with a fog server at regular intervals, either directly or through intermediary nodes. In an emergency, the fog server alerts approaching traffic to slow down, deploys rescue teams to offer assistance, and organizes patrol operations to clear the route. NS-2.35 simulations are used to validate the proposed scheme's performance. In terms of delay and communication cost, simulation findings show that EEMS outperforms current methods.
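The multi-hop RSU search can be pictured as a bounded breadth-first search over the V2V connectivity graph; the snippet below is a toy illustration of that idea only, with a made-up connectivity snapshot and hop limit (EEMS' actual relay selection is not specified in the abstract).

```python
# Hedged sketch: bounded BFS from a source vehicle to the nearest RSU.
from collections import deque

def find_rsu_path(graph: dict, source: str, max_hops: int = 3):
    """Return the shortest relay path (<= max_hops) from source to any RSU node."""
    queue = deque([(source, [source])])
    visited = {source}
    while queue:
        node, path = queue.popleft()
        if len(path) - 1 > max_hops:
            continue
        if node.startswith("RSU"):
            return path                       # first RSU reached = fewest hops
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append((neighbour, path + [neighbour]))
    return None                               # no RSU reachable within max_hops

# Toy connectivity snapshot: V1 has no RSU within one hop but reaches one via V2 -> V3
graph = {"V1": ["V2"], "V2": ["V1", "V3"], "V3": ["V2", "RSU-7"], "RSU-7": ["V3"]}
print(find_rsu_path(graph, "V1", max_hops=1))   # None: one-hop search fails
print(find_rsu_path(graph, "V1", max_hops=3))   # ['V1', 'V2', 'V3', 'RSU-7']
```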
DISSEMINATION OF EMERGENCY MESSAGES USING BEACONLESS APPROACH IN INTERNET OF VEHICLES The future transportation system demands an intelligent traffic system that creates a network of vehicles, the Internet of Vehicles (IoV), achieved by connecting groups of vehicles to the Internet of Things (IoT). The abilities of IoV must be utilized efficiently and effectively to meet the requirements of current traffic situations, and it is mandatory to monitor, manage, and track the connected vehicles in an IoV network. During accidents and alarming situations, delivering accurate information in a timely manner is the first priority. In this context, the Vehicular Ad hoc Network (VANET) plays a vital role as an emerging technology deployed to reduce the risk of road accidents and to improve passenger comfort, and the exchange of emergency messages through vehicle communication is important for safety-related applications. However, dissemination of Emergency Messages (EM) is a major concern, since it gives rise to issues such as broadcast storms and unwanted duplication that cause packet loss and poor system throughput. For this purpose, BEMD is proposed, in which a fuzzy logic decision-making tool evaluates the rebroadcast probability of a packet for vehicle-to-vehicle (V2V) communication. A feedback mechanism that utilizes the currently available resources in the network is added to acknowledge that the emergency packet has been received by the vehicles. BEMD is compared with past schemes under the NS-2.3 and MOVE simulators, and it performs well in terms of traffic reachability, saved rebroadcasts, and average service delay.
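To make the fuzzy rebroadcast decision concrete, the following is a small self-contained sketch using two assumed inputs (distance to the last sender and local neighbour density); the membership shapes, rule table, and defuzzification are illustrative, not BEMD's actual design.

```python
# Hedged sketch of a fuzzy rebroadcast-probability decision for V2V dissemination.
def tri(x, a, b, c):
    """Triangular membership function over [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def rebroadcast_probability(distance_m: float, neighbours: int) -> float:
    # Fuzzify the inputs
    far = tri(distance_m, 100, 250, 400)
    near = tri(distance_m, -1, 0, 150)
    dense = tri(neighbours, 10, 30, 60)
    sparse = tri(neighbours, -1, 0, 15)
    # Rule strengths: far/sparse vehicles should rebroadcast, near/dense should not
    high = max(min(far, sparse), far * 0.5)
    low = max(min(near, dense), dense * 0.5)
    # Weighted-average defuzzification to a probability in [0, 1]
    if high + low == 0:
        return 0.5
    return (high * 0.9 + low * 0.1) / (high + low)

print(rebroadcast_probability(distance_m=300, neighbours=5))   # edge vehicle: high
print(rebroadcast_probability(distance_m=40, neighbours=45))   # crowded centre: low
```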
TRUTH DISCOVERY FOR MOBILE WORKERS IN EDGE-ASSISTED MOBILE CROWDSENSING The proliferation of mobile phones has led to the rise of mobile crowdsensing systems. However, many of these systems rely on the deep cloud, which can be complex and challenging to scale. To improve the performance of crowdsensing at the edge cloud, truth-discovery methods are commonly employed. These methods typically update either the truth or the weight associated with a user's task. While some edge cloud-based crowdsensing systems exist, they do not provide incentives to users based on their experience. In this work, we present a new approach to truth discovery and incentive allocation that considers both a user's experience and the accuracy of their submitted data. Our modified truth-discovery algorithm updates the weight and the truth concurrently, with greater incentives offered to users who have completed more tasks and whose submitted data is close to the estimated truth. We have conducted simulations to demonstrate the effectiveness of the proposed solution in improving the incentive mechanism for experienced users.
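A standard way to alternate truth and weight updates is the CRH-style loop sketched below; the data, the weight formula, and especially the experience-based incentive at the end are illustrative assumptions rather than the thesis' exact algorithm.

```python
# Hedged sketch of iterative truth discovery with an experience-aware incentive.
import numpy as np

obs = np.array([[20.1, 19.8, 25.0],      # rows: workers, cols: sensing tasks
                [20.3, 20.0, 24.6],
                [18.0, 22.5, 30.0]])     # worker 2 is noticeably noisy
tasks_done = np.array([50, 30, 5])       # past experience per worker

weights = np.ones(len(obs))
for _ in range(20):
    # Truth update: weighted average of worker observations per task
    truth = weights @ obs / weights.sum()
    # Weight update: workers far from the estimated truth get lower weight
    dist = ((obs - truth) ** 2).sum(axis=1) + 1e-9
    weights = np.log(dist.sum() / dist)

# Incentive grows with both reliability (weight) and experience (tasks done)
incentive = weights * np.log1p(tasks_done)
print("estimated truths:", np.round(truth, 2))
print("incentives:", np.round(incentive, 2))
```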
Queue assisted congestion avoidance scheme using software defined networks The Internet of Things has gained popularity in recent decades and involves a significant amount of data. Utilizing and storing large amounts of data properly demands control, as a substantial amount of data is transmitted over the network; data packets may be dropped in transit and buffer delays may occur, and continuously long delays in the network reduce its utility and cause congestion. To avoid this congestion, a hybrid network-managed design is introduced to fully utilize the network. The Software Defined Congestion Control Plane (SDCCP) is based entirely on software-defined networking, in which network statistics are collected by the controller and the behaviour of end hosts is shaped by modifying transport layer parameters. On top of SDCCP we propose a scheme called Feedback-based Congestion Avoidance (FCA), which maintains a short router queue length with high network utilization by automatically adjusting the congestion window according to feedback from the remote controller. This research focuses on congestion in network links, which is recognized from the flow tables of OpenFlow-enabled switches. Software-defined networking simplifies the development and implementation of new congestion control algorithms by centralizing control of the network in one entity, the controller, and this research exploits software-based networks for the further improvement and management of various network components. FCA maintains a short router queue length through threshold checks divided into small chunks: after every threshold chunk, the average queue length is determined and a decision is taken according to the residual buffer capacity (see the sketch after this entry). Due to these periodic queue-length threshold checks, the proposed IAGRED scheme performs better than previous schemes. The main contribution of the IAGRED scheme is towards reducing packet loss and buffer delays; the proposed scheme decreases packet loss and delay through early detection, as shown by the graphs in the results chapter. For example, in the delay graph, the delay at the start of packet arrival is tolerable in all three schemes, but as time increases the proposed scheme behaves better than the previous schemes.
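The following is a toy sketch of a chunked average-queue-length check with early detection in the spirit of the description above; the chunk size, thresholds, EWMA constant, and arrival pattern are all assumptions for illustration, not the thesis' parameters.

```python
# Hedged sketch of a chunked queue-length threshold check with early detection.
BUFFER_SIZE = 100
CHUNK = 10                      # re-check the average queue after every 10 packets
MIN_TH, MAX_TH = 30, 80         # early-detection thresholds on the average queue

def decide(avg_queue: float, residual: int) -> str:
    """Return the action for arriving packets after a chunk boundary."""
    if avg_queue < MIN_TH:
        return "enqueue"
    if avg_queue > MAX_TH or residual <= 0:
        return "drop"
    # Between thresholds: mark/drop early so senders shrink their congestion window
    return "mark"

avg, queue = 0.0, 0
for pkt in range(1, 201):                       # simulated arrivals
    queue = min(BUFFER_SIZE, queue + 1)
    if pkt % 7 == 0:                            # occasional departures
        queue = max(0, queue - 5)
    if pkt % CHUNK == 0:                        # chunk boundary: update the average
        avg = 0.8 * avg + 0.2 * queue           # EWMA of the instantaneous queue
        action = decide(avg, BUFFER_SIZE - queue)
        print(f"pkt={pkt:3d} avg={avg:5.1f} residual={BUFFER_SIZE - queue:3d} -> {action}")
```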
REAL TIME OBJECT LOCALIZATION IN COMPLEX INDOOR ENVIRONMENT USING HYBRID WIRELESS TECHNOLOGIES Positioning refers to locating the actual position of an object with respect to some coordinates, i.e. two-dimensional (x, y), with reference to an existing known place. Positioning is divided into two categories: indoor and outdoor. For outdoor localization, the Global Positioning System (GPS) is an existing solution, but it is not suitable for indoor environments due to obstacles such as the lack of line of sight (LOS), and existing indoor solutions are still not up to the mark. Indoor environments contain complex and varied obstacles such as furniture, the presence of people, wireless equipment, light, and other physical obstacles that attenuate and degrade the received signal strength (RSS), which affects position estimation accuracy. To address this problem, different scholars have used technologies such as Bluetooth, wireless local area networks (WLAN), and ZigBee, together with traditional and trigonometric methods as well as machine learning techniques, to minimize the error and improve position estimation accuracy. In this research, we propose a real-time positioning system based on hybrid wireless technologies using existing machine learning models, namely support vector machine (SVM), random forest (RF), and logistic regression (LR), applied to the Miskolc hybrid indoor localization dataset, which combines three wireless technologies: magnetometer, WLAN, and Bluetooth. Based on our simulation results using hybrid technologies, the machine learning models give accuracies of 83.7%, 93.5%, and 98.7% with error rates of 0.162 m, 0.065 m, and 0.012 m for logistic regression, support vector machine, and random forest, respectively. The experimental results and literature survey also validate that random forest (RF) achieves the highest accuracy of 98.7% and the lowest error rate of 0.012 m among the compared machine learning models, a remarkable improvement over previous approaches.
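The model comparison described above can be reproduced in outline with scikit-learn; the sketch below assumes the hybrid fingerprints (WLAN RSS, Bluetooth RSS, magnetometer readings) and discretised position labels are already available as arrays, with placeholder file names and hyperparameters.

```python
# Hedged sketch: comparing LR, SVM and RF on a hybrid indoor-localization dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.load("miskolc_features.npy")        # placeholder: fingerprint per sample
y = np.load("miskolc_zone_labels.npy")     # placeholder: zone/grid-cell labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(kernel="rbf", C=10),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    acc = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: accuracy={acc:.3f}")
```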
An Improved Link-Quality Based Energy-Efficient Routing for Wireless Sensor Network-based Internet of Things The Internet of Things (IoT) has gained remarkable prominence in today's era due to its wide range of applications in fields such as smart cities, transportation, home automation, and smart healthcare. In such IoT-based systems, wireless sensor networks (WSNs) play a major role. WSNs consist of autonomous, sensor-equipped intelligent devices that work together to sense and gather the data required by IoT applications. In WSN-based IoT, power consumption is a challenging issue in extending network lifetime; because of it, IoT applications suffer from power loss, delay, and shorter network lifetime. Existing routing protocols designed to address these issues lack efficient route selection, reliable data transmission, and a maximized packet delivery ratio. In designing an energy-efficient routing protocol for WSNs, optimal route selection with minimum energy depletion is a key concern. This research proposes a novel routing scheme for WSN-based IoT that achieves better application performance by reducing delay and energy depletion and improving the packet delivery ratio. The proposed protocol is named Link-Quality based Energy-Efficient Routing (LQEER) for WSN-based IoT and is composed of three major steps. The first step is network setup, where information is distributed among the nodes in the network field and the distance information of first-hop neighbours is calculated and stored in the routing table. The next step is link-quality estimation, where a standard link quality estimator is used to ensure the reliability of links between nodes before transmitting data. The third step is packet routing with energy balancing, where the neighbour information stored in the routing table is used for route planning and data forwarding. LQEER is compared with routing based on tree and geographic (RTG) and the energy-efficient optimal multi-path routing (EOMR) protocol, with NS2 used for evaluation. Experimental results show that LQEER reduces energy consumption by 30% and 25% compared to EOMR and RTG, and improves the packet delivery ratio by 25% and 21% over EOMR and RTG, respectively. Moreover, LQEER also minimizes end-to-end delay and achieves improved network lifetime compared to the EOMR and RTG protocols.
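To illustrate how the three LQEER steps could fit together at the node level, the sketch below combines a routing-table entry, an ETX-style link-quality estimate, and a next-hop score that also rewards residual energy; the scoring formula and field names are assumptions, not the protocol's specification.

```python
# Hedged sketch of link-quality- and energy-aware next-hop selection.
from dataclasses import dataclass

@dataclass
class Neighbour:
    node_id: int
    dist_to_sink: float     # from the network-setup routing table
    prr: float              # packet reception ratio measured from beacons (0..1)
    energy: float           # residual energy fraction (0..1)

def link_quality(prr_fwd: float, prr_rev: float) -> float:
    """ETX-style estimate: expected transmissions over a bidirectional link."""
    return 1.0 / max(prr_fwd * prr_rev, 1e-6)

def pick_next_hop(my_dist: float, neighbours: list[Neighbour]) -> Neighbour:
    candidates = [n for n in neighbours if n.dist_to_sink < my_dist]  # forward progress only
    # Lower ETX and higher residual energy are better; fold both into one score.
    return min(candidates, key=lambda n: link_quality(n.prr, n.prr) / max(n.energy, 1e-6))

table = [Neighbour(1, 80.0, 0.9, 0.7), Neighbour(2, 60.0, 0.6, 0.9), Neighbour(3, 95.0, 0.95, 0.8)]
print(pick_next_hop(my_dist=90.0, neighbours=table))   # node 3 is excluded (no progress)
```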
Thermal aware high throughput routing protocol for wireless body area networks The Wireless Body Area Network (WBAN) is a branch of the Wireless Sensor Network (WSN) that uses biosensors to continuously collect data about the human body. These biosensors are implanted on the human body to monitor movement, and the nodes in the network act as a group to route data packets. Routing protocols distribute information by establishing routes; however, the choice of the next-hop node for data transmission affects the packet delivery ratio (PDR) and the network's performance. Frequent use of the same node for packet transmission and unnecessary information distribution raise the temperature of the node, causing a heated-node issue that can damage human tissue. This research proposes a protocol called Thermal Aware High Throughput routing protocol for wireless body area networks (TAHT) that uses multi-hop communication. It manages the network's routes through an initial data distribution phase, estimates link quality when selecting the next hop from neighbouring nodes, estimates node temperature and suspends nodes based on a predefined threshold to control temperature rise, and continuously updates the remaining energy status to balance energy. The proposed protocol TAHT is compared with the ERRS and TAEO protocols. Simulation results show that TAHT controls temperature dissipation by 5% compared with TAEO, achieves 10% higher throughput than TAEO and 20% higher than ERRS, increases PDR by 20% through route planning compared to ERRS, and consumes 40% less energy than TAEO and 20% less than ERRS, which improves network lifetime.
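The temperature-threshold suspension described above can be pictured with a toy relay-selection loop; the heating and cooling constants, the threshold, and the selection key are illustrative assumptions, not TAHT's calibrated model.

```python
# Hedged sketch of temperature-driven node suspension in a WBAN relay choice.
TEMP_THRESHOLD = 38.5          # suspend a relay once it exceeds this temperature
HEAT_PER_PACKET = 0.05         # temperature rise per forwarded packet
COOLING_RATE = 0.01            # passive cooling per time step

class BodyNode:
    def __init__(self, node_id, link_quality, energy):
        self.node_id, self.link_quality, self.energy = node_id, link_quality, energy
        self.temperature = 37.0
        self.suspended = False

    def tick(self):
        """Cool down each round; lift suspension once safely below the threshold."""
        self.temperature = max(37.0, self.temperature - COOLING_RATE)
        if self.suspended and self.temperature < TEMP_THRESHOLD - 0.5:
            self.suspended = False

    def forward(self):
        self.temperature += HEAT_PER_PACKET
        self.energy -= 0.001
        if self.temperature >= TEMP_THRESHOLD:
            self.suspended = True            # heated node: stop using it as a relay

def select_relay(neighbours):
    usable = [n for n in neighbours if not n.suspended and n.energy > 0]
    # Prefer good links; break ties toward cooler, higher-energy nodes
    return max(usable, key=lambda n: (n.link_quality, -n.temperature, n.energy))

nodes = [BodyNode(1, 0.9, 1.0), BodyNode(2, 0.8, 1.0)]
for _ in range(100):
    select_relay(nodes).forward()
    for n in nodes:
        n.tick()
print([(n.node_id, round(n.temperature, 2), n.suspended) for n in nodes])
```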
An Improved Text-To-Gesture Generation Model Based On A Hybrid Deep Learning Approach Non-verbal cues play a pivotal role in providing human-like interaction with artificially intelligent machines such as robots and digital assistants. The goal of giving machines human-like aptness has been under research for years. Previous work generates gestures mostly from speech, and those models are subject to limitations such as speaker dependency and gesture quality. This research targets such problems and develops a gesture-generation model capable of producing quality gestures from a sequence of input text words, independent of any speaker. By altering an existing speaker-specific dataset, a new dataset containing words and gestures is prepared. Integrating a sequential Long Short-Term Memory (LSTM) algorithm into the model has improved the accuracy of the gestures in terms of the Percentage of Correct Keypoints (PCK). The experimental results and comparison with relevant schemes show that the proposed model maximizes PCK and minimizes the error rate.
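As a rough illustration of a sequential LSTM mapping text to pose keypoints, here is a minimal regressor with a PCK-style check at the end; vocabulary size, sequence length, keypoint count, and the random training pair are placeholders, not the thesis' dataset or architecture.

```python
# Hedged sketch of a text-to-gesture regressor with a PCK-style evaluation.
import numpy as np
from tensorflow.keras import layers, models

VOCAB, SEQ_LEN, KEYPOINTS = 5000, 30, 10     # 10 joints -> 20 coordinates per frame

model = models.Sequential([
    layers.Embedding(VOCAB, 128, mask_zero=True),
    layers.LSTM(256, return_sequences=True),   # one pose frame per input word
    layers.TimeDistributed(layers.Dense(KEYPOINTS * 2)),
])
model.compile(optimizer="adam", loss="mse")

# Placeholder training pair: tokenised sentences and aligned pose sequences
X = np.random.randint(1, VOCAB, size=(64, SEQ_LEN))
Y = np.random.randn(64, SEQ_LEN, KEYPOINTS * 2).astype("float32")
model.fit(X, Y, epochs=2, batch_size=16, verbose=0)

# PCK-style check: fraction of predicted keypoints within a distance threshold
pred = model.predict(X[:8])
dist = np.linalg.norm((pred - Y[:8]).reshape(8, SEQ_LEN, KEYPOINTS, 2), axis=-1)
print("PCK@0.5:", float((dist < 0.5).mean()))
```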
Artificial Neural Network Based Computational Framework To Solve SI System Of Covid-19 Non Linear Equation The aim of the present study is to design an artificial neural network framework to solve the Susceptible-Infected (SI) model of COVID-19, a non-linear system. Mathematical models of epidemic diseases are non-linear, and approximating a non-linear system of ODEs is a challenging task. Many analytical and numerical models for epidemic diseases have been proposed in the literature, and in the recent past unsupervised artificial neural networks have gained much attention for solving ODEs in different applications. In the proposed method, a ReLU artificial neural network is used because of its effectiveness in training and in modelling the complex relationships of the proposed scheme. The fitness function of the unsupervised error function determines how well the predictions of the ANN align with the actual data or the desired outcomes. With the help of an analytical model and a numerical solver (ODE45), the MAE is used to evaluate the accuracy and reliability of the suggested scheme.
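For reference, a standard SI system and a typical unsupervised (residual-based) fitness function for a neural trial solution are written out below; the abstract does not state the exact formulation, so the scaled form, the collocation setup, and the symbols (β for the transmission rate, N for the population, M collocation points) are assumptions. The MAE mentioned in the abstract would then compare the trained network's output with the analytical and ODE45 solutions.

```latex
% Standard SI dynamics (assumed form; the thesis may use a scaled variant)
\[
\frac{dS}{dt} = -\frac{\beta\,S(t)\,I(t)}{N}, \qquad
\frac{dI}{dt} = \frac{\beta\,S(t)\,I(t)}{N}, \qquad S(t) + I(t) = N.
\]

% A typical unsupervised fitness function for network outputs \hat{S}, \hat{I},
% evaluated at collocation points t_j, with initial conditions S_0, I_0:
\[
E = \frac{1}{M}\sum_{j=1}^{M}\left[
      \Big(\tfrac{d\hat{S}}{dt}(t_j) + \tfrac{\beta\,\hat{S}(t_j)\,\hat{I}(t_j)}{N}\Big)^{2}
    + \Big(\tfrac{d\hat{I}}{dt}(t_j) - \tfrac{\beta\,\hat{S}(t_j)\,\hat{I}(t_j)}{N}\Big)^{2}
    \right]
    + \big(\hat{S}(0)-S_{0}\big)^{2} + \big(\hat{I}(0)-I_{0}\big)^{2}.
\]
```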
Depth Based Fog Assisted Data Collection Scheme For Time-Critical IOUT Applications The Internet of Underwater Things (IoUT) has emerged as a game-changer for underwater applications, with acoustic waves as its go-to communication medium, while at the surface radio signals dominate communication between sinks and onshore control centres. The fusion of IoUT with fog computing offers a robust platform for dynamic applications, from pipeline management to large-scale emergency responses and underwater infrastructure monitoring. Sink-node delays in IoUT are primarily due to limited processing power, especially concerning routing protocols; furthermore, redundant packet transmission while forwarding data not only escalates energy use but also introduces delays. The developed scheme is called the Depth-based Fog Assisted Data Collection (DFDC) scheme for time-critical IoUT applications. DFDC leverages fog computing to ease the load on sink nodes, reducing packet delays to onshore control systems, and deploys a strategy to curb redundant transmissions, improving latency and energy efficiency in the data forwarding process for ordinary underwater sensor nodes. DFDC is compared with the High-Availability Data Collection Scheme based on Multi-AUVs for Underwater Sensor Networks (HAMA) and the Data Gathering algorithm for Sensors (DGS). Through extensive simulations and analysis, this research demonstrates that the DFDC protocol outperforms both HAMA and DGS in terms of reducing packet delays, conserving energy, and minimizing redundant data transmission. These findings underscore the potential of DFDC as a solution for improving underwater communication, promising more efficient and reliable data transmission in underwater scenarios, and the study contributes insights that can shape the future of underwater communication protocols.
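To give a concrete picture of depth-based forwarding combined with redundancy suppression, the sketch below forwards a packet only toward shallower nodes and drops duplicates via a seen-packet cache; the depth rule and cache policy are illustrative assumptions, not DFDC's exact specification.

```python
# Hedged sketch of depth-based forwarding with duplicate suppression.
import random

class UnderwaterNode:
    def __init__(self, node_id: int, depth: float):
        self.node_id, self.depth = node_id, depth
        self.seen = set()                       # packet IDs already forwarded

    def handle(self, pkt_id: str, sender_depth: float) -> bool:
        """Relay only fresh packets received from deeper nodes (toward the surface)."""
        if pkt_id in self.seen or self.depth >= sender_depth:
            return False                        # duplicate or no upward progress: drop
        self.seen.add(pkt_id)
        return True                             # eligible to relay toward the fog/sink

nodes = [UnderwaterNode(i, depth=random.uniform(10, 500)) for i in range(20)]
source_depth, pkt = 480.0, "pkt-001"
relays = [n for n in nodes if n.handle(pkt, source_depth)]
print(f"{len(relays)} of {len(nodes)} neighbours relay {pkt}; the rest suppress it")
# Re-broadcasting the same packet later is suppressed by the seen-packet cache:
print(sum(n.handle(pkt, source_depth) for n in nodes), "relays on the duplicate")
```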