Department-wise Listing | NUML Online Research Repository
List of Contents
EXPLOITING SEMANTIC KNOWLEDGE FOR IMAGE CAPTIONING USING DEEP LEARNING

The technique of generating textual descriptions for images is commonly referred to as image captioning. It has attracted considerable attention recently because it can be applied in a variety of fields. Image captioning faces several challenges, one of which is the failure to incorporate semantic knowledge when generating captions. Semantic knowledge can aid object detection by exploiting relationships among objects, and it also informs language semantics. In this study, the image captioning problem is addressed by combining two efficient models: the Vision Transformer (ViT) and the Generative Pre-trained Transformer 2 (GPT-2). The ViT applies self-attention to image patches to capture visual elements and the overall context of an image. The GPT-2 model complements the ViT with strong language-generation abilities, enabling it to produce cohesive, contextually relevant text. An encoder-decoder deep learning model is proposed in which the ViT serves as the encoder, extracting meaningful visual representations from images, while GPT-2 serves as the decoder, producing descriptive captions from the retrieved visual features. This approach seamlessly combines textual and visual information, producing captions that faithfully reflect the content of the input images. The potential of this combination is demonstrated through empirical analyses highlighting the advantages of using both language and visual components in the image captioning process. This research strengthens multimodal AI systems by bridging the gap between visual and language comprehension. Experiments were performed on the MS COCO and Flickr30k datasets, and the model was validated using several evaluation metrics. On MS COCO, results show improvements in BLEU-1, BLEU-2, BLEU-3, and BLEU-4 of 10.58, 20.45, 21.07, and 34.19 respectively, while METEOR improved by 11.16 and ROUGE by 0.3.
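As a concrete illustration, a ViT-encoder / GPT-2-decoder pairing of the kind described above can be sketched with Hugging Face Transformers' `VisionEncoderDecoderModel`. This is a minimal sketch under that assumption, not the thesis implementation; the checkpoint names and input file are illustrative, and the combined model would still need fine-tuning on MS COCO or Flickr30k before its captions are meaningful.

```python
# Minimal sketch of a ViT-encoder / GPT-2-decoder captioner using
# Hugging Face Transformers (assumed tooling, not the thesis code).
from PIL import Image
from transformers import (GPT2TokenizerFast, ViTImageProcessor,
                          VisionEncoderDecoderModel)

# Pair a pretrained ViT encoder with a pretrained GPT-2 decoder; the new
# cross-attention weights are randomly initialized, so the combined model
# must be fine-tuned on a captioning dataset before use.
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/vit-base-patch16-224-in21k", "gpt2")
processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

# GPT-2 defines no pad token; reuse EOS so generation can pad/batch.
tokenizer.pad_token = tokenizer.eos_token
model.config.decoder_start_token_id = tokenizer.bos_token_id
model.config.pad_token_id = tokenizer.pad_token_id

# Encode the image into patch embeddings, then decode a caption.
image = Image.open("example.jpg").convert("RGB")  # hypothetical input file
pixel_values = processor(images=image, return_tensors="pt").pixel_values
output_ids = model.generate(pixel_values, max_length=32, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```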
Analysis of Scrum based Software Development to Improve Risk Management in Pakistani Software Industry

Software evolves continuously to accommodate market volatility, which poses risks to a project. Agile approaches have been suggested to handle these continuous changes in software requirements. Although there is a considerable body of academic literature on project processes, very little research has considered a proper risk management process for Scrum projects in the Pakistani software industry. Risk management comprises seven processes: planning, identification, qualitative analysis, quantitative analysis, risk response planning, risk response implementation, and monitoring. Many risks arise when adopting agile, so proper mitigation strategies should be established that incorporate all of these processes. The existing literature lacks implementations of a proper risk management process, a gap that can lead software toward failure; indeed, the limited application of proper risk management is a major reason software projects fail. Agile methods such as Scrum do not prescribe particular risk management activities, so practitioners are not fully aware of these uncertain events. Keeping this weakness in mind, this study provides mitigation strategies for a proper risk management process based on the Scrum method. For that purpose, a systematic literature review was conducted to identify the challenges that can arise in agile software development. The practical relevance of these challenges was then assessed through a survey of different software development companies. Based on these challenges, mitigation strategies were proposed through interviews with industry practitioners. To validate the proposed mitigation strategies, a focus group methodology was applied. The resulting strategies provide recommendations for mitigating the identified risk management challenges in Scrum development. They should help reduce risks, make risks easier for teams to handle in agile projects that use Scrum, and enhance Scrum project success rates.
Sentiment Analysis of Toxic Comments on Social Media using Deep Learning

In the rapidly evolving field of natural language processing, accurately predicting sentiment in text remains a critical challenge. This thesis addresses the problem by developing a novel multi-head model that combines transformer-based architectures, DistilBERT and RoBERTa, with Bi-LSTM layers. By leveraging their complementary strengths, the model captures both global context and sequential dependencies in textual data. The research methodology involves extensive data preprocessing, model training, and evaluation using accuracy and F1-scores. Results demonstrate that the multi-head model outperforms traditional approaches, achieving a notable accuracy of 90.02%. This advancement offers significant benefits, including improved sentiment-driven decision-making and valuable insights across industries such as social media monitoring, customer feedback analysis, and market research.
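One way to realize such a two-head design is to run each encoder's token sequence through its own Bi-LSTM and concatenate the final hidden states for classification. The sketch below uses assumed layer sizes and pooling; the abstract does not specify the exact architecture.

```python
# PyTorch sketch (assumed architecture, not the thesis code) of a two-head
# model: DistilBERT and RoBERTa encoders, each followed by a Bi-LSTM,
# with the pooled heads concatenated for classification.
import torch
import torch.nn as nn
from transformers import AutoModel

class MultiHeadSentiment(nn.Module):
    def __init__(self, num_classes: int = 2, lstm_hidden: int = 128):
        super().__init__()
        self.distilbert = AutoModel.from_pretrained("distilbert-base-uncased")
        self.roberta = AutoModel.from_pretrained("roberta-base")
        # Bi-LSTM over each encoder's token embeddings (768-dim for both).
        self.lstm_d = nn.LSTM(768, lstm_hidden, batch_first=True, bidirectional=True)
        self.lstm_r = nn.LSTM(768, lstm_hidden, batch_first=True, bidirectional=True)
        # Each head contributes 2 * lstm_hidden features (forward + backward).
        self.classifier = nn.Linear(4 * lstm_hidden, num_classes)

    def forward(self, d_ids, d_mask, r_ids, r_mask):
        d_out = self.distilbert(input_ids=d_ids, attention_mask=d_mask).last_hidden_state
        r_out = self.roberta(input_ids=r_ids, attention_mask=r_mask).last_hidden_state
        # Summarize each sequence by its final Bi-LSTM hidden states.
        _, (h_d, _) = self.lstm_d(d_out)
        _, (h_r, _) = self.lstm_r(r_out)
        d_vec = torch.cat([h_d[-2], h_d[-1]], dim=-1)  # [batch, 2*hidden]
        r_vec = torch.cat([h_r[-2], h_r[-1]], dim=-1)
        return self.classifier(torch.cat([d_vec, r_vec], dim=-1))
```

Each comment would be tokenized twice (once with each model's tokenizer) before being passed to `forward`, since DistilBERT and RoBERTa use different vocabularies.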
PAKISTAN STOCK MARKET PRICE PREDICTION USING MACHINE LEARNING

The stock market is a regulated marketplace where companies raise capital by selling shares of stock, or equity, to investors. It is the backbone of a country, being essential for development, corporate governance, capital formation, investment, and economic growth. However, due to factors such as company performance, financial crises, political instability, and pandemic outbreaks, the stock market is very challenging to predict. This study uses a dataset drawn from different sectors of the Pakistan Stock Market, carefully processed by adjusting sizes, normalizing values, and fixing errors. Initially, the Moving Average (MA) and Exponential Moving Average (EMA) are used to identify crisis points in the market. The Stochastic Relative Strength Index (Stoch RSI) is then applied to predict market movements. The novel part comes in the third step, where an advanced transformer model is used to improve stock price predictions. The model's performance is assessed using standard measures: Root Mean Square Error (RMSE), Mean Squared Error (MSE), and Mean Absolute Error (MAE). The average evaluation scores across all indices of the Pakistan Stock Market sectors are RMSE = 0.052865, MSE = 0.002866, and MAE = 0.071720. The results improve understanding of the Pakistan stock market and highlight the effectiveness of transformer models in predicting stock prices through the tuning of parameters and hyperparameters. The transformer layers used in the proposed study extract the most effective features, outperforming the techniques used in previous studies of Pakistan stock market price prediction.
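For reference, the three indicators named above can be computed with pandas roughly as follows. This is an illustrative sketch, not the study's code; the Stoch RSI here uses the common simple-moving-average RSI variant, and window lengths are conventional defaults rather than the study's settings.

```python
# Illustrative pandas implementations of MA, EMA, and Stochastic RSI.
# `close` is assumed to be a pandas Series of closing prices.
import pandas as pd

def moving_average(close: pd.Series, window: int = 20) -> pd.Series:
    """Simple moving average over a fixed window."""
    return close.rolling(window).mean()

def exponential_moving_average(close: pd.Series, span: int = 20) -> pd.Series:
    """EMA weights recent prices more heavily than older ones."""
    return close.ewm(span=span, adjust=False).mean()

def stoch_rsi(close: pd.Series, period: int = 14) -> pd.Series:
    """Stochastic RSI: RSI normalized against its own rolling range."""
    delta = close.diff()
    gain = delta.clip(lower=0).rolling(period).mean()
    loss = (-delta.clip(upper=0)).rolling(period).mean()
    rsi = 100 - 100 / (1 + gain / loss)
    # Rescale RSI to [0, 1] using its rolling min/max.
    lowest = rsi.rolling(period).min()
    highest = rsi.rolling(period).max()
    return (rsi - lowest) / (highest - lowest)
```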
Analysis of Requirements Prioritization in Distributed Scrum for Reducing Software Failures

Requirement prioritization is an essential part of the software development life cycle; the success or failure of software heavily depends on it. Distributed Scrum is gaining popularity in software development because it offers the mutual benefits of Scrum and a distributed team environment. Since stakeholders in distributed Scrum are usually separated by time and geography, prioritizing requirements becomes challenging. Therefore, this comprehensive research examines requirement prioritization in the context of distributed Scrum with the aim of reducing software failure. The study began by carefully identifying problems through a thorough literature review combined with practical analysis. The identified challenges were then validated through a survey whose respondents were distributed Scrum practitioners. Later, interviews were conducted to find possible solutions to the challenges; the interviewees' extensive experience not only confirmed the validity of the identified challenges but also gave the study a more profound understanding of their practical implications. Building on this foundation, a set of guidelines was proposed to address the challenges of requirement prioritization in distributed Scrum. These guidelines, which provide an organized framework that practitioners and organizations can readily adopt, draw on the collective experience of the agile community. The proposed guidelines were rigorously validated by a focus group to strengthen their applicability and practicality, and this collaborative refinement ensures they align with the requirement prioritization challenges faced by distributed Scrum teams. The research's output is a well-balanced combination of theoretical understanding and real-world experience: it adds to the body of knowledge on requirement prioritization while offering organizations and practitioners a useful road map for navigating the challenges of requirement prioritization and software failure in distributed Scrum.
Design of tunable Fabry-Pérot filter for spectroscopic applications

Many spectroscopy applications demand tiny, durable, and portable spectrometers that are far less expensive than present solutions. As a result, micro-spectrometer technology is evolving rapidly, and numerous research organizations are working on it. Tunable Fabry-Pérot filters (TFPFs) outperform other types of devices in terms of miniaturization and optical throughput. Spectroscopy is the analysis of the interaction between matter and electromagnetic radiation as a function of the wavelength or frequency of the radiation. An optical nano-spectrometer consists of a static FP filter array with cavities and a matched detector array, with each filter producing its own spectral line depending on cavity thickness. In an FP interferometer filter, wavelength selectivity is achieved through multiple-beam interference: the filter typically consists of two highly reflective mirrors forming a resonating cavity, with a single input and output port. A tunable FP filter is tuned over the entire cavity spacing, which gives it an advantage in size and space: instead of an array of filters, a single filter is used. To accomplish this, several materials from the COMSOL Multiphysics library were analyzed to examine thin-film structures, and the tunable Fabry-Pérot filter (FPF) was then optimized using the best available material. The FPF core structure features three upper Distributed Bragg Reflector (DBR) mirror layers matched to three lower DBR layers, with the cavity layer, made of PZT, nestled between them. The upper layers are composed of SiO2 with a central layer of TiO2, while the lower layer is encapsulated by TiO2. This geometric configuration is crucial for optimal performance in spectroscopic applications. The research shows that fixed FP filters, with their fixed spacing between DBR layers, transmit a specific wavelength of light; they are not tunable and can only operate at the designed wavelength. Tunable filters can adjust the spacing between the DBR layers, allowing them to select different wavelengths, which makes them versatile and suitable for various applications. Tunable filters are complex, require precise control systems, and can have variable spectral resolution depending on the selected wavelength and the adjustment mechanism. We examine the FPF's response when exposed to distinct voltage settings: each voltage corresponds to a specific mirror separation distance, which in turn determines the filter's transmission characteristics. At its maximum tuning capacity, the FPF exhibits a remarkable shift in interference fringes, allowing the broadest range of wavelengths to pass through or be blocked. Understanding the voltage-dependent tunability of the TFPF, spanning 1 V to 40 V, is pivotal to its adaptability and usefulness in various optical applications. Through this understanding, researchers and engineers can harness the device's capabilities to precisely control wavelengths, driving advances in optical communication, spectroscopy, and other optical technologies.
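For context, the multiple-beam interference behind this wavelength selectivity follows the standard Airy relations for an ideal lossless cavity (textbook formulas, not taken from the thesis):

```latex
% Transmission of an ideal lossless Fabry-Pérot cavity with mirror
% reflectance R, cavity refractive index n, mirror spacing L,
% internal angle \theta, and wavelength \lambda:
T(\lambda) = \frac{(1-R)^2}{(1-R)^2 + 4R\,\sin^2(\delta/2)},
\qquad
\delta = \frac{4\pi n L \cos\theta}{\lambda}

% Transmission peaks occur at \delta = 2\pi m for integer order m:
\lambda_m = \frac{2 n L \cos\theta}{m}
% so actuating the mirror spacing L (here via the 1-40 V drive on the
% PZT cavity) shifts the transmitted wavelength \lambda_m.
```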
MACHINE LEARNING BASED FRAMEWORK FOR HEART DISEASE DETECTION

Cardiovascular diseases (CVDs), or heart diseases, are among the top-ranking causes of death worldwide: about 1 in every 4 deaths is related to heart disease, a broad class covering various types of abnormal heart conditions. However, diagnosing CVDs is a time-consuming process in which data obtained from various clinical tests are analyzed manually. Therefore, new approaches for automatically detecting such irregularities in human heart conditions should be developed to give medical practitioners faster analysis, reducing the time needed to obtain a diagnosis and enhancing results. Electronic health records are often utilized to discover useful data patterns that help improve the predictions of machine learning algorithms; machine learning contributes significantly to prediction problems in many domains, including healthcare. Given the abundance of available clinical data, there is a need to leverage such information for the betterment of humankind. In this work, a stacking model is proposed for heart disease prediction based on stacking various classifiers in two levels (a base level and a meta level). Heterogeneous learners are combined to produce a strong ensemble. The model obtained 98.4% accuracy with a precision of 94.56%, recall of 95.6%, and F1-score of 95.89%; its performance was evaluated using accuracy, precision, recall, and F1-score metrics.
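A two-level stack of this kind can be sketched with scikit-learn's `StackingClassifier`. The base and meta learners below are illustrative choices, and the dataset is a placeholder; the thesis's exact classifier set and clinical data are not reproduced here.

```python
# Illustrative two-level stacking ensemble with scikit-learn:
# heterogeneous base (level-0) classifiers feed a meta (level-1) learner.
from sklearn.datasets import load_breast_cancer  # placeholder dataset
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)  # stand-in for heart-disease data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Level-0: heterogeneous base learners (assumed choices for illustration).
base_learners = [
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("svc", SVC(probability=True, random_state=0)),
    ("knn", KNeighborsClassifier()),
]
# Level-1: a meta learner trained on the base learners' predictions.
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X_train, y_train)
print("held-out accuracy:", stack.score(X_test, y_test))
```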