Interface circuit for piezoresistive pressure sensors
A major problem associated with piezoresistive pressure sensors is their cross-sensitivity to temperature. Moreover, in batch fabrication, minor process variations change the temperature characteristics of individual units. An important economic requirement for the success of smart sensors is the use of batch fabrication techniques to bring down the cost of individual units. This research addresses this need by developing a new temperature compensation technique suitable for batch-fabricated sensors over a temperature range of -40°C to 130°C and a pressure range of 0 to 45 psi. Hardware for implementing the technique and digitizing the sensor output is also developed.
The sensor model is developed from the viewpoint of simulating the sensor I/O characteristics as a function of pressure, temperature, and processing variations. The model includes the effects of temperature, resistor mismatch, and the appropriate structural details. The simulation results provide the worst-case error band for sensors coming from the same wafer or from different wafers. All sensor parameters are functions of temperature and of the tracking errors.
The temperature compensation technique is implemented in two parts, using a compensation bridge and a temperature half-bridge. The zero-pressure offset is reduced below the measurement precision limit for the entire pressure and temperature range. For the sensor output, the technique is very effective for pressure values below 35 psi and provides reasonable results at higher pressures. The possible use of a software approach to implement part of the compensation technique is also discussed.
The hardware for the amplification, temperature compensation, and digitization of the sensor output has been designed and verified. Dual-slope A/D conversion has been identified as a simple and precise conversion technique, suitable for and compatible with on-chip integration with the sensor. A bipolar dual-slope ADC with a word length of 10 bits and a clock frequency of 50 kHz has been designed and verified.
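The dual-slope principle behind such an ADC can be sketched numerically. The following is an illustrative model only; the function name and the fixed integration count n1 are assumptions, not details taken from the thesis:

```python
def dual_slope_counts(v_in, v_ref, n1=1024):
    """Dual-slope ADC principle: the input is integrated for a fixed n1
    clock cycles, then de-integrated with the reference voltage; the
    de-integration count satisfies n2 = n1 * v_in / v_ref, independent
    of the integrator's R, C, and of clock drift."""
    if not (0 <= v_in <= v_ref):
        raise ValueError("input out of range")
    return round(n1 * v_in / v_ref)

# A full conversion takes at most n1 + n2 clock periods; with n1 = 1024
# and a 50 kHz clock that bounds the conversion time at roughly 41 ms.
print(dual_slope_counts(2.5, 5.0))  # half-scale input
```

Because the same integrator and clock time both slopes, component tolerances cancel, which is why the technique is described as simple yet precise.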
The temperature compensation technique is suitable for batch fabrication. Moreover, it does not require compensation of individual units under sensor operating conditions. The resulting interface circuit is simple, requiring modest chip area, and can be implemented using standard IC fabrication techniques.
*Thesis submitted to Michigan State University, East Lansing, U.S.A., in December 1991 for the Ph.D. degree.
Recommender systems apply machine learning techniques to predict a user's preference for items. These systems are very effective in filtering large amounts of information into a more concrete form and, due to their effectiveness, are now used extensively in almost all domains. The medical field is one domain where a great deal of research is being carried out on recommender system utility. The amount of healthcare information available online has increased tremendously in the last few years, and patients nowadays are more conscious and look for answers to healthcare problems online. This has created the need for a reliable online doctor recommender system that can recommend the physicians best suited to a particular patient. In this paper we propose a hybrid doctor recommender system that combines different recommendation approaches, i.e. content-based filtering, collaborative filtering, and demographic filtering. This research work proposes a novel adaptive algorithm used to construct a doctor ranking function, which ranks doctors according to a patient's requirements by converting the patient's criteria for doctor selection into a number-based rating. This rating is then used for doctor recommendation. We have evaluated the system's utility, and the results show that its performance is effective and quite accurate.
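A hybrid of the three filtering approaches can be illustrated as a weighted combination of per-approach scores. The weights, score ranges, and doctor identifiers below are illustrative assumptions, not the paper's actual ranking function:

```python
def hybrid_score(content, collaborative, demographic,
                 weights=(0.5, 0.3, 0.2)):
    """Weighted hybrid of three component scores, each assumed in [0, 1].
    The weights are illustrative, not values from the paper."""
    w_c, w_cf, w_d = weights
    return w_c * content + w_cf * collaborative + w_d * demographic

def rank_doctors(scores):
    """scores: {doctor_id: (content, collaborative, demographic)}.
    Returns doctor ids sorted by descending hybrid score."""
    return sorted(scores, key=lambda d: hybrid_score(*scores[d]),
                  reverse=True)

doctors = {"dr_a": (0.9, 0.2, 0.5), "dr_b": (0.4, 0.9, 0.6)}
print(rank_doctors(doctors))  # ['dr_a', 'dr_b']
```

The hybrid form lets any one approach compensate for another's weakness, e.g. demographic filtering covers new patients for whom collaborative data is sparse.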
Review-Scrum Introduction of Mda In Agile Methodology
R-Scrum is used for designing large scale software products. R-Scrum is updated form of Scrum with some additional roles (M&E Analyst, Release Manager, and Release Controller). In Review-Scrum the deployment phase is different with Scrum by introducing the additional role of Release Manager. Release controller will manage the deployment on local server similarly deployment on live server has been achieved by Release manager. The proposed research work focuses on stability of R-Scrum, such stability is done by implementation of AMM. Implementation of AMM in R-Scrum is achieved by inclusion of Work Breakdown (WB) in sprint setup meeting and also works for improving the methodology of testing. Education MIS is designed with the help of Review-Scrum and delivers some logical proves for the implementation of WB structure in R-Scrum. Productively implementation of WB in Educational MIS is setting the benchmark for stability of different MIS systems.
There has been a huge explosion in the number of new databases, applications, and documents in the recent past. This results in a lot of redundancy and duplication, which leads to high inefficiency in query processing. Most users who need the information are naive, because they have no knowledge of the internal structures of databases, the contents of data sources, or query languages. It is therefore very difficult for them to query and analyze the desired data from autonomous, geographically distributed, and heterogeneous data sources. Query expansion is used to answer users' queries while improving the performance and effectiveness of those queries. We propose a solution for expanding users' queries with the support of an ontology so that recall is improved and information loss is minimized while answering users' queries. We have developed rules for semantically meaningful expansion and illustrate them with an appropriate example. The results show that our rules are better in terms of recall.
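The core idea of ontology-supported query expansion can be sketched as follows. The toy ontology and term lists are invented for illustration and are not the rules developed in the paper:

```python
# Toy ontology: term -> semantically related terms (synonyms, hyponyms).
ONTOLOGY = {
    "car": ["automobile", "vehicle"],
    "price": ["cost", "rate"],
}

def expand_query(terms, ontology=ONTOLOGY):
    """Expand each query term with its ontology neighbours, preserving
    order and removing duplicates, so recall improves while the
    original query intent is kept."""
    expanded = []
    for t in terms:
        for candidate in [t] + ontology.get(t, []):
            if candidate not in expanded:
                expanded.append(candidate)
    return expanded

print(expand_query(["car", "price"]))
# ['car', 'automobile', 'vehicle', 'price', 'cost', 'rate']
```

Expanding with ontology neighbours matches documents that use different surface terms for the same concept, which is the mechanism by which recall increases.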
Decision Support Framework for Architectural Pattern Selection (DSAPS)
A suitable architectural design is the first significant step in the process of developing software products. It carries many strategic decisions that business strategists make in the course of the process, and it reflects those decisions in the architecture. In this way the architectural process becomes a major stakeholder in development, and failing to realize its significance can lead to project failure. Software architectural design explores the premise of all the major inputs and exposes results to the architects for major decisions in coordination with project stakeholders. In short, the software architectural design process comprehends all the design decisions, functional requirements, scope, and non-functional requirements in the software architecture. It is necessary to take into account all possible details, i.e. requirements, before selecting an architectural style or pattern. The research study is conducted with a contextual need to shift the design process to the early phases of development to support vital design decisions that have a substantial cost consequence on the overall quality of the project. The thesis develops an interactive framework to ease the selection process of architectural patterns in a business domain. The DSAPS framework proposed in this work implies a rapid approach to customizing design decisions in the SA design process. In the first step, DSAPS stereotypes and prioritizes architectural patterns for a particular architectural style. It then uses a set of artifacts to generate and assess a wider range of architectural patterns than a human could manage, by making use of the AHP technique. The system has the potential to run autonomously or with the help of an expert. Evaluation of DSAPS and wider-range assessment during the early phases of development point to the fact that the approach has good prospects for supporting informed decision-making, leading to better quality of the obtained requirements.
Keywords: Software Architectural Engineering, Decision Support System, Planning Systems, AHP technique
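The AHP step used to prioritize patterns can be illustrated with the standard column-normalization approximation of the priority vector. The pairwise judgements below are hypothetical, not ones elicited in the thesis:

```python
def ahp_priorities(matrix):
    """Approximate the AHP priority vector: normalise each column of the
    pairwise-comparison matrix, then average across each row (the usual
    approximation to the principal eigenvector)."""
    n = len(matrix)
    col_sums = [sum(row[j] for row in matrix) for j in range(n)]
    normalised = [[matrix[i][j] / col_sums[j] for j in range(n)]
                  for i in range(n)]
    return [sum(normalised[i]) / n for i in range(n)]

# Hypothetical judgements for three candidate patterns on Saaty's 1-9
# scale: pattern A is 3x as preferred as B, 5x as preferred as C, etc.
m = [[1, 3, 5],
     [1 / 3, 1, 3],
     [1 / 5, 1 / 3, 1]]
weights = ahp_priorities(m)
print([round(w, 3) for w in weights])
```

The resulting weights sum to one and give the relative priority of each candidate pattern, which a framework like DSAPS can then use to rank alternatives against the elicited requirements.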
Software Resource Rationalization by Risk Reduction
Risks are a common phenomenon in software development and have a negative impact on the development process. A software process is considered mature if it can identify, prioritize, and mitigate risk factors before they become harmful. This research proposes and validates a model that reduces risks and improves resource allocation for software projects. The research study identifies prominent risk factors and project factors, and gauges the impact of all identified risk factors along with their probability. The association among the risk factors and project factors is established as a result of an elaborate study. This research consequently determines how a model can be developed, implemented, and tested to ensure that, by applying the model, risks are reduced or eliminated and resource allocation is improved. This study identifies a list of prioritized risk factors by conducting a detailed literature review, followed by the application of quantitative methods to verify the findings. The project factors are identified through a similar exercise.
Software project scales have been established, by conducting a quantitative study, to help categorize the scale of a project. As a result of this quantification process, large-scale projects have been identified as possessing a range of values for the project factors, such as time, cost, team size, and computational resources. The probabilities and impact of the software risk factors have been identified, and the association among the project factors and risk factors has been established and validated by mixing the results of quantitative and qualitative methods. Several major and minor contributions can be identified in this study. The major contributions include: identification and validation of risk factors and project factors based on frequency and quantitative analysis; establishment of the project scales; identification and consolidation of weak and strong associations between the project factors and risk factors; and the design, implementation, and testing of a least-assumptive model for risk reduction and resource rationalization based on the identified project factors.
The minor contributions include the methodological contributions of the study: the literature review, identification of observatory and participatory project factors, the average-wage analysis of software developers based on a sample from developing and developed countries, and identification of the computational resources' proportion of the overall budget. The outcome of this research is of special significance to the software engineering literature. As risk reduction, cost estimation, and software cost rationalization are areas of prime interest in software engineering, this research plays a vital role in addressing these issues.
The research is beneficial because the outcome ensures that, by using the proposed model, risks are reduced and the cost of developing the software is rationalized by decreasing the cost of risk handling and other insignificant allocations. A software model has been proposed and implemented that aims to improve resource utilization by decreasing risks in the software development lifecycle. The model's performance has been verified by running test cases bearing data of a diversified nature, where the model has performed reasonably well and, in most cases, has not only reduced the risks but also improved the resource allocation.
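The classical way of combining a risk factor's probability and impact, which a prioritization model of this kind typically builds on, is the risk-exposure product. The factor names and numeric values below are illustrative, not findings from this study:

```python
def risk_exposure(probability, impact):
    """Classical risk exposure: probability of occurrence times impact.
    Probability is in [0, 1]; impact is on an arbitrary ordinal scale."""
    return probability * impact

def prioritise(risks):
    """risks: {name: (probability, impact)} -> names sorted by
    descending exposure, so the costliest risks are handled first."""
    return sorted(risks, key=lambda r: risk_exposure(*risks[r]),
                  reverse=True)

risks = {
    "scope_creep": (0.6, 8),     # likely and high impact
    "staff_turnover": (0.3, 9),  # less likely but severe
    "tool_failure": (0.2, 4),
}
print(prioritise(risks))
```

Allocating mitigation budget in exposure order is one simple way resource allocation improves once probabilities and impacts have been quantified.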
Situational Requirement Engineering Model for Global Software Development
Competencies of requirement engineers to identify situational factors in Global Software Development (GSD) indicate their ability for accurate and adequate identification of situational factors. Currently, requirement engineers face competency-related challenges in identifying accurate and adequate situational factors. Although existing studies have focused on situational factor identification, none of them targets requirement engineering (RE), resulting in a lack of situational RE guidelines, which restrains the requirement engineer's competence for accurate and adequate situational factor identification. This study aims to identify the situational factors affecting RE in GSD and the most influential situational factors for each RE activity (elicitation, analysis, specification, validation, and management). Besides this, it also aims to formulate a situational RE model for GSD and to develop a web-based tool for situational RE in GSD. To identify the situational factors, the qualitative technique of systematic literature review was performed, which resulted in 22 situational factors and 112 sub-factors categorized into 5 categories. To identify the most influential situational factors for each RE activity, a quantitative survey was performed with 14 globally distributed software houses in Malaysia, where 83 respondents' responses were included in the data analysis. Situational factors whose composite mean values were above 4.00 were considered the most influential for a particular RE activity. For each RE activity, out of the 22 situational factors, 7 were found most influential for requirement elicitation, 8 for requirement analysis, 6 for requirement specification, 8 for requirement validation, and 7 for requirement management. Furthermore, a situational RE model was formulated based on the literature and industry responses.
The model was further transformed into a web-based situational RE tool using ASP.NET. This tool was evaluated by conducting an experiment to assess participants' competence for accurate and adequate situational factor identification, where participants identified the situational factors with and without using the web-based tool. A paired-sample t-test was performed on a total of 21 participants' responses. The mean values of accurate situational factor identification with and without the tool were 6.76 and 3.19, whereas the mean values of adequate situational factor identification with and without the tool were 6.80 and 5.04, respectively. The results showed that the participants' competency was enhanced, as they identified more accurate and adequate situational factors when using the web-based tool. The participants were also given a post-experiment questionnaire to evaluate the tool's usability, and the tool was found usable. This research makes the following contributions: an evaluated list of situational factors, the most influential situational factors for each RE activity, a situational RE model, and an empirically evaluated web-based situational RE tool for GSD.
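The paired-sample t-test used in the experiment compares the same participants under both conditions. The sketch below uses invented scores, not the study's data; it only demonstrates the statistic's construction:

```python
import statistics

def paired_t(before, after):
    """Paired-sample t statistic: the mean of the pairwise differences
    divided by its standard error (df = n - 1). Significance would then
    be read from a t table at the chosen alpha."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    se = statistics.stdev(diffs) / n ** 0.5
    return mean_d / se

# Hypothetical counts of correctly identified situational factors for
# the same participants without and with the tool.
without_tool = [3, 4, 2, 3, 4, 3]
with_tool = [6, 7, 6, 7, 7, 7]
print(round(paired_t(without_tool, with_tool), 2))
```

Pairing removes between-participant variability, which is why this design can detect the tool's effect with a relatively small sample such as 21 participants.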
Optimization of Technological Steps for the Fabrication of Large-Area Micromirror Arrays
Micromirror arrays are a very strong candidate for future energy-saving applications. Within this work, the fabrication process for these micromirror arrays was optimized, and some steps towards the large-area fabrication of micromirror modules were performed. First, the surface roughness of the SiO2 insulation layer was investigated. This thin SiO2 layer was deposited on silicon, glass, and polyethylene naphthalate (PEN) substrates using PECVD, PVD, and IBSD techniques. The surface roughness was measured by stylus profilometry and atomic force microscopy (AFM). It was found that the layer deposited by IBSD had the lowest surface roughness and the layer deposited by PECVD had the highest. During the same investigation, it was found that the surface roughness keeps increasing as the deposition temperature increases in the PECVD process. A new insulation layer system was proposed to minimize the dielectric breakdown effect in the insulation layer of micromirror arrays. The conventional bilayer system was replaced by a five-layer system, while the total thickness of the insulation layer remained the same. It was found that during actuation of the micromirror array structure, the dielectric breakdown effect was reduced by approximately 50% as compared to the bilayer system. In a second step, the fabrication process of the micromirror arrays was successfully adapted and transferred from glass substrates to flexible PEN substrates. In the last section, a large module of micromirror arrays was fabricated by electrically interconnecting four 10 cm × 10 cm micromirror modules on a glass pane.
Nonlinear Intensity Invariant Local Image Descriptors
An interesting problem in Computer Vision is the construction of local image descriptors. It deals with the description of intensity patterns within image patches, which are local image regions centered at feature points. The description of such image patches helps in establishing correspondences between the feature points of two or more images of the same scene under intensity, scale, rotation, and affine changes. Such correspondences are used in a wide range of applications, such as image matching, image retrieval, object tracking, and object recognition. This thesis presents new methods for the construction of local image descriptors in order to establish feature point correspondences under nonlinear intensity changes. Nonlinear intensity changes occur in multispectral imaging or when a scene is acquired under variable lighting conditions; background noise and degradation in ancient document images also cause them. Nonlinear intensity changes affect the performance of state-of-the-art local descriptors, such as the Scale Invariant Feature Transform (SIFT), and result in low matching performance in image-to-image and image-to-database matching tasks. To cope with these problems, the new methods proposed in this thesis use novel image features, obtained by combining the strengths of image gradients, Local Binary Patterns, and illumination invariant edge detectors. These features are read from image patches using SIFT-like feature histogram schemes to construct five new local descriptors: Local Binary Pattern of Gradients, Local Contrast SIFT, Differential Excitation SIFT, Normalized Gradient SIFT, and Modified Normalized Gradient SIFT. To evaluate the new descriptors, experiments on five different image datasets are performed. The performance of the new descriptors is compared with that of SIFT and seven other state-of-the-art local descriptors.
In the case of image-to-image matching, ground-truth homographies between the pairs of images are used and the number of correct descriptor matches is counted for the performance comparison. In the case of image-to-database matching, a nearest-neighbor descriptor matching strategy is used and the recognition rates for two different tasks are computed: Scene Category Recognition (SCR) and Optical Character Recognition (OCR). The experimental results show that the new descriptors obtain on average 0.5% to 12.8% better performance than SIFT in the image-to-image matching task. In the case of SCR, they obtain on average 1% to 5% better scene recognition rates than SIFT, whereas in the case of OCR, they demonstrate on average 1.1% to 6.7% better character recognition rates than SIFT.
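One ingredient the new descriptors build on, the Local Binary Pattern, can be illustrated on a single 3×3 patch. The neighbour ordering and sample values below are illustrative choices, not the exact variant used in the thesis:

```python
def lbp_code(patch):
    """Local Binary Pattern of the centre pixel of a 3x3 patch: each of
    the 8 neighbours contributes a bit, set when the neighbour's
    intensity is >= the centre's."""
    c = patch[1][1]
    # Clockwise neighbour order starting at the top-left corner.
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= c:
            code |= 1 << bit
    return code

patch = [[90, 80, 70],
         [60, 75, 40],
         [50, 95, 85]]
print(lbp_code(patch))
```

Because the code depends only on intensity orderings, it is unchanged under any monotonic intensity transformation, which is exactly the invariance needed for the nonlinear intensity changes the thesis targets.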
Decoding of Visual Information from the Human Brain Using Electroencephalography
Decoding the patterns of human brain activity for different cognitive states is one of the fundamental goals of neuroimaging. Recently, researchers have been exploring new multivariate techniques that have proven to be more reliable, powerful, flexible, and sensitive than standard univariate analysis. Multivariate techniques are so powerful that they can decode patterns in Functional Magnetic Resonance Imaging (fMRI) data without voxel selection; moreover, they have the ability to decode brain activity even from the Electroencephalography (EEG) signal, which is considered a weak signal. In this study, simultaneous EEG and fMRI data are collected to evaluate whether EEG can produce comparable results under the same conditions, i.e. the same subjects, time, and analysis techniques. No study has been reported that compares the accuracy of both modalities under the same circumstances, although a few studies have compared the performance of EEG and fMRI through separate data collections. During the analysis of EEG and fMRI using MVPA, average accuracies of 64.1% and 65.7% were found for fMRI and EEG, respectively. Furthermore, this thesis presents a hybrid algorithm that combines a Convolutional Neural Network (CNN) for feature extraction with likelihood-ratio-based score fusion for prediction. The CNN model is specially designed with one convolutional and one pooling layer for one-dimensional EEG data. The proposed algorithm is applied to three different real-time EEG data sets. A comprehensive analysis is done using data from 34 participants, and the proposed algorithm is validated by comparing its results with currently recognized feature extraction and prediction techniques. The results showed that the proposed method predicts novel data with an improved accuracy of 79.9%, compared to wavelet transform-SVM, which showed an accuracy of 67%. In conclusion, the proposed algorithm outperforms the current feature extraction and prediction methods.
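The one-convolution-plus-one-pooling structure described above can be sketched for a 1-D signal. The kernel, pool size, and sample values are illustrative assumptions, not the learned parameters of the thesis model:

```python
def conv1d(signal, kernel):
    """Valid 1-D convolution (cross-correlation, as in CNN layers)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def max_pool(signal, size=2):
    """Non-overlapping max pooling: keep the strongest response in
    each window, halving the feature length for size=2."""
    return [max(signal[i:i + size])
            for i in range(0, len(signal) - size + 1, size)]

# A toy 1-D EEG segment passed through a smoothing kernel, then pooled.
eeg = [0.1, 0.4, 0.3, 0.9, 0.2, 0.6, 0.8, 0.5]
features = max_pool(conv1d(eeg, [0.5, 0.5]))
print(features)
```

In the full model these pooled responses would be the features handed to the likelihood-ratio score-fusion stage for prediction.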
Automatic Modulation Classification Using Genetic Programming
With the popularity of software-defined radio and cognitive-radio-based technologies in wireless communication, RF devices have to adapt to changing conditions and adjust their transmission parameters, such as transmit power, operating frequency, and modulation scheme. Thus, Automatic Modulation Classification (AMC) becomes an essential feature in such scenarios, where the receiver has little or no knowledge about the transmitter.
This research explores the use of iterative techniques such as Genetic Programming (GP) for the classification of digitally modulated signals. K-nearest neighbor (KNN) has been used to evaluate the fitness of GP individuals during the training phase. Additionally, in the testing phase, KNN has been used to deduce the classification performance of the best individual produced by GP. Several modulation schemes are used in this research for classification, and higher-order statistics are used as input features for the GP. Simulation results demonstrate that the proposed method provides better classification performance than other well-known state-of-the-art techniques.
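The KNN-based fitness evaluation can be sketched as leave-one-out classification accuracy over the feature vectors a GP individual produces. The 2-D feature values and class labels below are hypothetical stand-ins for higher-order-statistics features:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label). Classify the query by
    majority vote among its k nearest neighbours (Euclidean distance)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda s: dist(s[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def fitness(features, labels, k=3):
    """Leave-one-out KNN accuracy, usable as a GP fitness: how well the
    features evolved by one individual separate the modulation classes."""
    samples = list(zip(features, labels))
    correct = sum(
        knn_predict(samples[:i] + samples[i + 1:], f, k) == l
        for i, (f, l) in enumerate(samples))
    return correct / len(samples)

# Hypothetical 2-D cumulant-like features for two modulation classes.
feats = [(1.0, 0.1), (1.1, 0.2), (0.9, 0.15),
         (3.0, 1.0), (3.2, 1.1), (2.9, 0.9)]
labels = ["BPSK", "BPSK", "BPSK", "QPSK", "QPSK", "QPSK"]
print(fitness(feats, labels))
```

An individual whose evolved feature transform yields well-separated clusters scores high, steering the GP search toward discriminative features.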
Empirical Investigation of Change of Sequence of Communication Medium in GSD w.r.t. Types of Conflicts
Globalization of innovation and markets has dramatically impacted software development. Today, more software projects are run in geographically distributed environments, and global software development is becoming the norm in the software industry. This research deals with the identification of the different types of conflicts that most commonly occur during global software development in GSD organizations. For this purpose, we have conducted an empirical investigation that results in a fair evaluation of the research topic. The project explores a mechanism that provides a guideline for selecting the communication medium sequence based on the type of conflict; the types of conflicts are the key factor in that mechanism. This research provides a vision of the state of the art of changing the communication medium sequence w.r.t. types of conflicts, which will allow us to identify possible new research lines. It deals with the empirical investigation of the impact of changing the communication medium sequence on conflict resolution. In this report, the conduct of a controlled experiment and its results are discussed in detail, examining whether a change of communication medium sequence affects conflict resolution or not. The only conflict discussed in this research is ambiguity. For this experiment, five teams were selected, each comprising four members from the software engineering departments of two universities.
Group Based Power Efficient Gathering Protocol in Wireless Sensor Networks
Sensor networks consisting of nodes with limited battery power and wireless communication are deployed to collect useful information from the field. Gathering information efficiently in a sensor network is critical, which is why we present an efficient way to gather information in such a scenario. Many researchers have presented work in the field of sensor networks. Every node in a sensor network can send data packets to the base station; however, if each node sends its sensed data directly to the base station, it depletes its power very quickly.
Many protocols work on a cluster basis using some input information. The LEACH protocol is one that presents a solution to the above problem: clusters are formed to fuse data before transmitting it to the base station. LEACH achieves the desired solution with the help of eight important modifications and, compared to direct transmission, measures the situation when nodes are alive or dead. In this paper we propose a modified protocol named the Group Based Power Efficient Gathering Protocol in WSN.
The main characteristic of the proposed protocol is that it works in the form of groups; the group-based protocol is an improvement over the LEACH protocol. In the results section, the performance of LEACH and of the Group Based Power Efficient Gathering Protocol in WSN is displayed. In our protocol, each node communicates with its cluster head, and the cluster heads are responsible for sending data to the base station, thus reducing the amount of energy spent per round. Simulation results show that our protocol performs better than LEACH.
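LEACH's cluster-head rotation, which the proposed protocol builds on, elects heads probabilistically each round using the standard LEACH threshold T(n) = p / (1 - p·(r mod 1/p)). The sketch below simplifies the epoch bookkeeping (nodes that already served are not excluded), so it illustrates only the election mechanism:

```python
import random

def leach_threshold(p, r):
    """Standard LEACH election threshold for round r with desired
    cluster-head fraction p. Real LEACH sets the threshold to 0 for
    nodes that already served as heads in the current epoch."""
    return p / (1 - p * (r % int(1 / p)))

def elect_heads(node_ids, p, r, rng=random.Random(42)):
    """Each node becomes a cluster head for this round when its random
    draw falls below the threshold (epoch bookkeeping omitted)."""
    t = leach_threshold(p, r)
    return [n for n in node_ids if rng.random() < t]

heads = elect_heads(range(100), p=0.05, r=0)
print(len(heads), "cluster heads elected in round 0")
```

The threshold grows as the epoch progresses, reaching 1 in the last round of the epoch so that every node serves as head exactly once per epoch, which is how the energy load is spread evenly.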
Design & Evaluation of Power Budget for a Bidirectional CWDM Passive Optical Network
With the increasing bandwidth demand from enterprises and households, the data rates of broadband access networks will be required to exceed 1 Gbps for each customer. To address this, Time-Division Multiplexing Passive Optical Networks (TDM-PON) such as Gigabit PON (GPON) and Ethernet PON (EPON) have been deployed to resolve the bandwidth bottleneck. These technologies, however, still cannot meet the demands of increasing services such as High Definition TV (HDTV). In this thesis, a Coarse Wavelength Division Multiplexing Passive Optical Network (CWDM-PON) is employed as the most effective technology for enhancing bandwidth on the access side. The report gives a detailed description of the work done in designing the whole setup. The proposed setup has been tested, simulated, and analyzed using the OptiSystem software, and complete results with graphs are included. The results show that the designed setup has the capability to serve a large number of customers with an acceptable BER value.
Keywords: Time-Division Multiplexing Passive Optical Network (TDM-PON), Gigabit PON (GPON), Fiber-To-The-Home (FTTH), High Definition TV (HDTV), Coarse Wavelength Division Multiplexing Passive Optical Network (CWDM-PON).
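The link power budget at the heart of such a design can be sketched as transmit power minus receiver sensitivity, less the summed link losses. All loss figures below are typical textbook values, not the parameters used in the thesis simulations:

```python
def power_budget_margin(tx_dbm, rx_sens_dbm, fibre_km, splitter_db,
                        connectors, atten_db_per_km=0.25,
                        connector_db=0.5):
    """Optical power budget in dB: available budget (Tx power minus
    receiver sensitivity) minus the summed link losses. A positive
    margin means the link closes with room to spare."""
    budget = tx_dbm - rx_sens_dbm
    losses = (fibre_km * atten_db_per_km + splitter_db
              + connectors * connector_db)
    return budget - losses

# Illustrative link: +3 dBm laser, -28 dBm receiver sensitivity,
# 20 km feeder fibre, a 1:32 splitter (~17.5 dB), 4 connectors.
margin = power_budget_margin(3, -28, 20, 17.5, 4)
print(round(margin, 1), "dB margin")
```

The split ratio dominates the loss column, which is why the number of customers a PON can serve at an acceptable BER is ultimately limited by the remaining margin.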