Browsing Publication by Title
Now showing 1 - 20 of 401
- Publication: 3D Object Detection and Tracking Methods using Deep Learning for Computer Vision Applications (2021)
  Authors: Shreyas E; Sheth M.H
  3D multi-object detection and tracking is an essential constituent of many applications in today's world. Object detection is a technology related to computer vision and image processing that allows us to detect instances of certain classes. There are numerous applications, such as robotics, autonomous driving and augmented reality. A bounding box often defines the region of interest, which is then classified into the respective category. Due to the identical appearance and shape of various objects and the interference of lighting and shielding, object detection has always been a challenging problem in computer vision. Conventional 2D object detection yields axis-aligned bounding boxes with four degrees of freedom: centre (x, y) and 2D size (w, h). 3D bounding boxes generally have six degrees of freedom: 3D centre location (x, y, z) and 3D physical size (w, h, l). 2D object detection and tracking methods do not provide the depth information needed to perform essential tasks in various computer vision applications; autonomous driving is one such application. 3D object detection includes depth information, which reveals more about the structure of the detected object. This additional information is required to make decisions accurately in the different fields where 3D object detection and tracking can be applied. In this paper, various 3D object detection and tracking methods are elaborated for various computer vision applications, spanning fields such as robotics, driving, the space sector and the military. © 2021 IEEE. Scopus Citations: 17

- Publication: 5th International Conference on CSITSS-2021 (2021)
  Authors: Kumar P R
  [No abstract available]

- Publication: A collaborative filtering recommendation engine in a distributed environment (2014)
  Authors: Ghuli P; Ghosh A
  The tremendous increase in information available over the Internet has created a challenge in searching for useful information, so intelligent approaches are needed to help users efficiently locate and retrieve information from the Web. In recent times, recommender systems recommend everything from movies, books, music, restaurants and news to jokes. Collaborative filtering (CF) algorithms are among the most successful recommendation techniques, presenting items and products that match a user's interests. There are two methods in CF: user-based CF and item-based CF. The former finds a certain user's interests by finding other users who have similar interests, whereas item-based CF looks at the set of items rated by all users and computes how similar they are to the target item under recommendation. This paper aims to develop a model by splitting the costly computations in CF algorithms into three Map-Reduce phases. Each of these phases can then be executed independently on different nodes in parallel. To compute the similarity, the Pearson correlation coefficient is used, which measures how linearly two items relate to each other, giving a value between -1 and +1 inclusive. In addition, this paper compares the implementation of item-based and user-based CF algorithms on the Map-Reduce framework. Experimental results showed that the running time of the algorithm improves by approximately 30% with every node added to the Hadoop cluster. However, item-based CF showed better scalability than user-based CF. © 2014 IEEE. Scopus Citations: 14

- Publication: A Comparative Study of Spark Schedulers' Performance (2019)
  Authors: Raju A; Ramanathan R
  Big data applications have become an integral part of many intelligent systems, enabling better business decision making by extracting useful information from historical data. This involves a large number of computational tasks and algorithms. In such scenarios, the processing time becomes an important concern and needs to be optimized. Apache Spark is a high-speed big data analytics engine, best known for its fast computation owing to its in-memory data structures and efficient schedulers. Further optimization to enhance the speed of Spark jobs would be valuable. In this paper, the efficiencies of the four standard schedulers of Apache Spark, i.e. the Standalone Scheduler, Mesos, YARN (Yet Another Resource Negotiator) and the recently introduced Kubernetes scheduler, are compared. The speeds of these schedulers are compared for jobs of different sizes, and the best alternative is identified. © 2019 IEEE. Scopus Citations: 2

- Publication: A Comparative Study to Determine a Suitable Legal Knowledge Representation Format (2018)
  Authors: Shelar A
  Legal text is typically conveyed in natural language, which is not directly suitable for processing by computers. Representing the complex knowledge in legal text is a challenging activity, and researchers have already proposed different approaches to knowledge representation in the legal domain. Legal practice has been going through changes over the past few decades, such as the adoption of semi-automated and automated decision support systems. Modeling and reasoning, e.g. in argumentation, has increasingly become a topic of interest in the AI and legal communities. CEN MetaLex, Akoma Ntoso and LegalXML are a few technologies that show promise in catering to the requirements of complex legal knowledge representation. In this paper we explore the potential of adapting OASIS LegalDocML and LegalRuleML in the context of Indian legal text, specifically with the example of the Consumer Protection Act, 1986. © 2018 IEEE. Scopus Citations: 2

- Publication: A comprehensive review of privacy preserving data publishing (PPDP) algorithms for multiple sensitive attributes (MSA) (2023)
  Authors: Gadad V
  Privacy preserving data publishing (PPDP) provides a suite of anonymization algorithms and tools that aim to balance the privacy of sensitive attributes with the utility of the published data. In this domain, extensive work has been carried out to preserve the privacy of single sensitive attributes. Since most data obtained from any domain includes multiple sensitive attributes (MSAs), there is a greater need to preserve their privacy as well. Data sets with multiple sensitive attributes allow one to perform effective data analysis, research and predictions. Hence, it is important to investigate privacy preserving algorithms for multiple sensitive attributes, which leads to higher utilization of the data. This chapter presents a comparative analysis of the effectiveness of PPDP algorithms for MSAs. Specifically, the chapter focuses on privacy and utility goals and illustrates the implications of the overall study, which promotes the development of effective privacy preservation techniques for MSAs. © 2023, IGI Global.

- Publication: A Detail Survey on QUIC and its Impact on Network Data Transmission (2022)
  Authors: Nayak G.P.N; Dey N; Neha N; Hariprasad M; Akram M.
  As the number of Internet users progressively increases, latency and speed have become significant concerns among netizens. To meet their demands, Google developed a novel network protocol named QUIC (Quick UDP Internet Connections), which is built on UDP. Recently, the IETF (Internet Engineering Task Force) has been working towards standardising QUIC. Applications like YouTube, Gmail, Chrome, Instagram and Meta have already adopted QUIC as their default network protocol. Through this paper, a critical comparative study between the conventionally used TCP/IP and the now-emerging QUIC has been conducted in terms of connection establishment, speed, reliability and security. The paper explains the header structure of TCP and also details the packet structure along with the method of encryption adopted in QUIC. This paper also presents a comprehensive analysis of the advantages and vulnerabilities of QUIC; the vulnerabilities, along with their proposed solutions, have been tabulated. QUIC has many advantages, and with more research it has the potential to become the future of network protocols. © 2022 IEEE. Scopus Citations: 5

- Publication: A detailed Analysis of Lightweight Cryptographic techniques on Internet-of-Things (2021)
  Authors: Srinath S; Shahabadkar R.
  The use of Internet-of-Things technology has risen due to the many advantages it offers. IoT device networks collect and transmit data for processing and perform the necessary actions once the data is processed. IoT devices do not have many security mechanisms because of their limited functionality, which can allow the data to be compromised while it is transmitted to the server. To secure the data, cryptographic techniques can be used; however, IoT devices cannot support traditional cryptographic algorithms, as these cannot be implemented with the limited resources such devices possess. Therefore, lightweight cryptographic algorithms have been introduced, designed to adhere to the restrictions of IoT devices. This paper surveys various works of literature that implement lightweight cryptographic algorithms for heterogeneous IoT devices to secure the data transmitted by constrained IoT devices. © 2021 IEEE. Scopus Citations: 1

- Publication: A flexible and reconfigurable controller design for buck converters (2014)
  Authors: Dutta S; Gadgil S; Agarwal S
  The need for more flexible and reconfigurable dc-dc converters has pushed designers to seek alternative ways to control the PWM-modulated switch inside the dc-dc converter. This paper presents the design and implementation of a fully customizable Arduino-based controller for an SMPS (buck converter). The controller gives the user a high degree of autonomy and permits the variation of several output parameters, such as the magnitude of the regulated voltage, the percentage of voltage regulation, the switching frequency and therefore the amount of ripple in the output, by simple modifications of the code, eliminating the need for hardware changes. The flexibility of the controller has been illustrated by reconfiguring it to meet different design specifications. The results obtained have been analysed, and the possible advantages of such a reconfigurable buck converter over other existing solutions, as well as its applications, have been discussed. © 2014 IEEE. Scopus Citations: 1

- Publication: A framework for named entity recognition of clinical data (2020)
  Authors: Ravikumar J
  With the emergence of technologies like big data, healthcare services are also being explored to apply this technology and reap its benefits. Big data analytics can be implemented as part of e-health, which involves the extrapolation of actionable insights from sources like health knowledge bases and health information systems. Present-day medical practice generates a lot of data continuously, and the Hospital Information System is a rapidly developing technology. This data is a major asset: information can be retrieved from the gathered mass of medical records by posing queries and keywords. However, there is the problem of retrieving precisely the data the user needs, because a Hospital Information System contains more than one document related to a particular item, person, episode and so on. Information extraction is one of the data mining techniques used to extract models describing important data classes. The proposed work concentrates mainly on achieving good performance in the medical domain. It has two primary purposes: extracting significant information from patient text records, and tagging named entities such as person, organization, location, disease name and symptoms. Such a system can help improve survival rates, tailor care protocols and review queries to better manage chronic-care populations, lower costs by reducing unnecessary hospitalizations, and shorten the length of stay when admission is necessary. Copyright © 2020 Institute of Advanced Engineering and Science. All rights reserved. Scopus Citations: 1

- Publication: A framework for secure live migration of virtual machines (2013)
  Server virtualization is an emerging technology that provides efficient resource utilization and cost-saving benefits. It consolidates many physical servers into a single physical server, saving hardware resources, physical space, power consumption, air-conditioning capacity and the manpower needed to manage the servers; virtualization thus assists 'Green Technology'. Live migration is an essential feature of virtualization that allows a running virtual machine to move from one system to another without halting it, extending the list of benefits server virtualization provides. Almost all virtualization software now includes support for live migration of virtual machines. However, the security of live migration is still in its infancy and is yet to be fully analyzed, while both the usage of live migration and security exploits against it have increased over time. The security concerns around live migration are a major factor in its adoption by the IT industry. In this paper we discuss the attack model on the virtualization system, and design and implement a security framework for secure live migration of virtual machines. The framework is an integrated security solution that addresses role-based access policy, network intrusion, firewall protection and encryption for a secure live migration process. © 2013 IEEE. Scopus Citations: 27

- Publication: A Greedy Approach to Hide Sensitive Frequent Itemsets with Reduced Side Effects
  Frequent itemset mining discovers associations present among items in a large database. However, due to privacy concerns, some sensitive frequent itemsets have to be hidden from the database before it is delivered to the data miner. In this paper, we propose a greedy approach that provides an optimal solution for hiding frequent itemsets that are considered sensitive. The hiding process maximizes the utility of the modified database by introducing the least possible amount of side effects.
  The algorithm employs a weighting scheme that computes a transaction weight, allowing it to select candidate transactions at each stage of the iteration based on a measurement of side effects. We investigated the effectiveness of the proposed algorithm by comparing it with another heuristic algorithm using parameters such as the number of sensitive frequent itemsets, the length of sensitive frequent itemsets and the minimum support, on a number of datasets publicly available through the Frequent Itemset Mining (FIMI) repository. The experimental results demonstrated that our approach protects more non-sensitive frequent itemsets from being over-hidden than the heuristic approach does. © Springer Nature Switzerland AG 2020.
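As a rough illustration of the general technique behind the entry above, the sketch below hides a sensitive itemset by lowering its support below the mining threshold. The transactions, the victim-selection rule (fewest extra items, as a crude proxy for causing few side effects) and all names are illustrative assumptions; the paper's actual transaction-weighting scheme is more elaborate and is not reproduced here.

```python
# Minimal sketch of itemset hiding by support reduction
# (hypothetical data; not the paper's exact algorithm).

def support(db, itemset):
    """Number of transactions that contain every item of `itemset`."""
    return sum(1 for t in db if itemset <= t)

def hide_sensitive(db, sensitive, min_sup):
    """Greedily lower the support of `sensitive` below `min_sup`.

    At each step, pick the supporting transaction with the fewest extra
    items (a simple stand-in for a side-effect measurement) and delete
    one item of the sensitive itemset from it.
    """
    db = [set(t) for t in db]
    while support(db, sensitive) >= min_sup:
        victim = min((t for t in db if sensitive <= t),
                     key=lambda t: len(t - sensitive))
        victim.discard(next(iter(sensitive)))  # remove one sensitive item
    return db

transactions = [{"a", "b", "c"}, {"a", "b"}, {"a", "b", "d"}, {"c", "d"}]
sanitized = hide_sensitive(transactions, {"a", "b"}, min_sup=2)
print(support(sanitized, {"a", "b"}))  # -> 1, now below the threshold
```

Each iteration removes exactly one item from one supporting transaction, so the support of the sensitive itemset drops by one per step; transactions are modified, never deleted, which is what keeps the side effects on non-sensitive itemsets small.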
- Publication: A Hybrid Framework for Expediting Emergency Vehicle Movement on Indian Roads (2020)
  Authors: Raman A; Kaushik S; Rao K.V.S.R
  Unhindered and smooth movement of emergency vehicles within a city is a crucial aspect of any intelligent transport system. It is common to observe emergency vehicles such as ambulances and fire engines obstructed by traffic snarls on Indian roads, especially in the proximity of busy intersections. Existing literature primarily advocates the deployment of RFID technology to terminate the round-robin sequence of the signal system and switch the signal to green in the required direction. However, this technology has proven to be susceptible to electromagnetic interference, and its economic feasibility is questionable. This paper proposes a model that employs real-time image processing and object detection using a convolutional neural network (CNN) architecture called SSD MobileNet. Unlike a few other architectures, SSD MobileNet requires very limited computation, enabling swift detection. Furthermore, an acoustic signal processing (pitch detection) algorithm is employed to detect the sirens of emergency vehicles, nullifying the potential false positives (e.g. an ambulance in a non-emergency scenario) that creep into object detection using image processing. Both algorithms work in unison, bolstering the accuracy of detection. Upon detection, the signal instantly switches to green, facilitating the expedited movement of emergency vehicles even in high-traffic conditions. © 2020 IEEE. Scopus Citations: 13

- Publication: A Machine Learning Approach to Water Leak Localization (2019)
  Authors: Shravani D; Prajwal Y.R; Ahmad S.F.
  A smart water management system is proposed in this paper to identify leakages and predict the location of leakages in pipelines. The system detects leakages by utilizing the flow rates of water in pipelines and predicts their location by applying machine learning (ML) techniques. Different ML approaches have been developed and tested for predicting the location of leakages in the pipeline, and a comparison of these models is performed to obtain the best model for location prediction. A prototype has been developed in STAR-CCM+, a Computational Fluid Dynamics (CFD) software package, to test the proposed system. The results show that, among the machine-learning-based location prediction models, the Multi-Layer Perceptron (MLP) performs best, with an accuracy of 94.47% and an F1 score of 0.95. © 2019 IEEE. Scopus Citations: 12

- Publication: A MapReduce framework to implement enhanced K-means algorithm (2016)
  Authors: Purohit B.V.
  Data clustering is an important aspect of big data analytics. Clustering helps to categorize the data, which in turn helps to reveal hidden patterns. K-means is one such clustering algorithm, well known for its simple computation and its capability of being executed in parallel. Big data analytics requires distributed computing, which can be achieved using the MapReduce technique. In this paper, an enhanced K-means algorithm has been implemented using the MapReduce technique that comes with the Hadoop platform. The enhanced K-means algorithm is efficient compared to the traditional K-means algorithm, as it selects the initial cluster centroids by averaging the data points rather than choosing centroids randomly for the initial computations, as is done in traditional K-means. The enhanced K-means algorithm achieves better accuracy in cluster formation than traditional K-means. © 2015 IEEE. Scopus Citations: 6

- Publication: A new privacy preserving measure: p-sensitive, t-closeness (2013)
  Authors: Srinivasan G.N; Sukanya K.
  Preserving sensitive data has become a great challenge in data privacy research. Popular approaches such as k-anonymity, t-closeness [1] and l-diversity are effective measures for preserving privacy, and these techniques solve many privacy issues. However, all of these measures suffer from one or another type of attack. To minimize these attacks, a new measure called p-sensitive, t-closeness is introduced. This measure preserves sensitive data by distributing the different values of the sensitive attribute according to the t-closeness approach while introducing p-sensitivity, thereby minimizing attacks and improving the efficiency and utility of the data. The technique, termed p-sensitive, t-closeness, satisfies p-sensitivity and t-closeness for a table by relaxing the threshold value t, so that the table satisfies p-sensitivity and overcomes many limitations of previous approaches. © 2013 Springer. Scopus Citations: 4

- Publication: A novel approach to implement a shop bot on distributed web crawler (2014)
  Authors: Ghuli P
  A shopping agent is a kind of Web application software that, when queried by the customer, provides him/her with a consolidated list of information about all the retail products relating to the query from various e-commerce sites and resources. This helps customers decide on the best site offering the nearest, cheapest and most reliable product they desire to buy. This paper aims to develop a distributed crawler to help online shoppers compare the prices of requested products from different vendors and get the best deal in one place. In a real-world scenario, crawling usually consumes a large amount of computing resources to process the vast amount of data on large e-commerce servers. An alternative is to use the map-reduce paradigm to process large amounts of data on a Hadoop cluster of cheap commodity hardware. Therefore, this paper describes the implementation of a shopping agent on a distributed web crawler that uses the map-reduce paradigm to crawl web pages. © 2014 IEEE. Scopus Citations: 1

- Publication: A novel privacy preserving model to enhance data privacy (2015)
  Authors: Srinivasan G.N.
  Organizations and government agencies publish huge amounts of data belonging to many individuals for analysis purposes. The published data will be utilized by researchers to perform mining or analytics; at the same time, it should protect the private information of individuals. There are many techniques applied to data to preserve the private information of individuals; popular techniques for privacy preservation in data publishing are k-anonymity, l-diversity and t-closeness. Each technique is subject to its own limitations and is vulnerable to privacy attacks. In this paper, we propose a new privacy preserving technique that is not vulnerable to many of the privacy attacks in the literature. Our results show that the technique overcomes these attacks and preserves the privacy of individuals. The proposed technique unifies the popular privacy preserving techniques p-sensitivity, k-anonymity and t-closeness. The p-sensitivity concept in the technique identifies different levels of sensitive data, and the closeness concept distributes these sensitive values throughout the table in such a way that the data is anonymized and it is difficult for any intruder to learn individual information. The results are tested using queries to assure that the technique is not vulnerable to the privacy attacks described in the paper. © Research India Publications.

- Publication: A novel utility metric to measure information loss for generalization and suppression techniques in Privacy Preserving Data publishing (2019)
  Authors: Gadad V
  Privacy has become of prime importance in this digital era. Personally Identifiable Information (PII) is collected by various organizations such as educational institutions, government bodies, hospitals, etc. The collected data is often published and utilized for analysis, research, decision making, advertising or business purposes. It is the duty of the data curator to store and publish the data safely: a person who has access to the published data must not be able to identify an individual or learn any new personal or sensitive information about them. Statistical Disclosure Control (SDC) is a suite of anonymization techniques whose aim is to post-process data containing sensitive or personal information and effectively protect the privacy of the participating data subjects. However, these techniques lead to 'information loss', i.e. any analysis or research carried out on such data might not give an appropriate result. This paper discusses various metrics available to assess the information loss in the processed data, and an attempt has been made to propose our own technique for measuring the loss. © 2019 IEEE. Scopus Citations: 2

- Publication: A Relative Study on Half-Wavelength and Stepped Impedance Resonator based Microstrip Band-Pass Filter (2022)
  Authors: Bharathi R; Sumathi M.S
  In this paper an attempt is made to compare the electrical performance of two microstrip band-pass filters (BPFs), one designed using a half-wavelength (λ/2) resonator and the other using a stepped impedance resonator (SIR). A coupled-resonator design methodology is adopted for both BPFs. The BPFs have a fractional bandwidth of around 1.9% at 4.195 GHz. The comparative study of the simulated results shows that the λ/2-resonator BPF provides better insertion loss than the SIR-based one. On the other hand, the SIR-based BPF provides better spurious-signal rejection in addition to miniaturization. © 2022 IEEE.
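As a quick sanity check on the passband quoted in the last entry, the fractional bandwidth and centre frequency from the abstract translate into an absolute bandwidth as follows. The symmetric band-edge split f0 ± BW/2 is a simplifying assumption, not a figure from the paper.

```python
# Back-of-the-envelope check of the passband quoted in the abstract:
# fractional bandwidth ~1.9 % at a centre frequency of 4.195 GHz.

f0_hz = 4.195e9   # centre frequency from the abstract
fbw = 0.019       # fractional bandwidth (1.9 %)

bw_hz = fbw * f0_hz           # absolute bandwidth
f_low = f0_hz - bw_hz / 2     # lower band edge (symmetric-split assumption)
f_high = f0_hz + bw_hz / 2    # upper band edge

print(f"bandwidth ~ {bw_hz / 1e6:.1f} MHz")                      # ~ 79.7 MHz
print(f"passband ~ {f_low / 1e9:.3f} to {f_high / 1e9:.3f} GHz")  # ~ 4.155 to 4.235 GHz
```

So a 1.9 % fractional bandwidth at 4.195 GHz corresponds to a roughly 80 MHz wide passband, which is what "narrowband" means in this context.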