Foreword

xiv

Preface

xv

Acknowledgment

xxi
Section 1 Healthcare Settings and Security |
|
|
Chapter 1 Action Rules Mining in Hoarseness Disease |
|
|
1 (6)
|
|
An action rule is an implication rule that shows the expected change in the decision value of an object as a result of changes made to some of its condition values. An example of an action rule is 'patients are expected to control their health regularly if they receive information about free medical tests once a year'. In this case, the decision value is the health status, and the condition value is whether the information is sent to the patient. Because of the complexity of some medical problems, this chapter discusses a strategy that generates action rules using a new knowledge base consisting of classification rules. As one of the testing domains for this research, we take a new system for gathering and processing clinical data on patients with throat disorders; the mined action rules suggest, in a simple way, how to construct a decision support module that makes diagnosis easier for patients.
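
To make the idea concrete, the sketch below shows one way an action rule could be represented and matched against patient records in Python. It is an illustration only, not the chapter's algorithm, and the attribute names (gets_test_info, controls_health) are invented.

# Hypothetical sketch of representing an action rule and checking which
# patients it applies to; all field names are invented for illustration.

def action_rule_applies(patient, condition_change):
    """True if the rule's flexible condition can still be changed for this
    patient, i.e. the patient currently has the 'from' value."""
    attr, (from_val, _to_val) = condition_change
    return patient.get(attr) == from_val

# Action rule: changing gets_test_info from False to True is expected to
# move the decision attribute controls_health from 'no' to 'yes'.
rule = {
    "condition_change": ("gets_test_info", (False, True)),
    "decision_change": ("controls_health", ("no", "yes")),
}

patients = [
    {"id": 1, "gets_test_info": False, "controls_health": "no"},
    {"id": 2, "gets_test_info": True,  "controls_health": "yes"},
]

candidates = [p["id"] for p in patients
              if action_rule_applies(p, rule["condition_change"])]
print("Patients the action rule could be applied to:", candidates)  # [1]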
|
|
|
Chapter 2 Secure Storage and Transmission of Healthcare Records |
|
|
7 (28)
|
|
|
Telemedicine has become a common method for transmitting medical images and patient data across long distances. With the growth of computer networks and the latest advances in digital technologies, large amounts of digital data are exchanged over various types of insecure networks, wired or wireless. Modern healthcare management systems need to change to accommodate these advances, and there is an urgent need to protect the confidentiality of healthcare records that are stored in common databases and transmitted over public, insecure channels. This chapter outlines DNA-sequence-based cryptography for the secure storage and transmission of health records; it is easy to implement and robust against cryptanalytic attack, as there is insignificant correlation between the original record and the encrypted image.
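
As a rough illustration of the flavour of DNA-based encoding, and not the chapter's actual scheme, the sketch below XORs a toy record with a hash-derived key stream and writes the ciphertext two bits per nucleotide. The key-stream construction and the sample data are assumptions.

# Minimal sketch of DNA-sequence-style encryption: bytes are XORed with a
# key stream and then encoded two bits per nucleotide (A, C, G, T).
import hashlib

BASES = "ACGT"                      # 00 -> A, 01 -> C, 10 -> G, 11 -> T

def keystream(key: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_to_dna(record: bytes, key: bytes) -> str:
    cipher = bytes(b ^ k for b, k in zip(record, keystream(key, len(record))))
    dna = []
    for byte in cipher:
        for shift in (6, 4, 2, 0):                # two bits per base
            dna.append(BASES[(byte >> shift) & 0b11])
    return "".join(dna)

def decrypt_from_dna(dna: str, key: bytes) -> bytes:
    cipher = bytearray()
    for i in range(0, len(dna), 4):
        byte = 0
        for base in dna[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        cipher.append(byte)
    return bytes(b ^ k for b, k in zip(cipher, keystream(key, len(cipher))))

encoded = encrypt_to_dna(b"toy patient record", b"shared-secret")
assert decrypt_from_dna(encoded, b"shared-secret") == b"toy patient record"
print(encoded)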
|
|
|
Chapter 3 Fast Medical Image Segmentation Using Energy-Based Method |
|
|
35 (26)
|
|
|
Medical imaging applications have become a boon to the healthcare industry, which needs accurate and fast segmentation of medical images for correct diagnosis. The Level Set Method (LSM) is a capable technique, but fast processing with accurate segments remains difficult, and region-based models such as Active Contours and Globally Optimal Geodesic Active Contours (GOGAC) perform inadequately on images with intensity inhomogeneity. In this chapter, we propose an improved region-based level set model motivated by the geodesic active contour models and the Mumford-Shah model. It eliminates the re-initialization step of the traditional level set model, removing the need for this computationally expensive process. Compared with traditional models, our model is more robust to images with weak edges and intensity inhomogeneity.
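
A minimal sketch of a region-based level set in this spirit is shown below. It uses the Chan-Vese/Mumford-Shah region data term, and Gaussian smoothing of the level set function stands in for the curvature term so that no re-initialization is needed; the test image and all parameters are illustrative, not the chapter's model.

import numpy as np
from scipy.ndimage import gaussian_filter

def region_level_set(img, n_iter=200, dt=0.5, smooth=1.0):
    """Minimal region-based (Chan-Vese / Mumford-Shah style) level set.
    Gaussian smoothing of phi approximates the curvature term, so no costly
    re-initialisation of phi is needed between iterations."""
    img = img.astype(float)
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    # initial level set: positive inside a centred circle
    phi = min(h, w) / 4.0 - np.hypot(x - w / 2, y - h / 2)
    for _ in range(n_iter):
        inside = phi > 0
        c1 = img[inside].mean() if inside.any() else img.mean()
        c2 = img[~inside].mean() if (~inside).any() else img.mean()
        # region data term: grow where the pixel looks like the inside mean
        force = -(img - c1) ** 2 + (img - c2) ** 2
        phi = gaussian_filter(phi + dt * force, smooth)
    return phi > 0

# synthetic test image: a bright region with weak edges plus noise
img = np.zeros((128, 128))
img[40:90, 30:100] = 0.6
img += 0.1 * np.random.rand(128, 128)
mask = region_level_set(img)
print("segmented pixels:", int(mask.sum()))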
|
|
|
Chapter 4 Towards Parameterized Shared Key for AVK Approach |
|
|
61 (18)
|
|
|
"Key" plays a vital role in every symmetric key cryptosystem. The obvious way of enhancing security of any cryptosystem is to keep the key as large as possible. But it may not be suitable for low power devices since higher computation will be done for longer keys and that will increase the power requirement which decreases the device's performance. In order to resolve the former specified problem an alternative approach can be used in which the length of key is fixed and its value varies in every session. This is Time Variant Key approach or Automatic Variable Key (AVK) approach. The Security of AVK based cryptosystem is enhanced by exchanging some parameters instead of keys between the communicating parties, then these parameters will be used to generate required keys at the receiver end. This chapter presents implementation of the above specified Mechanism. A model has been demonstrated with parameterized scheme and issues in AVK approach. Further, it has been analyzed from different users' perspectives. This chapter also highlights the benefits of AVK model to ensure two levels of security with characterization of methods for AVK and Estimation of key computation based on parameters only. The characteristic components of recent styles of key design with consideration of key size, life time of key and breaking threshold has also been pointed out. These characteristics are essential in the design of efficient symmetric key cryptosystem. The novel approach of AVK based cryptosystem is suitable for low power devices and useful for exchanging very large objects or files. This scheme has been demonstrated with Fibonacci-Q matrix and sparse matrix based diffused key information exchange procedures. These models have been further tested from perspective of hackers and cryptanalyst, to exploit any weakness with fixed size dynamic keys. |
|
|
|
Chapter 5 Innovative Approach for Improving Intrusion Detection Using Genetic Algorithm with Layered Approach |
|
|
79 (27)
|
|
The detection portion of an Intrusion Detection System (IDS) is the most complicated. The goal of an IDS is to make the network more secure, and the prevention portion of the IDS must support that effort: after malicious or unwanted traffic is identified, prevention techniques can be used to stop it. When an IDS is placed in an inline configuration, all traffic must travel through an IDS sensor. In this chapter, the feature set is reduced and a layered architecture is applied to identify various attacks (DoS, R2L, U2R, Probe), and accuracy is reported using an SVM combined with a genetic approach.
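
A rough sketch of the general approach, on synthetic data rather than an intrusion dataset, is given below: a small genetic algorithm searches over binary feature masks, and an SVM's cross-validated accuracy serves as the fitness function. Population size, rates, and data are illustrative assumptions.

# Hedged sketch: GA feature selection scored by an SVM.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=20,
                           n_informative=6, random_state=0)

def fitness(mask):
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(kernel="rbf", gamma="scale"),
                           X[:, mask], y, cv=3).mean()

# tiny genetic algorithm over binary feature masks
pop = rng.integers(0, 2, size=(12, X.shape[1])).astype(bool)
for generation in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:6]]    # selection: keep best half
    children = []
    for _ in range(6):
        a, b = parents[rng.integers(0, 6, size=2)]
        cut = rng.integers(1, X.shape[1])          # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.05       # mutation
        children.append(np.where(flip, ~child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
print("cv accuracy with reduced features:", round(fitness(best), 3))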
|
|
Section 2 Knowledge Visualization and Big Data |
|
|
Chapter 6 Knowledge Extraction from Domain-Specific Documents |
|
|
106 (18)
|
|
|
|
Knowledge-based systems have become widespread in recent years, and knowledge-base developers need to be able to share and reuse the knowledge bases they build. As a result, interoperability among different knowledge-representation systems is essential. Domain ontology seeks to reduce conceptual and terminological confusion among users who need to share various kinds of information. This chapter shows how these structures make it possible to bridge the gap between standard objects and knowledge-based systems.
|
|
|
Chapter 7 Semi-Automatic Ontology Design for Educational Purposes |
|
|
124 (19)
|
|
|
|
In this chapter, we present a (semi-)automatic framework that aims to produce a domain concept from text and to derive a domain ontology from this concept. The chapter details the steps that transform textual resources (particularly textual learning objects) into a domain concept and explains how this abstract structure is transformed into a more formal domain ontology. The methodology particularly targets the educational field because of its need for such structures (ontologies and knowledge management). The chapter also shows how these structures make it possible to bridge the gap between core concepts and formal ontology.
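
A toy sketch of this kind of pipeline is shown below: candidate domain terms are extracted from learning-object texts with TF-IDF and emitted as OWL classes with rdflib. The term-selection heuristic, the sample texts, and the namespace are assumptions, not the chapter's framework.

# Hypothetical sketch: text -> candidate concepts -> small OWL ontology.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import OWL, RDF, RDFS
from sklearn.feature_extraction.text import TfidfVectorizer

learning_objects = [
    "A stack is a last-in first-out data structure supporting push and pop.",
    "A queue is a first-in first-out data structure supporting enqueue and dequeue.",
    "Binary trees and graphs are linked data structures used in many algorithms.",
]

vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform(learning_objects)
scores = tfidf.sum(axis=0).A1                      # importance of each term
terms = vec.get_feature_names_out()
concepts = [terms[i] for i in scores.argsort()[::-1][:5]]   # top-5 candidates

EX = Namespace("http://example.org/course-ontology#")       # hypothetical IRI
g = Graph()
g.bind("ex", EX)
for concept in concepts:
    cls = URIRef(EX + concept.capitalize())
    g.add((cls, RDF.type, OWL.Class))
    g.add((cls, RDFS.label, Literal(concept)))

print(g.serialize(format="turtle"))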
|
|
|
Chapter 8 Improving Multimodality Image Fusion through Integrate AFL and Wavelet Transform |
|
|
143 (15)
|
|
|
Image fusion based on the wavelet transform is the most commonly used image fusion method; it fuses the source image data in the wavelet domain according to fusion rules. However, because the contribution of each source image to the fused image is uncertain, designing a good fusion rule that incorporates as much information as possible into the fused image becomes the most important issue. Adaptive fuzzy logic is well suited to resolving such uncertainty, yet it has not been used in the design of fusion rules. A new fusion technique based on the wavelet transform and adaptive fuzzy logic is introduced in this chapter. After applying the wavelet transform to the source images, it computes the weight of each source image's coefficients through adaptive fuzzy logic and then fuses the coefficients through weighted averaging with the computed weights to obtain the fused image. Mutual Information, Peak Signal-to-Noise Ratio, and Mean Square Error are used as evaluation criteria.
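
The sketch below shows wavelet-domain fusion with PyWavelets. A simple magnitude-based weight stands in for the adaptive-fuzzy-logic weighting described in the chapter, and the source images are synthetic, so treat it as a sketch of the pipeline rather than the chapter's method.

import numpy as np
import pywt

def fuse(img_a, img_b, wavelet="db2", level=2):
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = []
    # approximation band: weighted average (weights would come from fuzzy logic)
    wa = np.abs(ca[0]) / (np.abs(ca[0]) + np.abs(cb[0]) + 1e-9)
    fused.append(wa * ca[0] + (1 - wa) * cb[0])
    # detail bands: keep the coefficient with the larger magnitude
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in ((ha, hb), (va, vb), (da, db))))
    return pywt.waverec2(fused, wavelet)

a = np.outer(np.linspace(0, 1, 64), np.ones(64))      # synthetic source images
b = np.outer(np.ones(64), np.linspace(1, 0, 64))
f = fuse(a, b)
mse = float(np.mean((f - (a + b) / 2) ** 2))          # Mean Square Error criterion
print("fused image shape:", f.shape, "MSE vs. average:", round(mse, 4))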
|
|
|
Chapter 9 Big Data: Techniques, Tools, and Technologies — NoSQL Database |
|
|
158 (23)
|
|
|
With every passing day, data generation increases exponentially; its volume, variety, and velocity make it challenging to analyze, interpret, and visualize the data in order to gain deeper insights. Billions of networked sensors are being embedded in devices such as smartphones, automobiles, laptops, PCs, and industrial machines that operate on, generate, and communicate data, alongside sources such as social media sites. Thus, the data obtained from various sources exists in structured, semi-structured, and unstructured forms. Traditional database systems are not suitable for handling these data formats, so new tools and techniques have been developed to work with such data; NoSQL is one of them. Currently, many NoSQL databases are available in the market, each specially designed to solve a specific type of data-handling problem, and most are developed with special attention to the problems of business organizations and enterprises. The chapter focuses on various aspects of NoSQL as a tool for handling big data.
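
As a small illustration of the schema flexibility a NoSQL document store offers, the sketch below inserts heterogeneous sensor documents into one collection and queries by a field only some of them have. It assumes a MongoDB instance running locally on the default port; the database, collection, and field names are invented.

# Hypothetical sketch: semi-structured records in a document store.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
readings = client["iot_demo"]["sensor_readings"]

# heterogeneous documents: no fixed schema or ALTER TABLE needed
readings.insert_many([
    {"device": "phone-17", "type": "gps", "lat": 21.15, "lon": 79.09},
    {"device": "car-03", "type": "engine", "rpm": 2400, "temp_c": 91,
     "alerts": ["oil_pressure_low"]},
    {"device": "lathe-7", "type": "vibration", "samples": [0.1, 0.4, 0.2]},
])

# query by a field that only some documents have
for doc in readings.find({"temp_c": {"$gt": 90}}):
    print(doc["device"], doc["temp_c"])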
|
|
Section 3 Data Mining: Utilization and Application |
|
|
Chapter 10 Bombay Stock Exchange of India: Patterns and Trends Prediction Using Data Mining Techniques |
|
|
181 (32)
|
|
|
|
The stock market is dynamic and susceptible to quick changes because it depends on various factors such as share price, fundamental variables (P/E ratio, dividend yield, etc.), election results, and rumors. Nowadays, prediction is an important process for determining the future worth of a company, and successful prediction brings motivation and awareness to the stock community as well as economic growth to the country. In the past, various theories and methods such as the Efficient Market Hypothesis (EMH), Random Walk Theory, and fundamental and technical analyses have been proposed. These methods, alone or in combination, have not yet achieved much success because they are complex and time-consuming and perform well only on short data series. These days, stock market users mostly rely on intelligent trading systems that help them predict share prices under various situations and conditions. Data mining is a broad area that supports various business intelligence techniques; it can address various financial problems such as buying/selling securities, bond analysis, and contract analysis. In this study, various prediction techniques such as linear regression, multiple regression, association rule mining, clustering, and neural networks are applied, and their performance is compared on Bombay Stock Exchange (BSE) data.
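
A minimal sketch of one of the listed techniques, linear regression on lagged closing prices, is shown below. The price series is synthetic, not BSE data, and the window size is an arbitrary choice.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
days = np.arange(250)
price = 500 + 0.8 * days + 10 * np.sin(days / 15) + rng.normal(0, 5, 250)

# predict tomorrow's close from the last 5 closes (lag features)
window = 5
X = np.column_stack([price[i:len(price) - window + i] for i in range(window)])
y = price[window:]

split = 200
model = LinearRegression().fit(X[:split], y[:split])
pred = model.predict(X[split:])
mae = np.mean(np.abs(pred - y[split:]))
print(f"mean absolute error on held-out days: {mae:.2f} points")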
|
|
|
Chapter 11 Profit Pattern Mining Using Soft Computing for Decision Making: Pattern Mining Using Vague Set and Genetic Algorithm |
|
|
213 (27)
|
|
|
|
Decision making is a crucial task in every business. Profit pattern mining hits the target by minimizing the gap between statistics-based pattern generation and value-based decision making, but this task becomes very difficult in the large, imprecise, and vague environments that have become frequent in recent years. Combining soft computing with data mining is a novel way to address this difficulty. General approaches to association rule mining focus on inducing rules from correlations among data and finding frequently occurring patterns. The main technique uses support and confidence measures for generating rules, which is no longer adequate as a measure of interest; since data have become more multifaceted, it is necessary to find solutions that deal with such problems and use new measures such as profit and significance. In this chapter, the authors apply pattern mining with vague set theory, genetic algorithms, and related properties to commercial management to deal with business decision-making problems.
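
The toy sketch below contrasts the classic support measure with a profit measure when ranking itemsets; the transactions and unit profits are invented, and the vague-set and genetic-algorithm components of the chapter are not reproduced here.

# Hedged sketch: rank item pairs by profit as well as by support.
from itertools import combinations

transactions = [
    {"bread", "butter", "jam"},
    {"bread", "butter"},
    {"laptop", "mouse"},
    {"laptop", "mouse", "bag"},
    {"bread", "jam"},
]
unit_profit = {"bread": 2, "butter": 3, "jam": 4, "laptop": 120,
               "mouse": 15, "bag": 25}

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

def profit(itemset):
    count = sum(itemset <= t for t in transactions)
    return count * sum(unit_profit[i] for i in itemset)

items = sorted(set().union(*transactions))
pairs = [frozenset(p) for p in combinations(items, 2)]
for p in sorted(pairs, key=profit, reverse=True)[:3]:
    print(set(p), "support =", support(p), "profit =", profit(p))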
|
|
|
Chapter 12 Effect of Odia and Tamil Music on the ANS and the Conduction Pathway of Heart of Odia Volunteers |
|
|
240 (24)
|
|
|
|
|
|
|
The current study delineates the effect of Odia and Tamil music on the Autonomic Nervous System (ANS) and the cardiac conduction pathway of Odia volunteers. Analysis of the ECG signals using Analysis of Variance (ANOVA) showed that the features obtained from the HRV domain, time domain, and wavelet transform domain were statistically insignificant. However, non-linear classifiers such as Classification and Regression Tree (CART), Boosted Tree (BT), and Random Forest (RF) indicated the presence of important features. A classification efficiency of more than 85% was achieved when the important features obtained from the non-linear classifiers were used. The results suggest an increase in parasympathetic activity when music is heard in the mother tongue, whereas an increase in sympathetic activity is observed when a person listens to music in a language with which they are not conversant. It is also expected that there might be a difference in the cardiac conduction pathway.
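
A sketch of the classification step alone is given below: a Random Forest separating two listening conditions from HRV-style features. The feature values are synthetic stand-ins for the study's recordings, so the printed accuracy carries no physiological meaning.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 120
# columns: mean RR interval (s), SDNN (s), LF/HF ratio (all synthetic)
native = np.column_stack([rng.normal(0.85, 0.05, n),
                          rng.normal(0.060, 0.01, n),
                          rng.normal(1.2, 0.3, n)])
foreign = np.column_stack([rng.normal(0.78, 0.05, n),
                           rng.normal(0.045, 0.01, n),
                           rng.normal(2.0, 0.4, n)])
X = np.vstack([native, foreign])
y = np.array([0] * n + [1] * n)                      # 0 = mother tongue

clf = RandomForestClassifier(n_estimators=200, random_state=0)
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
print("feature importances:", clf.fit(X, y).feature_importances_.round(2))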
|
|
|
Chapter 13 Document Clustering: A Summarized Survey |
|
|
264 (18)
|
|
|
The use of the Internet is flourishing at full velocity and in all dimensions. The enormous availability of text documents in digital form (email, web pages, blog posts, news articles, e-books, and other text files) on the Internet challenges technology to retrieve the appropriate documents in response to any search query. As a result, there has been an eruption of interest in mining these vast resources and classifying them properly, which has invigorated researchers and developers to work on numerous approaches to document clustering and has drawn keen research interest to this text mining problem. The aim of this chapter is to summarize the different document clustering algorithms used by researchers.
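
One pipeline that such surveys commonly cover, TF-IDF vectors clustered with k-means, is sketched below on toy documents; it is an illustration of the family of algorithms, not a specific method from the chapter.

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "The central bank raised interest rates again this quarter.",
    "Inflation and interest rates dominate the market news.",
    "The striker scored twice in the football final.",
    "The league title was decided in the last match of the season.",
    "A new vaccine trial reported a strong immune response.",
    "Researchers published results of the clinical vaccine study.",
]

X = TfidfVectorizer(stop_words="english").fit_transform(docs)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for label, doc in zip(labels, docs):
    print(label, "-", doc)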
|
|
|
Chapter 14 Cluster Analysis with Various Algorithms for Mixed Data |
|
|
282 (36)
|
|
|
Clustering a mixed data set is a complex problem. Very useful clustering algorithms such as k-means, fuzzy c-means, and hierarchical methods were developed to extract hidden groups from numeric data. In this chapter, mixed data is converted into purely numeric data with a conversion method, and the various clustering algorithms for numeric data are applied to several well-known mixed datasets to exploit the inherent structure of the mixed data. Experimental results show how smoothly the converted mixed data yields better results with universally applicable clustering algorithms for numeric data.
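
A generic version of the conversion step is sketched below: categorical columns are one-hot encoded, numeric columns are standardized, and ordinary k-means runs on the resulting numeric matrix. This is a common conversion, not necessarily the chapter's specific method, and the data frame is a toy example.

import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

mixed = pd.DataFrame({
    "age": [25, 47, 52, 23, 60, 31],
    "income": [32000, 81000, 75000, 29000, 90000, 41000],
    "education": ["school", "masters", "masters", "school", "phd", "bachelors"],
    "owns_home": ["no", "yes", "yes", "no", "yes", "no"],
})

# numeric columns: standardize; categorical columns: one-hot encode
numeric = StandardScaler().fit_transform(mixed[["age", "income"]])
categorical = pd.get_dummies(mixed[["education", "owns_home"]]).to_numpy().astype(float)
X = np.hstack([numeric, categorical])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(mixed.assign(cluster=labels))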
|
|
Compilation of References |
|
318 (36)
About the Contributors |
|
354 (3)
Index |
|
357