Una M. Kelly, Luuk Spreeuwers and Raymond Veldhuis, Data Management and Biometrics Group, University of Twente, The Netherlands
State-of-the-art face recognition systems (FRS) are vulnerable to morphing attacks, in which two photos of different people are merged in such a way that the resulting photo resembles both people. Such a photo could be used to apply for a passport, allowing both people to travel with the same identity document. Research has so far focused on developing morphing detection methods. We suggest that it might instead be worthwhile to make face recognition systems themselves more robust to morphing attacks. We show that deep-learning-based face recognition can be improved simply by treating morphed images just like real images during training, but also that more work is needed for significant improvements. Furthermore, we test the performance of our FRS on morphs of a type not seen during training. This addresses the problem of overfitting to the type of morphs used during training, which is often overlooked in current research.
Biometrics, Morphing Attack Detection, Face Recognition, Vulnerability of Biometric Systems.
Nikola Banic1, Karlo Koscevic2, Marko Subasic2 and Sven Loncaric2, 1Gideon Brothers, 10000 Zagreb, Croatia, 2Faculty of Electrical Engineering and Computing, University of Zagreb, 10000 Zagreb, Croatia
Computational color constancy is used in almost all digital cameras to reduce the influence of scene illumination on object colors. Many of the highly accurate published illumination estimation methods use deep learning, which relies on large amounts of images with known ground-truth illuminations. Since the size of the appropriate publicly available training datasets is relatively small, data augmentation is often used, for example by simulating the appearance of a given image under another illumination. Still, there are practically no reports on the desired properties of such simulated images or on the limits of their usability. In this paper, several experiments for determining some of these properties are proposed and conducted by comparing the behavior of the simplest illumination estimation methods on images of the same scenes obtained under real illuminations and on images obtained through data augmentation. The experimental results are presented and discussed.
Color constancy, data augmentation, illumination estimation, image enhancement, white balancing.
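The augmentation discussed above, simulating a scene under another illumination, is commonly implemented as von Kries-style diagonal channel scaling, and Gray-World is one of the "simplest illumination estimation methods" of the kind the abstract mentions. Below is a minimal, self-contained sketch on a synthetic scene; the image, illuminant values and function names are made up for illustration:

```python
import numpy as np

def gray_world(img):
    """Estimate the scene illuminant direction as the mean of each RGB channel."""
    est = img.reshape(-1, 3).mean(axis=0)
    return est / np.linalg.norm(est)  # only the direction matters

def relight(img, est, target):
    """Simulate the image under `target` illumination via von Kries diagonal scaling."""
    gains = np.asarray(target) / np.asarray(est)
    return np.clip(img * gains, 0.0, 1.0)

# Toy scene: random reflectances lit by a reddish illuminant
rng = np.random.default_rng(0)
reflectance = rng.uniform(0.2, 0.8, size=(8, 8, 3))
illum = np.array([1.0, 0.7, 0.5])
img = np.clip(reflectance * illum, 0.0, 1.0)

est = gray_world(img)
neutral = relight(img, est, np.ones(3) / np.sqrt(3))  # augment toward neutral light
```

Relighting toward a neutral illuminant makes the Gray-World estimate of the augmented image itself neutral, which is the kind of consistency property such augmentation experiments can probe.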
F. Sandoval-Ibarra, Cinvestav-Guadalajara Unit, Zapopan, Mexico
This work presents an active filter design using a programmable basic cell. In contrast to traditional design techniques, the cell allows the designer to configure it to build gain stages and circuits emulating resistors. The proposed cell is a PPN current branch, where the sizing of transistors is done using Ohm's law and the node voltages specified by the designer. The cell's sizing guarantees that, when designing a circuit based on the cell, connecting several cells does not alter the operating point of the circuit. SPICE simulations show the usefulness of the cell for designing analog circuits. As an example, a single-ended Sallen-Key second-order band-pass filter is designed and analyzed. The active filter is designed in a standard 0.5µm, ±2.5V CMOS technology. The expected performance of the filter shows the relevance of circuit analysis based on a design method supported by physical principles.
Analog integrated circuits, circuit analysis, MOSFET circuits, active filters.
Yuh-Jen Chen, Department of Accounting and Information Systems, National Kaohsiung University of Science and Technology, Kaohsiung, Taiwan, ROC
Financial forecasts are regarded as vital financial information for most enterprises. They not only project the financial performance of an enterprise in a future operating period but also assist internal managers with operations, investment, and financing decision-making and external investors and creditors with understanding the operating performance of the enterprise. However, a financial forecast of an enterprise must be comprehensive to rule out unreasonable assumptions arising from local forecasts. Therefore, finding ways to assist enterprises with producing accurate comprehensive financial forecasts has become a critical issue in research on financial management. In consideration of the financial indicators of financial structure, solvency, operating ability, profitability, and cash flow as well as the non-financial indicators of firm size and corporate governance, the algorithms of multivariate adaptive regression splines (MARS) and queen genetic algorithm-support vector regression (QGA-SVR) are used in this study to create a comprehensive financial forecast of operating revenue, earnings per share, free cash flow, and net working capital to help enterprises forecast their future financial situation and offer investors and creditors a reference for investment decision-making. This study’s objectives are achieved through the following steps: (i) establishment of feature indicators for financial forecasting, (ii) development of a financial forecasting method, and (iii) demonstration of the proposed method and comparison with existing methods.
Financial Forecasting, Multivariate Adaptive Regression Splines (MARS), Queen Genetic Algorithm (QGA), Support Vector Regression (SVR).
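The paper's QGA-SVR pipeline is not reproduced here; as a hedged stand-in, the sketch below shows the general pattern of evolving a model hyperparameter with a simple genetic algorithm, using a dependency-free ridge regressor on synthetic "indicator" data in place of SVR. All data and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "financial indicator" data: 5 features -> one target
X = rng.normal(size=(120, 5))
true_w = np.array([1.5, -2.0, 0.5, 0.0, 3.0])
y = X @ true_w + rng.normal(scale=0.1, size=120)
X_tr, y_tr, X_va, y_va = X[:80], y[:80], X[80:], y[80:]

def fitness(log_lam):
    """Validation MSE of ridge regression with penalty exp(log_lam)."""
    lam = np.exp(log_lam)
    w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(5), X_tr.T @ y_tr)
    return float(np.mean((X_va @ w - y_va) ** 2))

# Minimal genetic algorithm over the single hyperparameter log_lam
pop = rng.uniform(-5, 5, size=20)
for _ in range(30):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[:10]]                               # selection: keep best half
    children = rng.choice(parents, 10) + rng.normal(scale=0.3, size=10)  # mutation
    pop = np.concatenate([parents, children])

best_lam = pop[int(np.argmin([fitness(p) for p in pop]))]
```

The same loop structure applies when the fitness function wraps an SVR's cross-validation error and the genome encodes its kernel parameters, which is the role QGA plays in the paper.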
Saranya M1 and Geetha T V2, 1Computer Science and Engineering, CEG, Anna University, India, 2Senior Professor, Computer Science and Engineering, CEG, Anna University, India
Nowadays, people around the world are infected by many new diseases. Developing or discovering a new drug for a newly discovered disease is an expensive and time-consuming process, costs that could be avoided if already existing resources were used. To identify candidates among available drugs, we need to perform text mining of a large-scale literature repository to extract the relations between chemicals, targets and diseases. Computational approaches for identifying relationships between entities in the biomedical domain are emerging as an active area of research for drug discovery, since manual curation requires considerable manpower. Currently, computational approaches for extracting biomedical relations such as drug-gene and gene-disease relationships are limited, because constructing drug-gene and gene-disease associations from unstructured biomedical documents is very hard. In this work, we propose a pattern-based bootstrapping method, a semi-supervised learning algorithm, to extract direct relations between drugs, genes and diseases from biomedical documents. These direct relationships are used to infer indirect relationships between entities such as drugs and diseases. The indirect relationships are then used to determine new candidates for drug repositioning, which in turn reduces development time and patient risk.
Text mining, drug discovery, drug repositioning, bootstrapping, machine learning.
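One iteration of pattern-based bootstrapping of the kind described can be sketched as follows; the corpus, the seed pair and the regular-expression patterns are toy stand-ins, not the paper's actual method:

```python
import re

# Tiny toy corpus; sentences mention drug-gene relations in different surface forms.
corpus = [
    "aspirin inhibits PTGS2 in most tissues",
    "imatinib inhibits ABL1 strongly",
    "gefitinib targets EGFR in lung cancer",
    "erlotinib targets EGFR as well",
]

# Seed relation instances (drug, gene) assumed known in advance.
seeds = {("aspirin", "PTGS2")}

def induce_patterns(pairs, sentences):
    """Turn each seed pair into a textual pattern: the word between drug and gene."""
    patterns = set()
    for drug, gene in pairs:
        for s in sentences:
            m = re.search(re.escape(drug) + r"\s+(\w+)\s+" + re.escape(gene), s)
            if m:
                patterns.add(m.group(1))  # e.g. "inhibits"
    return patterns

def apply_patterns(patterns, sentences):
    """Extract new (drug, gene) pairs that match any induced pattern."""
    pairs = set()
    for p in patterns:
        for s in sentences:
            m = re.search(r"(\w+)\s+" + re.escape(p) + r"\s+(\w+)", s)
            if m:
                pairs.add((m.group(1), m.group(2)))
    return pairs

# One bootstrapping iteration: seeds -> patterns -> new instances
patterns = induce_patterns(seeds, corpus)
extracted = apply_patterns(patterns, corpus)
```

In a full bootstrapping loop the newly extracted pairs are added to the seed set and the process repeats, with pattern scoring to limit semantic drift.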
Ram chandra Pal, Dr A.P.J.Abdul Kalam University Indore, India
Social media is one of the biggest forums for expressing opinions. Sentiment analysis is the procedure by which information is extracted from the opinions, appraisals and emotions of people regarding entities, events and their attributes. Sentiment analysis is also known as opinion mining. Opinion mining analyses and classifies user-generated data such as reviews, blogs, comments and articles. The main objective of opinion mining is sentiment classification, i.e. classifying an opinion as positive or negative. Earlier work is based on the star ratings of user data, yet most reviews are written in text format, which is difficult for a computer system to understand. For recent internet applications focused on detecting the polarity of text, our text classifier helps users distinguish between positive and negative reviews, assisting them with opinion extraction. This could be very useful for web applications like Twitter, where the user faces large chunks of raw data. To classify opinions, an unsupervised lexicon technique is used for sentiment classification. There are so many user-generated opinions on the web for a product that it may be difficult to know how many opinions are positive or negative, which makes it tough to make decisions about purchasing the product. A sentence-level opinion extraction is used, based on a counting approach that compares the numbers of positive and negative opinions. All customer reviews of a product need to be summarized; we do not summarize the reviews by selecting or rewriting a subset of the original sentences from the reviews.
Machine Learning Algorithms, Opinion Extraction, Web Text, customer reviews.
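The counting-based, lexicon-driven classification described above can be sketched in a few lines; the lexicons here are tiny illustrative stand-ins for a real sentiment lexicon:

```python
# Minimal unsupervised lexicon classifier: the class with more lexicon hits wins.
POSITIVE = {"good", "great", "excellent", "love", "amazing"}
NEGATIVE = {"bad", "poor", "terrible", "hate", "awful"}

def classify(review):
    """Count positive vs negative lexicon words in a review."""
    words = [w.strip(".,!?") for w in review.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return "positive" if pos > neg else "negative" if neg > pos else "neutral"
```

Applied per sentence, the same counts can be aggregated over all reviews of a product to report how many opinions are positive versus negative.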
Pamely Zantou1, Mikaël A. Mousse2 and Bethel Atohoun3, 1Institut de Formation et de Recherche en Informatique, Université d’Abomey-Calavi, Bénin, 2Institut Universitaire de Technologie, Université de Parakou, Bénin, 3Ecole Supérieur de Gestion d’Informatique et des Sciences, Bénin
Visually impaired people need help to travel safely. To make this possible, many travel aids have been designed. Among them is the cane, considered worldwide as a symbol of visual impairment. In this work, we have built an electronic white cane using sensor technology. This intelligent cane detects obstacles within 2m, on the ground or at height, and sends vocal instructions via a Bluetooth headset. We have also built an Android application to track the visually impaired person in real time, and a web application to control access to the mobile one.
Electronic white cane, Sensors, human monitoring, smart home.
Zhuang Liu and Yuanping Zhu, School of Computer and Information Engineering, Tianjin Normal University, Tianjin, China
This paper studies the license plate recognition problem under complex backgrounds and license plate tilt. Existing methods cannot solve these problems well. This paper proposes an end-to-end correction network based on deep learning. The model contains three parts: a correction network, a residual module and a sequence module, which are responsible for correcting license plate distortion, image feature extraction and license plate character recognition, respectively. In the experiments, we studied the effects of complex backgrounds such as light, rain and snow, as well as the inclination and distortion of license plates, on the accuracy of license plate recognition. The experimental part of this article uses the CCPD dataset of the Chinese Academy of Sciences, which covers almost all license plate data in natural scenes. The experimental results show that, compared with existing license plate recognition algorithms, the algorithm in this paper achieves an accuracy improvement on the test set, averaging about 5% in complex scenarios.
Correction Network, Convolutional Neural Network, License Plate Recognition, Smart Transportation.
Jianyong XUE1, Olivier L. Georgeon1,2 and Salima Hassas1, 1LIRIS CNRS UMR5205, Université Claude Bernard Lyon 1, Lyon, France, 2LBG UMRS 449, Université Catholique de Lyon, Lyon, France
During the initial phase of cognitive development, infants exhibit amazing abilities to generate novel behaviours in unfamiliar situations and to explore actively, learning even while lacking extrinsic rewards from the environment. These abilities set them apart from even the most advanced autonomous robots. This work seeks to contribute to understanding and replicating some of these abilities. We propose the Bottom-up hiErarchical sequential Learning algorithm with Constructivist pAradigm (BEL-CA) to design agents capable of learning autonomously and continuously through interaction. The algorithm makes no assumption about the semantics of input and output data, nor does it rely upon a model of the world given a priori in the form of a set of states and transitions. Besides, we propose a toolkit called GAIT (Generating and Analysing Interaction Traces) to analyse the learning process at run time. We use GAIT to report and explain the detailed learning process and the structured behaviours that the agent learns at each decision step. We report an experiment in which the agent learned to interact successfully with its environment and to avoid unfavourable interactions using regularities discovered through interaction.
cognitive development, constructivist learning, hierarchical sequential learning, self-adaptation.
A. V. H Sai Prasad1* and Dr. G. V. S. Rajkumar2, 1Research Scholar, Department of Computer Science and Engineering, GITAM Institute of Technology, 2Professor, Department of Computer Science and Engineering, GITAM Institute of Technology, GITAM (Deemed to be University), Visakhapatnam - 530045, India
On the internet, a number of services have become flexible and cost-effective because of cloud computing. Security is the major hitch in cloud computing, and many researchers have studied and discussed the problems relating to this issue. Various techniques are required to ensure the integrity of data, which is an integral part of cloud storage adoption. In this work, five different trust attributes are collected from a third party and its trust model, and the integrity of the data is assured through the servers. For optimal scheduling, the Ant Lion Optimizer (ALO) algorithm is proposed and contrasted with Particle Swarm Optimization (PSO).
Cloud computing, data integrity, third party trust model, Particle Swarm Optimization (PSO) and Ant Lion Optimizer (ALO) Algorithm.
Ruben Ventura, Independent Security Researcher
This paper presents new and evolved methods to perform Blind SQL Injection attacks. These are much faster than the current publicly available tools and techniques due to various reasons. Implementing these methods within carefully crafted code has resulted in the development of the fastest tools in the world to extract information from a database through Blind SQL Injection vulnerabilities. The nature of such attack vectors will be explained in this paper, including all of their intrinsic details.
Web Application Security, Blind SQL Injection, Attack Optimization, New Exploitation Methods.
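One well-known optimization behind fast blind SQL injection tools (not necessarily this paper's exact technique) is binary-searching each character code instead of enumerating candidate values, cutting up to 256 requests per character down to about 8. The sketch below uses a local mock oracle standing in for a vulnerable page's true/false response:

```python
# Mock of a boolean-blind oracle: in a real attack, oracle(pos, t) would send a
# request containing e.g. 'AND ASCII(SUBSTRING(secret,pos,1)) > t' and inspect
# the page. Here it just answers locally so the sketch is self-contained.
SECRET = "s3cr3t"
requests_made = 0

def oracle(pos, threshold):
    global requests_made
    requests_made += 1
    return ord(SECRET[pos]) > threshold

def extract_char(pos):
    """Binary-search the character code with ~log2(128) boolean 'requests'."""
    lo, hi = 0, 127  # printable ASCII search space
    while lo < hi:
        mid = (lo + hi) // 2
        if oracle(pos, mid):
            lo = mid + 1
        else:
            hi = mid
    return chr(lo)

recovered = "".join(extract_char(i) for i in range(len(SECRET)))
```

Each character costs exactly seven oracle queries here, versus up to 128 with linear probing; the tools the abstract refers to layer further optimizations on top of this idea.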
Ravi Yadav, Rajkumar Yadav and Satender Bal Gupta, Indira Gandhi University, Meerpur, Rewari, India
Stability and instability are key characteristics of every sorting algorithm. A sorting algorithm can be either stable or unstable based on certain conditions. Stable sorts mainly include Bubble sort, Insertion sort and Merge sort, while unstable sorts include Heap sort, Selection sort and Quick sort. Various studies have compared these sorting algorithms on the basis of platform-dependent factors like space complexity, but very few researchers have compared them on the basis of platform-independent factors. This study compares these stable and unstable algorithms on the basis of factors like in-place operation, data sensitivity and best-, average- and worst-case time complexity, as well as platform-dependent factors. The code was implemented in MATLAB, and the timeit() function was used to measure elapsed time. If one wants to maintain a first-come-first-served order when sorting data, a stable sort is used. The outcome of the research shows that, in terms of elapsed time, insertion sort is the fastest stable sort when the input is small, but as the input size increases, merge sort becomes the fastest. Among the unstable algorithms, selection sort is the fastest for small inputs, and heap sort becomes the fastest as the input size increases.
Stable Sort, Unstable sort, Elapsed Time.
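The stability property being compared can be demonstrated directly: sort records with duplicate keys using a stable sort and an unstable one, and check whether the original (first-come-first-served) order of equal keys survives. A minimal illustration, in Python rather than the study's MATLAB:

```python
# Records: (grade, arrival_order). A stable sort by grade preserves arrival
# order among equal grades; the selection sort below does not.
records = [("B", 1), ("A", 2), ("B", 3), ("A", 4)]

def selection_sort(items, key):
    """Classic selection sort; swapping non-adjacent items makes it unstable."""
    a = list(items)
    for i in range(len(a)):
        m = min(range(i, len(a)), key=lambda j: key(a[j]))
        a[i], a[m] = a[m], a[i]
    return a

stable = sorted(records, key=lambda r: r[0])          # Python's Timsort is stable
unstable = selection_sort(records, key=lambda r: r[0])
```

Here the stable result keeps ("B", 1) before ("B", 3), while selection sort's long-distance swaps reverse the two equal-keyed records.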
Hussein Aly, Abdelmonem Mohamed, Abdelkarim Erradi and Ahmed Bensaid, Department of Computer Science and Engineering, College of Engineering, Qatar University, Doha, Qatar
Simulation of Urban Mobility (SUMO) lacks the integration of public transport routes, stop locations, and schedules published by public transport operators in the GTFS (General Transit Feed Specification) data format. Integrating such data enables simulating public transport mobility using realistic scenarios. However, SUMO has no feature to load GTFS data directly. This paper presents and evaluates a tool for integrating SUMO with GTFS data to ease simulating travel journeys on public transport. The tool was evaluated on a real case study of Doha, Qatar, to illustrate its effectiveness.
SUMO, GTFS, Traffic simulation, Doha public transport.
Dunbo Cai, Zhiguo Huang and Ling Qian, Department of Innovation Center, China Mobile (Suzhou) Software, Suzhou, China
Named entity recognition (NER) in natural language processing (NLP) considers the problem of identifying a sequence of words in a sentence that mentions a predefined type of object (entity), e.g., a person, organization, location, or time. NER methods are key to extracting knowledge from texts, as entities are fundamental for attaching entity properties or entity relations. However, NER for Chinese texts is trickier because some auxiliary words may be dropped in a sentence, a common phenomenon in Chinese writing for brevity. A usually dropped Chinese word is '的' (which often functions like the word 'of' in English). One obvious effect of this kind of omission is the difficulty it brings to identifying the sub-entities (or nested named entities) contained in a named entity. Previous work has considered the effect of recovering dropped pronouns in Chinese translation tasks. Here we propose a rule-based method to recover the auxiliary word '的' in Chinese text, and study the effect of this recovery on the performance of FLAT, a state-of-the-art Chinese NER method. Experimental results on the Weibo-NER and MSRA-NER datasets show that our method improves on FLAT. This study thus highlights the promise of recovering more types of dropped words for the Chinese NER problem.
Natural Language Processing, Named Entity Recognition, Deep Learning, Dropped Words Recovery.
Lahouaoui Lalaoui1 and Fouad Dib2, 1,2Department of Electronics, Laboratory LGE, University of M’sila, 28000 Ichbilia, M’sila, Algeria
In this paper, we present a comparative study of image segmentation methods. Several existing techniques are used for image segmentation, each with its own importance. They can be approached from two basic standpoints, region-based or edge-based, and every technique can be applied to different images to perform the required segmentation. The segmentation of images of different modalities is an important step in forming realistic tissue models, and segmentation is a core image processing operation, particularly in the medical field. Diagnostic imaging is an invaluable tool in medicine today: magnetic resonance imaging (MRI), computed tomography (CT), digital mammography, and other imaging modalities provide an effective means for noninvasively mapping the anatomy of a subject, and the segmentation of medical images is of paramount importance in the diagnosis and detection of various pathologies. Current segmentation approaches are reviewed with an emphasis on revealing their advantages and disadvantages for medical imaging applications. We present a comparative study of region-based segmentation methods, namely Fuzzy C-Means, K-Means, Mean shift and EM, where the results obtained are evaluated by three criteria: IntraInter_LN, Intra_LN and CritAtt. We used a medical image database including X-ray and ultrasound images. The diversity of segmentation methods offers several ways to segment an image; overall, the EM method gives the best results.
Image Segmentation, Modality Image, Criteria, Evaluation.
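Of the region-based methods compared, K-Means is the easiest to sketch. The fragment below clusters pixel intensities of a synthetic two-region "scan"; the deterministic quantile initialisation is an implementation choice for this sketch, not part of the paper:

```python
import numpy as np

def kmeans_segment(img, k=2, iters=10):
    """Region-based segmentation: cluster pixel intensities with k-means."""
    pixels = img.reshape(-1, 1).astype(float)
    # Spread initial centers over the intensity range (deterministic init)
    centers = np.quantile(pixels, np.linspace(0, 1, k)).reshape(-1, 1)
    for _ in range(iters):
        labels = np.argmin(np.abs(pixels - centers.T), axis=1)  # nearest center
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean()         # recompute center
    return labels.reshape(img.shape)

# Synthetic "scan": dark background with a bright square region, plus noise
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
img += np.random.default_rng(1).normal(scale=0.05, size=img.shape)
seg = kmeans_segment(img, k=2)
```

Fuzzy C-Means and EM follow the same assign-then-update loop but replace the hard nearest-center assignment with soft memberships or posterior probabilities.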
Björn Friedrich, Enno-Edzard Steen, Sebastian Fudickar and Andreas Hein, Department of Health Services Research, Carl von Ossietzky University, Oldenburg, Germany
A continuous monitoring of the physical strength and mobility of elderly people is important for maintaining their health and treating diseases at an early stage. However, frequent screenings by physicians exceed the available logistic capacities. An alternative approach is the automatic and unobtrusive collection of functional measures by ambient sensors. In the current publication, we show the correlation between data from ambient motion sensors and the well-established mobility assessments Short Physical Performance Battery (SPPB) and Tinetti. We use the average number of motion sensor events for correlation with the assessment scores. The evaluation on a real-world dataset shows a moderate to strong correlation with the scores of standardised geriatric physical assessments.
ubiquitous computing, biomedical informatics, health, correlation, piecewise linear approximation.
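Correlating an activity measure with assessment scores, as described above, amounts to computing a correlation coefficient. A minimal sketch with made-up event counts and scores (purely hypothetical numbers, not the study's data):

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical data: average daily motion-sensor events per participant and
# their (made-up) SPPB scores; more activity loosely tracks higher scores.
events = [120, 180, 95, 210, 160, 80]
sppb = [7, 10, 6, 12, 9, 5]

r = pearson(events, sppb)
```

Values of r around 0.4 to 0.7 would correspond to the "moderate to strong" correlations the abstract reports; the toy numbers above are deliberately well aligned.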
Cao Xiaopeng and Qu Hongyan, School of Computer Science and Technology, Xi’an University of Posts and Telecommunications, Xi’an, China
Massive network traffic and high-dimensional features affect detection performance. In order to improve the efficiency and performance of detection, a whale-optimized sparse autoencoder model (WO-SAE) is proposed. Firstly, a sparse autoencoder performs unsupervised training on high-dimensional raw data and extracts low-dimensional features of network traffic. Secondly, the key parameters of the sparse autoencoder are optimized automatically by the whale optimization algorithm to achieve better feature extraction ability. Finally, a gated recurrent unit is used to classify the time series data. The experimental results show that the proposed model is superior to existing detection algorithms in accuracy, precision, and recall, reaching an accuracy of 98.69%. The WO-SAE model is a novel approach that reduces the user's reliance on deep learning expertise.
Traffic anomaly detection, Feature extraction, Sparse autoencoder, Whale optimization algorithm.
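The whale optimization algorithm used to tune the autoencoder's parameters can itself be sketched compactly. The version below is a simplified WOA minimising a toy sphere function, not the paper's WO-SAE setup; in WO-SAE the fitness function would instead evaluate the sparse autoencoder with a candidate parameter vector:

```python
import numpy as np

def woa(f, dim, n_whales=20, iters=200, lo=-5.0, hi=5.0, seed=0):
    """Minimal Whale Optimization Algorithm (WOA) minimising f over [lo, hi]^dim."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, size=(n_whales, dim))
    best = min(X, key=f).copy()
    for t in range(iters):
        a = 2.0 * (1 - t / iters)                   # control parameter: 2 -> 0
        for i in range(n_whales):
            A = 2 * a * rng.random() - a            # |A| shrinks over time
            C = 2 * rng.random(dim)
            if rng.random() < 0.5:
                if abs(A) < 1:                      # exploitation: encircle the best whale
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                               # exploration: move around a random whale
                    ref = X[rng.integers(n_whales)]
                    X[i] = ref - A * np.abs(C * ref - X[i])
            else:                                   # bubble-net spiral around the best
                l = rng.uniform(-1, 1)
                X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lo, hi)
            if f(X[i]) < f(best):
                best = X[i].copy()
    return best

def sphere(x):
    return float(np.sum(np.asarray(x) ** 2))

best = woa(sphere, dim=3)
```

The shrinking parameter `a` shifts the population from exploration to exploitation, which is what lets WOA tune continuous hyperparameters without gradient information.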
Cao Xiaopeng and Shi Linkai, School of Computer Science and Technology, Xi’an University of Posts and Telecommunications, Xi’an, China
The Practical Byzantine Fault Tolerance algorithm cannot add nodes dynamically, which limits its practical application. In order to add nodes dynamically, the Dynamic Practical Byzantine Fault Tolerance algorithm (DPBFT) is proposed. Firstly, a new node sends request information to the other nodes in the network, and the nodes in the network decide on its identity and request. Then the nodes in the network connect back to the new node and send the block information of the current network, and the new node updates its information. Finally, the new node participates in the next round of consensus, the view is changed, and the master node is selected. This paper abstracts the decisions of the nodes into an undirected connected graph, and the eventual consistency of the graph is used to prove that the proposed algorithm can adapt to the network dynamically. Compared with the PBFT algorithm, DPBFT has better fault tolerance and lower network bandwidth usage.
Practical Byzantine Fault Tolerance, Blockchain, Consensus Algorithm, Consistency Analysis.
Morio Yamauchi1, Kazuhisa Naakano2, Yoshiya Tanaka2 and Keiichi Horio1, 1Department of Human Intelligence Systems, Graduate School of Life Science and Systems Engineering, Kyushu Institute of Technology, Kitakyushu, Japan, 2The First Department of Internal Medicine, University of Occupational and Environmental Health, Kitakyushu, Japan
In this article, we implemented a regression model and conducted experiments for predicting disease activity using data from 1929 rheumatoid arthritis patients, to assist in the selection of biologics for rheumatoid arthritis. In modelling, missing values in the data were imputed by three different methods: mean value, self-organizing map, and random value. Experimental results showed that the prediction error of the regression model was large regardless of the imputation method, making it difficult to predict the prognosis of rheumatoid arthritis patients.
Rheumatoid Arthritis, Gaussian Process Regression, Self-Organizing Map.
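Two of the three imputation strategies (mean value and random value) are easy to sketch and compare on synthetic data; the "patient table" below is randomly generated, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy patient table (rows: patients, cols: clinical variables), with holes
X = rng.normal(loc=5.0, scale=1.0, size=(200, 4))
mask = rng.random(X.shape) < 0.2           # 20% of entries go missing
X_missing = np.where(mask, np.nan, X)

def impute_mean(M):
    """Replace each NaN with its column mean."""
    out = M.copy()
    col_means = np.nanmean(M, axis=0)
    idx = np.where(np.isnan(out))
    out[idx] = np.take(col_means, idx[1])
    return out

def impute_random(M, rng):
    """Replace each NaN with a random observed value from the same column."""
    out = M.copy()
    for j in range(M.shape[1]):
        observed = M[~np.isnan(M[:, j]), j]
        miss = np.isnan(out[:, j])
        out[miss, j] = rng.choice(observed, miss.sum())
    return out

rmse = lambda A: float(np.sqrt(np.mean((A[mask] - X[mask]) ** 2)))
err_mean = rmse(impute_mean(X_missing))
err_random = rmse(impute_random(X_missing, rng))
```

On data like this, mean imputation has roughly the column standard deviation as its error while random imputation is about a factor sqrt(2) worse, which illustrates why the choice of completion method can matter even when, as the article finds, no method rescues the downstream model.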
Alla M. Eid, Amgad A. Salama and Hassan M. Elkamchouchi, Electrical Engineering Department, Faculty of Engineering, Alexandria University, Egypt
In this paper, a novel [1x2] traveling-wave slotted spiral antenna array based on substrate integrated waveguide (SIW) at Ku band is presented. The proposed array is characterized by a low profile, wide axial ratio bandwidth (ARBW), and wide return loss bandwidth (RLBW). To obtain wide RLBW and wide ARBW as well as maximum directivity, a two-arm Archimedean spiral shape has been proposed and investigated. The fabricated two-arm Archimedean spiral elements achieved an RLBW of more than 18% and a boresight ARBW of 12%. A [1x2] array has been designed and fabricated, with good agreement between the measured and simulated results. The measurements achieved about 28% RLBW and 9.4% ARBW, respectively. Moreover, the directivity of the proposed antenna array is 9 dBi.
Slotted antenna, spiral antenna, SIW antenna, two-arm Archimedean antenna.
Iván Humberto Fuentes Chab1, Damián Uriel Rosado Castellanos1, Olivia Graciela Fragoso Diaz2 and Ivette Stephany Pacheco Farfán1, 1Department of Computer Systems Engineering, Higher Technological Institute of Escárcega, Campeche, México, 2Computer Science Department, National Center for Technological Research and Development (CENIDET), Cuernavaca, México
A serious videogame is a practical and simple way to get students to learn about a complex subject, such as performing integrals or applying first aid, or even to get children to learn to read and write in their native language or another language. Therefore, to develop a serious videogame, one must have a guide containing the basic and necessary elements of the software components to be considered. This document presents a quality model to evaluate playability, taking the attributes of usability and understandability at the software component level. This model can serve as a set of parameters to measure the software product quality of serious videogames before and during development, providing a baseline with the essential elements that a serious videogame must have so that players reach the desired objective of learning while playing.
Quality Model, Serious Videogames, Playability Metrics.
Mr. Mahabaleshwar Kabbur and Dr. V. Arul Kumar, School of Computer Science & Applications, REVA University, Bengaluru-64, Karnataka, India
Vehicular ad-hoc networks (VANETs) have gained huge attention from the research community due to their ability to provide autonomous vehicular communication. Efficient communication is a prime concern in these networks, and several techniques have been introduced to improve the overall communication of VANETs. Security and privacy are also prime aspects of VANETs, and maintaining data security and privacy in highly dynamic VANETs is a challenging task. Several techniques based on cryptography and key exchange have been introduced recently; however, these techniques provide solutions to only a limited set of security threats. Hence, in this work we introduce a novel approach for key management and distribution in VANETs to provide security to the network and its components. We then incorporate a cryptographic approach to secure the data packets. The proposed approach is therefore named Secure Group Key Management and Cryptography (SGKC). The experimental study shows significant improvements in network performance.
Network Protocols, Wireless Network, Mobile Network, Virus, Security, Attacks.
Jianwei Li, Qingqing Gangstar and Xiaoming Wang, Department of Information Science and Technology, Jinan University, Guangzhou, China
Searchable encryption (SE) allows a client to outsource personal data to an untrusted server while protecting the data privacy, and is widely used by corporations and individuals. Recently, many works have shown that forward privacy is a fundamental property for a secure SE cryptosystem, and several forward-secure SE schemes have been proposed. However, most forward-secure schemes focus on single-keyword queries, which limits their wide application in cloud computing. In this paper, we propose an efficient forward-secure searchable encryption scheme supporting multi-keyword queries. Our scheme involves two new storage structures: a temporary map and a search tree. Specifically, when the client uploads multiple files with the same keyword, our scheme uses a single encryption key in place of the multiple encryption keys of existing schemes, which improves query efficiency. Moreover, our scheme overcomes the shortcoming of existing schemes that query complexity increases linearly with the number of updated files. We prove our scheme is secure with forward privacy under the random oracle model. The experimental results show that our scheme is more efficient than existing schemes.
Cloud computing, Searchable encryption, Data outsourcing, Forward secure, Multi-keyword.
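The multi-keyword scheme itself is not reproduced here, but the forward-privacy property it builds on can be illustrated: each update is stored under a fresh random key chained backwards, so a search token issued earlier cannot reach later additions. A toy single-keyword sketch in which all structures, names and encodings are illustrative, not the paper's construction:

```python
import hashlib
import os

def H(*parts):
    return hashlib.sha256(b"|".join(parts)).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

server_index = {}              # location -> ciphertext (the untrusted server's view)

class Client:
    """Forward-private SSE sketch: fresh per-update keys, chained backwards."""
    def __init__(self):
        self.latest = {}       # keyword -> latest chain key (client state)

    def add(self, keyword, file_id):
        prev = self.latest.get(keyword, b"\x00" * 32)    # sentinel = all zeros
        k = os.urandom(32)                               # fresh key: forward privacy
        pad = H(k, b"pad") + H(k, b"pad2")               # one-time pad from k
        # Entry hides (file_id, previous chain key) at a pseudorandom location.
        server_index[H(k, b"loc")] = xor(file_id.ljust(32, b" ") + prev, pad)
        self.latest[keyword] = k

    def search_token(self, keyword):
        return self.latest.get(keyword)

def server_search(token):
    """Walk the chain backwards from the token; stop at the sentinel."""
    results, k = [], token
    while k and k != b"\x00" * 32:
        loc = H(k, b"loc")
        if loc not in server_index:
            break
        plain = xor(server_index[loc], H(k, b"pad") + H(k, b"pad2"))
        results.append(plain[:32].rstrip(b" "))
        k = plain[32:]                                   # previous chain key
    return results

client = Client()
client.add("diabetes", b"report_07")
old_token = client.search_token("diabetes")
client.add("diabetes", b"report_12")   # added after the old token was issued
```

Because the new entry's key is fresh and random, `old_token` can still retrieve the old file but reveals nothing about the later addition, which is the forward-privacy guarantee the abstract refers to.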
Khloud Almutairi1, Ahmed Ismail2, Samir Abdlerazek1 and Hazem Elbakry1, 1Information Systems Department, Faculty of Computers and Information, Mansoura University, Egypt, 2Research and Development Department, GlaxyTech, Munich, Germany
Healthcare is an important aspect of human lives and a necessity to be provided to all members of society. With the advancement of technology, mobile applications are omnipresent. The presence of mobile applications has increased manyfold and is common on smartphones, tablets, and PDAs. Healthcare solutions based on IoT, big data, and machine learning are among the hottest research topics nowadays. In the healthcare area, there are many available solutions based on data science, big data analysis, IoT connections, machine learning, and data mining techniques. Machine learning techniques such as SVM, DTW, and others are presented. This paper introduces a background of recent healthcare solutions using the classification method. Some of them use one technique, while others combine two techniques.
Wearables, Health data, MIoT, smart devices, E-health.
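Of the techniques named above, DTW (dynamic time warping) is compact enough to sketch; it measures the distance between two time series, such as sensor traces from wearables, while tolerating local stretching in time:

```python
# Classic O(len(a) * len(b)) dynamic-programming DTW distance.
def dtw(a, b):
    inf = float("inf")
    n, m = len(a), len(b)
    # D[i][j] = cost of best alignment of a[:i] with b[:j]
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # a stretches
                                 D[i][j - 1],      # b stretches
                                 D[i - 1][j - 1])  # one-to-one match
    return D[n][m]
```

Unlike Euclidean distance, DTW scores a signal and its slightly time-stretched copy as identical, which is why it pairs well with nearest-neighbour classification of health time series.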