Una M. Kelly, Luuk Spreeuwers and Raymond Veldhuis, Data Management and Biometrics Group, University of Twente, The Netherlands
State-of-the-art face recognition systems (FRS) are vulnerable to morphing attacks, in which two photos of different people are merged in such a way that the resulting photo resembles both people. Such a photo could be used to apply for a passport, allowing both people to travel with the same identity document. Research has so far focused on developing morphing detection methods. We suggest that it might instead be worthwhile to make face recognition systems themselves more robust to morphing attacks. We show that deep-learning-based face recognition can be improved simply by treating morphed images just like real images during training, but also that more work is needed to achieve significant improvements. Furthermore, we test the performance of our FRS on morphs of a type not seen during training. This addresses the problem of overfitting to the type of morphs used during training, which is often overlooked in current research.
Biometrics, Morphing Attack Detection, Face Recognition, Vulnerability of Biometric Systems.
Nikola Banic1, Karlo Koscevic2, Marko Subasic2 and Sven Loncaric2, 1Gideon Brothers, 10000 Zagreb, Croatia, 2Faculty of Electrical Engineering and Computing, University of Zagreb, 10000 Zagreb, Croatia
Computational color constancy is used in almost all digital cameras to reduce the influence of scene illumination on object colors. Many of the highly accurate published illumination estimation methods use deep learning, which relies on large amounts of images with known ground-truth illuminations. Since the size of the appropriate publicly available training datasets is relatively small, data augmentation is often used as well, for example by simulating the appearance of a given image under another illumination. Still, there are practically no reports on any desired properties of such simulated images or on the limits of their usability. In this paper, several experiments for determining some of these properties are proposed and conducted by comparing the behavior of the simplest illumination estimation methods on images of the same scenes obtained under real illuminations and on images obtained through data augmentation. The experimental results are presented and discussed.
Color constancy, data augmentation, illumination estimation, image enhancement, white balancing.
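The relighting augmentation discussed in the abstract above is commonly modelled as a von Kries-style per-channel scaling; a minimal sketch under that assumption, paired with the gray-world estimator as the "simplest illumination estimation method" (function names and values are illustrative, not the paper's code):

```python
# Hypothetical sketch: simulate an image under another illumination via
# the von Kries diagonal model (per-channel gain), then estimate the
# illumination with the gray-world assumption.

def relight(pixels, src_illum, dst_illum):
    """Map linear-RGB pixels from src_illum to dst_illum by per-channel scaling."""
    gains = [d / s for d, s in zip(dst_illum, src_illum)]
    return [[min(1.0, c * g) for c, g in zip(px, gains)] for px in pixels]

def gray_world_estimate(pixels):
    """Simplest illumination estimator: the mean of each channel."""
    n = len(pixels)
    return [sum(px[i] for px in pixels) / n for i in range(3)]
```

Comparing the estimator's behaviour on truly relit scenes versus such augmented images is exactly the kind of experiment the abstract describes.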
F. Sandoval-Ibarra, Cinvestav-Guadalajara Unit, Zapopan, Mexico
This work presents an active filter design using a programmable basic cell. In contrast to traditional design techniques, the cell allows the designer to configure it to build gain stages and circuits that emulate resistors. The proposed cell is a PPN current branch, in which the transistors are sized using Ohm's law and the node voltages specified by the designer. The cell's sizing guarantees that, when a circuit is designed from several such cells, connecting them does not alter the operating point of the circuit. SPICE simulations show the usefulness of the cell for designing analog circuits. As an example, a single-ended Sallen-Key 2nd-order band-pass filter is designed and analyzed. The active filter is designed in a standard 0.5µm, ±2.5V CMOS technology. The expected performance of the filter shows the relevance of circuit analysis based on a design method supported by physical principles.
Analog integrated circuits, circuit analysis, MOSFET circuits, active filters.
Xiang-Song Zhang1, Wei-Xin Gao1, Shi-Ling Zhu2, 1College of Electronic Engineering, Xi'an Shiyou University, Xi’an, China, 2Communication and Information Engineering, Xi’an University of Post and Telecommunications, Xi’an, China
In order to eliminate mixed salt-and-pepper and Gaussian noise in X-ray weld images, the extreme-value characteristics of salt-and-pepper noise are used to separate the mixed noise, and a non-local means filtering algorithm is used to denoise it. Because the smoothness of the exponentially weighted kernel function is too large, it tends to blur image details, so a cosine-coefficient-based weighted Gaussian kernel function is adopted, yielding an improved non-local means image denoising algorithm. The experimental results show that the new algorithm reduces the noise while retaining the details of the original image, and the peak signal-to-noise ratio is increased by 1.5 dB. An adaptive salt-and-pepper noise elimination algorithm is also proposed, which can automatically adjust the filtering window and estimate the noise probability. Firstly, a median filter is applied to the image, and the filtering results are compared with the pre-filtering results to identify the noise points. Then the weighted average of the middle three groups of data in each filtering window is used to estimate the image noise probability. Before filtering, the obvious noise points are removed by a threshold method, and the central pixel is then estimated using the reciprocal of the squared distance from the center pixel of the window. Finally, according to Takagi-Sugeno (T-S) fuzzy rules, the output estimates of the different models are fused using the noise probability. Experimental results show that the algorithm is capable of automatic noise estimation and adaptive window adjustment. After filtering, the standard mean square deviation is reduced by more than 20%, and the speed is increased more than twofold. For enhancement, a nonlinear image enhancement method is proposed, which adjusts its parameters adaptively and enhances the weld area automatically rather than the background area, achieving the best subjective visual effect. Compared with the traditional method, the enhancement effect is better and more in line with the needs of the industrial field.
X-ray image, Mixed noise, Noise separation, noise reduction, image enhancement.
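The separation step in the abstract above rests on salt-and-pepper noise taking extreme values; a toy 1-D sketch of that idea (flag extremes, replace them with the median of the non-extreme neighbours), purely illustrative and not the paper's 2-D algorithm:

```python
# Illustrative sketch: salt-and-pepper candidates are the extreme values
# (0 or 255); each is replaced by the median of the non-extreme samples
# in a small window, leaving Gaussian-noise samples for a later filter.

def despeckle(signal, lo=0, hi=255, radius=1):
    out = list(signal)
    for i, v in enumerate(signal):
        if v in (lo, hi):  # extreme value: salt or pepper candidate
            window = signal[max(0, i - radius): i + radius + 1]
            clean = sorted(x for x in window if x not in (lo, hi))
            if clean:
                out[i] = clean[len(clean) // 2]
    return out
```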
Yuh-Jen Chen, Department of Accounting and Information Systems, National Kaohsiung University of Science and Technology, Kaohsiung, Taiwan, ROC
Financial forecasts are regarded as vital financial information for most enterprises. They not only project the financial performance of an enterprise in a future operating period but also assist internal managers with operations, investment, and financing decision-making and external investors and creditors with understanding the operating performance of the enterprise. However, a financial forecast of an enterprise must be comprehensive to rule out unreasonable assumptions arising from local forecasts. Therefore, finding ways to assist enterprises with producing accurate comprehensive financial forecasts has become a critical issue in research on financial management. In consideration of the financial indicators of financial structure, solvency, operating ability, profitability, and cash flow as well as the non-financial indicators of firm size and corporate governance, the algorithms of multivariate adaptive regression splines (MARS) and queen genetic algorithm-support vector regression (QGA-SVR) are used in this study to create a comprehensive financial forecast of operating revenue, earnings per share, free cash flow, and net working capital to help enterprises forecast their future financial situation and offer investors and creditors a reference for investment decision-making. This study’s objectives are achieved through the following steps: (i) establishment of feature indicators for financial forecasting, (ii) development of a financial forecasting method, and (iii) demonstration of the proposed method and comparison with existing methods.
Financial Forecasting, Multivariate Adaptive Regression Splines (MARS), Queen Genetic Algorithm (QGA), Support Vector Regression (SVR).
Saranya M1 and Geetha T V2, 1Computer Science and Engineering, CEG, Anna University, India, 2Senior Professor, Computer Science and Engineering, CEG, Anna University, India
Nowadays people around the world are infected by many new diseases. Developing or discovering a new drug for a newly discovered disease is an expensive and time-consuming process, and these costs could be avoided if already existing resources could be used. To identify candidates among available drugs, we need to perform text mining of a large-scale literature repository to extract the relations between chemicals, targets and diseases. Computational approaches for identifying relationships between entities in the biomedical domain are emerging as an active area of research for drug discovery, since manual extraction requires too much manpower. Currently, computational approaches for extracting biomedical relations such as drug-gene and gene-disease relationships are limited, because constructing drug-gene and gene-disease associations from unstructured biomedical documents is very hard. In this work, we propose a pattern-based bootstrapping method, a semi-supervised learning algorithm, to extract direct relations between drugs, genes and diseases from biomedical documents. These direct relationships are used to infer indirect relationships between entities such as drugs and diseases. These indirect relationships are then used to determine new candidates for drug repositioning, which in turn reduces the time and the patient's risk.
Text mining, drug discovery, drug repositioning, bootstrapping, machine learning.
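Pattern-based bootstrapping, as named in the abstract above, alternates between learning surface patterns from known pairs and harvesting new pairs with those patterns; a toy sketch in which the sentences, seed pair, and pattern format are all invented for illustration:

```python
# Toy bootstrapping loop: seed pairs -> surface patterns -> new pairs.
# Real systems use parsed contexts and confidence scoring; this sketch
# uses literal sentence templates only.
import re

def bootstrap(sentences, seeds, rounds=2):
    pairs, patterns = set(seeds), set()
    for _ in range(rounds):
        # 1) learn surface patterns from sentences containing known pairs
        for s in sentences:
            for drug, gene in pairs:
                if drug in s and gene in s:
                    patterns.add(s.replace(drug, "{X}").replace(gene, "{Y}"))
        # 2) match patterns against all sentences to harvest new pairs
        for s in sentences:
            for p in patterns:
                rx = re.escape(p).replace(r"\{X\}", "(\\w+)").replace(r"\{Y\}", "(\\w+)")
                m = re.fullmatch(rx, s)
                if m:
                    pairs.add((m.group(1), m.group(2)))
    return pairs
```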
Ram Chandra Pal, Dr. A.P.J. Abdul Kalam University, Indore, India
Social media is one of the biggest forums for expressing opinions. Sentiment analysis, also known as opinion mining, is the procedure by which information is extracted from people's opinions, appraisals and emotions regarding entities, events and their attributes. Opinion mining analyses and classifies user-generated data such as reviews, blogs, comments and articles. Its main objective is sentiment classification, i.e. classifying an opinion into positive or negative classes. Earlier work is based on the star ratings of user data, but most reviews are written in text format, which is difficult for a computer system to understand. For recent internet applications that focus on detecting the polarity of text, our text classifier helps users distinguish between positive and negative reviews, thus assisting them with opinion extraction. This could be very useful for web applications like Twitter, where the user has to face large chunks of raw data. To classify opinions, an unsupervised lexicon technique is used for sentiment classification. With so many user-generated opinions on the web for a product, it may be difficult to know how many opinions are positive or negative, which makes decisions about product purchasing hard. Sentence-level opinion extraction is used, performed by a counting-based approach that compares opinions by counts. All customer reviews of a product need to be summarised; however, we do not summarise the reviews by selecting or rewriting a subset of the original sentences from the reviews.
Machine Learning Algorithms, Opinion Extraction, Web Text, customer reviews.
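The counting-based lexicon classification described above can be sketched in a few lines; the word lists here are illustrative stand-ins for a real sentiment lexicon:

```python
# Minimal unsupervised lexicon classifier: count positive and negative
# lexicon hits in a review and classify by the sign of the difference.
POSITIVE = {"good", "great", "excellent", "love", "best"}
NEGATIVE = {"bad", "poor", "terrible", "hate", "worst"}

def classify(review):
    words = review.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"
```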
Pamely Zantou1, Mikaël A. Mousse2 and Bethel Atohoun3, 1Institut de Formation et de Recherche en Informatique, Université d’Abomey-Calavi, Bénin, 2Institut Universitaire de Technologie, Université de Parakou, Bénin, 3Ecole Supérieur de Gestion d’Informatique et des Sciences, Bénin
Visually impaired people need help to travel safely, and many travel aids have been designed to make this possible. Among them is the cane, which is considered a symbol of visual impairment throughout the world. In this work, we have built an electronic white cane using sensor technology. This intelligent cane detects obstacles within 2 m, on the ground or at height, and sends vocal instructions via a Bluetooth headset. We have also built an Android application to track the visually impaired person in real time and a web application to control access to the mobile one.
Electronic white cane, Sensors, human monitoring, smart home.
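The 2 m detection logic above can be sketched assuming an ultrasonic sensor that reports the echo round-trip time; the threshold comes from the abstract, while the function names and message format are illustrative:

```python
# Hypothetical obstacle-detection sketch for an ultrasonic sensor:
# distance is half the round-trip distance travelled by the echo.
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def echo_to_distance(round_trip_s):
    """Distance to the obstacle: the echo travels there and back."""
    return SPEED_OF_SOUND * round_trip_s / 2.0

def alert_message(round_trip_s, threshold_m=2.0):
    """Return an alert string within the 2 m range, else None."""
    d = echo_to_distance(round_trip_s)
    return f"obstacle at {d:.1f} m" if d <= threshold_m else None
```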
Shweta Kumawat1, Tanveer Habib Sardar2, 1Department of Computer Science and Engineering, Jain University, Kanakapura, Karnataka, India, 2Assistant Professor, Department of Computer Science and Engineering, Jain University, Kanakapura, Karnataka, India
Satellite imagery provides the initial data for cyclone detection and forecasting. To mitigate the damage caused by cyclones, we applied interpolation and data augmentation approaches to enhance the temporal resolution and modify the attributes of a specific dataset. The algorithm requires a classical approach in its pre-processing steps. Using 14 distinct constraint optimization techniques, the estimations of three optical flow methods are tested internally. A deep learning model is trained and evaluated on the augmented and categorized storm data for cyclone identification and for pinpointing the cyclone vortex, yielding at least 90% accuracy. The work analyzes two remote sensing datasets, consisting of QuikSCAT satellite data and precipitation data from TRMM incorporated with various satellites, for feature extraction. The results and analysis show that the methodology met the objective of the project.
Regression, Interpolation, optical data, Cyclone intensity, Convolutional Neural Network.
Shruti Wadhwa1, and Karuna Babber2, 1Chief Operating Officer, Nidus Technologies Pvt. Ltd., Chandigarh, India, 2Assistant Professor, Post Graduate Government College, Chandigarh, India
Twitter sentiment analysis is the way to examine polarity in tweeted opinions. The computational process involves implementing machine learning classifiers to categorize the tweets into positive, negative and neutral sentiments. Identifying a suitable classifier for the task is a prime issue. In this paper we present a performance comparison of base classification techniques, namely Decision Tree, Random Forest, Naive Bayes, K-Nearest Neighbour and Logistic Regression, on the analysis of tweets. The results show that Logistic Regression analyzes tweets with the highest accuracy rate of 86.51%, while the weakest performer is K-Nearest Neighbour with an average accuracy rate of 50.40%.
Twitter, sentiment analysis, machine learning, classifiers and algorithms.
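The comparison methodology above (train several classifiers on the same split, rank by accuracy) can be sketched with two simple stand-in models; these are illustrative substitutes for the paper's five classifiers, not its implementation:

```python
# Tiny comparison harness: a majority-class baseline and a 1-nearest-
# neighbour classifier evaluated with the same accuracy metric.

def majority(train_X, train_y):
    label = max(set(train_y), key=train_y.count)
    return lambda x: label

def one_nn(train_X, train_y):
    def predict(x):
        d = [sum((a - b) ** 2 for a, b in zip(x, tx)) for tx in train_X]
        return train_y[d.index(min(d))]
    return predict

def accuracy(model, X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)
```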
Zhuang Liu and Yuanping Zhu, School of Computer and Information Engineering, Tianjin Normal University, Tianjin, China
This paper studies the license plate recognition problem under complex backgrounds and license plate tilt, which existing methods cannot solve well. We propose an end-to-end correction network based on deep learning. The model contains three parts: a correction network, a residual module and a sequence module, which are responsible for correcting license plate distortion, extracting image features and recognizing license plate characters, respectively. In the experiments, we studied the effects of complex backgrounds such as light, rain and snow, as well as the inclination and distortion of license plates, on the accuracy of license plate recognition. The experiments use the Chinese Academy of Sciences CCPD dataset, which covers almost all license plate data in natural scenes. The experimental results show that, compared with existing license plate recognition algorithms, our algorithm achieves an accuracy improvement on the test set, averaging about 5% in complex scenarios.
Correction Network, Convolutional Neural Network, License Plate Recognition, Smart Transportation.
Jianyong Xue1, Olivier L. Georgeon1,2 and Salima Hassas1, 1LIRIS CNRS UMR5205, Université Claude Bernard Lyon 1, Lyon, France, 2LBG UMRS 449, Université Catholique de Lyon, Lyon, France
During the initial phase of cognitive development, infants exhibit amazing abilities to generate novel behaviours in unfamiliar situations and to explore actively, learning effectively while lacking extrinsic rewards from the environment. These abilities set them apart from even the most advanced autonomous robots. This work seeks to contribute to understanding and replicating some of these abilities. We propose the Bottom-up hiErarchical sequential Learning algorithm with Constructivist pAradigm (BEL-CA) to design agents capable of learning autonomously and continuously through interaction. The algorithm makes no assumption about the semantics of input and output data, nor does it rely upon an a priori model of the world in the form of a set of states and transitions. Besides, we propose a toolkit called GAIT (Generating and Analysing Interaction Traces) to analyse the learning process at run time. We use GAIT to report and explain the detailed learning process and the structured behaviours that the agent learns at each decision step. We report an experiment in which the agent learned to interact successfully with its environment and to avoid unfavourable interactions using regularities discovered through interaction.
cognitive development, constructivist learning, hierarchical sequential learning, self-adaptation.
A. V. H Sai Prasad1*, Dr. G. V. S. Rajkumar2, 1Research Scholar, Department of Computer Science and Engineering, GITAM Institute of Technology, 2Professor, Department of Computer Science and Engineering, GITAM Institute of Technology, GITAM (Deemed to be University), Visakhapatnam - 530045, India
On the internet, a number of services have become flexible and cost-effective because of cloud computing. Security is the major hitch in cloud computing, and many researchers have studied and discussed the problems relating to this issue. Various techniques are required to ensure the integrity of data, which is an integral part of cloud storage adoption. In this work, five different trust attributes are collected from a third party and its trust model, and the integrity of data is assured through the servers. For optimal scheduling, the Ant Lion Optimizer (ALO) algorithm is proposed and contrasted with Particle Swarm Optimization (PSO).
Cloud computing, data integrity, third party trust model, Particle Swarm Optimization (PSO) and Ant Lion Optimizer (ALO) Algorithm.
Ruben Ventura, Independent Security Researcher
This paper presents new and evolved methods to perform Blind SQL Injection attacks. These are much faster than the current publicly available tools and techniques due to various reasons. Implementing these methods within carefully crafted code has resulted in the development of the fastest tools in the world to extract information from a database through Blind SQL Injection vulnerabilities. The nature of such attack vectors will be explained in this paper, including all of their intrinsic details.
Web Application Security, Blind SQL Injection, Attack Optimization, New Exploitation Methods.
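The abstract above does not disclose the paper's specific optimizations, but as background, the classic bisection technique already shows why blind extraction cost is logarithmic rather than linear: one byte costs 7 oracle queries instead of up to 127. A simulated-oracle sketch (in a real attack each oracle call is one HTTP request):

```python
# Bisection over the byte value using a boolean oracle of the form
# "is ASCII(char) > mid ?", as exposed by a blind SQL injection point.
# The oracle here is simulated for illustration.

def extract_char(oracle):
    lo, hi, queries = 0, 127, 0
    while lo < hi:
        mid = (lo + hi) // 2
        queries += 1
        if oracle(mid):        # "is the byte's code point > mid ?"
            lo = mid + 1
        else:
            hi = mid
    return chr(lo), queries

def extract_string(secret):
    out, total = "", 0
    for ch in secret:
        c, q = extract_char(lambda mid, ch=ch: ord(ch) > mid)
        out += c
        total += q
    return out, total
```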
Ravi Yadav, Rajkumar Yadav and Satender Bal Gupta, Indira Gandhi University, Meerpur, Rewari, India
Stability is a key characteristic of every sorting algorithm: an algorithm is either stable or unstable depending on whether it preserves the relative order of equal keys. Stable sorts include Bubble sort, Insertion sort and Merge sort, while unstable sorts include Heap sort, Selection sort and Quick sort. Various studies have compared these sorting algorithms on the basis of platform-dependent factors such as space complexity, but very few researchers have compared them on the basis of platform-independent factors. This study compares these stable and unstable algorithms on the basis of factors such as in-place operation, data sensitivity, and best-, average- and worst-case time complexity, as well as platform-dependent factors. The code was implemented in MATLAB, and the timeit() function was used to measure elapsed time. A stable sort is used when the first-come-first-serve order of the data must be maintained. The results show that, in terms of elapsed time, insertion sort is the fastest stable sort when the input is small, but as the input size increases, merge sort becomes the fastest. Among the unstable algorithms, selection sort is the fastest for small inputs, and heap sort becomes the fastest as the size of the input increases.
Stable Sort, Unstable sort, Elapsed Time.
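The stability property discussed above is easy to demonstrate on (key, arrival-order) records: a stable sort preserves first-come-first-serve order among equal keys, an unstable one may not. A minimal sketch (pure-Python stand-ins for the MATLAB implementations):

```python
# Insertion sort is stable: an element is never moved past an equal key.
def insertion_sort(items, key):
    out = []
    for x in items:
        i = len(out)
        while i > 0 and key(out[i - 1]) > key(x):
            i -= 1
        out.insert(i, x)
    return out

# Selection sort is unstable: the swap can jump an element over equal keys.
def selection_sort(items, key):
    a = list(items)
    for i in range(len(a)):
        m = min(range(i, len(a)), key=lambda j: key(a[j]))
        a[i], a[m] = a[m], a[i]
    return a
```

On [("b",1), ("a",2), ("b",3), ("a",4)], insertion sort keeps ("b",1) before ("b",3) while selection sort reverses them.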
Hussein Aly, Abdelmonem Mohamed, Abdelkarim Erradi, Ahmed Bensaid, Department of Computer Science and Engineering, College of Engineering, Qatar University, Doha, Qatar
Simulation of Urban Mobility (SUMO) lacks the integration of public transport routes, stop locations, and schedules published by public transport operators in the GTFS (General Transit Feed Specification) data format. Integrating such data enables simulating public transport mobility using realistic scenarios; however, SUMO has no built-in feature to load GTFS data directly. This paper presents and evaluates a tool for integrating SUMO with GTFS data to ease the simulation of travel journeys on public transport. The tool was evaluated on a real case study of Doha, Qatar to illustrate its efficiency.
SUMO, GTFS, Traffic simulation, Doha public transport.
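On the GTFS side of the integration described above, stop locations come from the plain-CSV stops.txt file defined by the GTFS reference; a minimal parsing sketch (the mapping onto SUMO bus-stop definitions is omitted, as the paper's tool is not described in detail):

```python
# Parse a GTFS stops.txt (CSV per the GTFS static reference) into a
# {stop_id: (name, lat, lon)} map, the raw material for placing
# simulated bus stops.
import csv, io

def parse_stops(stops_txt):
    reader = csv.DictReader(io.StringIO(stops_txt))
    return {r["stop_id"]: (r["stop_name"], float(r["stop_lat"]), float(r["stop_lon"]))
            for r in reader}
```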
Dunbo Cai, Zhiguo Huang and Ling Qian, Department of Innovation Center, China Mobile (Suzhou) Software, Suzhou, China
Named entity recognition (NER) in natural language processing (NLP) is the problem of identifying a sequence of words in a sentence that mentions a predefined type of object (entity), e.g., a person, organization, location, or time. NER methods are key to extracting knowledge from texts, as entities are fundamental for attaching entity properties or entity relations. However, NER for Chinese texts is trickier because some auxiliary words may be dropped from a sentence, a common phenomenon in Chinese writing for brevity. A commonly dropped Chinese word is ‘的’ (which often functions like the word ‘of’ in English). One obvious effect of this kind of omission is the difficulty it brings to identifying the sub-entities (or nested named entities) contained in a named entity. Previous work has considered the effect of recovering dropped pronouns in Chinese translation tasks. Here we propose a rule-based method to recover the auxiliary word ‘的’ in Chinese text, and study the effect of this recovery on the performance of a state-of-the-art Chinese NER method, FLAT. Experimental results on the Weibo-NER and MSRA-NER datasets show that our method improves on FLAT. This study thus highlights the promise of recovering more types of dropped words for the Chinese NER problem.
Natural Language Processing, Named Entity Recognition, Deep Learning, Dropped Words Recovery.
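A rule-based recovery of ‘的’, as proposed above, can be illustrated with one toy rule: re-insert the auxiliary between a possessive pronoun and a following noun. The tiny lexicons and the rule itself are illustrative, not the paper's actual rule set:

```python
# Toy dropped-word recovery: insert '的' between a possessive pronoun
# and the noun that follows it. Real rules would use POS tags and
# larger lexicons.
PRONOUNS = {"我", "你", "他", "我们"}
NOUNS = {"公司", "学校", "朋友"}

def recover_de(tokens):
    out = []
    for i, tok in enumerate(tokens):
        out.append(tok)
        nxt = tokens[i + 1] if i + 1 < len(tokens) else None
        if tok in PRONOUNS and nxt in NOUNS:
            out.append("的")  # recover the dropped auxiliary word
    return out
```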
Lahouaoui Lalaoui1 and Fouad Dib2, 1,2Laboratory LGE, Department of Electronics, University of M’sila, 28000 Ichbilia, M’sila, Algeria
In this paper, we present a comparative study of image segmentation methods. Several techniques exist for image segmentation, each with its own importance, and they can be grouped into two basic approaches: region-based and edge-based. Each technique can be applied to different images to perform the required segmentation. The segmentation of images from different modalities is an important step in forming realistic tissue models, and image segmentation is a key image processing operation, particularly in the medical field. Diagnostic imaging is an invaluable tool in medicine today: magnetic resonance imaging (MRI), computed tomography (CT), digital mammography, and other imaging modalities provide an effective means for noninvasively mapping the anatomy of a subject, and the segmentation of medical images is of paramount importance in the diagnosis and detection of various pathologies. Current segmentation approaches are reviewed with an emphasis on their advantages and disadvantages for medical imaging applications. We present a comparative study of region-based segmentation methods, namely Fuzzy C-Means, K-Means, mean shift and EM, in which the results are evaluated by three criteria: IntraInter_LN, Intra_LN and CritAtt, using a medical image base including X-ray and ultrasound images. The diversity of segmentation methods offers several ways to segment an image; in our experiments, the EM method consistently gave good results.
Image Segmentation, Modality Image, Criteria, Evaluation.
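Of the four region-based methods compared above, K-Means is the simplest; a minimal sketch on grey-level intensities (a 1-D simplification of the image case, for illustration only):

```python
# Minimal K-Means on scalar intensities: alternate assignment of each
# value to its nearest center and recomputation of centers as cluster means.

def kmeans_1d(values, centers, iters=10):
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:                       # assignment step
            k = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            clusters[k].append(v)
        centers = [sum(c) / len(c) if c else centers[i]   # update step
                   for i, c in enumerate(clusters)]
    return centers

def label(values, centers):
    return [min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            for v in values]
```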
Seyed Mohssen Ghafari, Richard Nichol and Richard A. George, Faethm AI Company, Sydney, Australia
More than twenty million people have been infected by COVID-19, and more than half a million of them have died. A big challenge for health systems around the world is to meet the demand for ventilators and Intensive Care Unit (ICU) beds for those with the worst effects of the infection. Unfortunately, during the COVID-19 pandemic, many countries face ICU bed shortages given the high number of infected people. Hence, healthcare providers have to follow predefined strategies to allocate the available ICU beds in the most efficient way. In such situations, the physicians and health workers who swore the Hippocratic Oath, to treat the ill to the best of their ability, may have to decide not to save some of their patients. This decision puts physicians in a stressful, ethically and emotionally challenging position. In this paper, we propose an automatic approach for managing ICU beds in hospitals to i) achieve the most effective resource allocation in the health system, and ii) relieve physicians of making decisions in this regard. The experimental results demonstrate the effectiveness of our approach.
COVID-19, Resource Allocation, ICU Beds, Random Forest.
Darwis Robinson Manalu1, 2, Muhammad Zarlis1, Herman Mawengkang1, Opim Salim Sitompul1, 1Program Studi Doktor (S3) Ilmu Komputer, Fakultas Ilmu Komputer dan Teknologi Informasi, Universitas Sumatera Utara, Medan, North Sumatera-20222, Indonesia, 2Universitas Methodist Indonesia, Medan, Sumatera Utara, Indonesia
Forest fires are a major environmental issue, creating economic and ecological damage while endangering human lives. An investigation and survey of forest fires was carried out in Aek Godang, North Sumatera, Indonesia, where 26 hotspots were recorded in 2017. In this study, we use a data mining approach to train and test forest fire data and the Fire Weather Index (FWI) derived from meteorological data. The aim of this study is to predict the burned area and identify forest fires in the Aek Godang area, North Sumatera. The results indicate that firefighting and prevention activity may be one reason for the observed lack of correlation; the fact that this dataset exists indicates that there is already some effort going into fire prevention.
Forest fire, Fire Weather Index, Support Vector Machine, Machine Learning.
Björn Friedrich, Enno-Edzard Steen, Sebastian Fudickar and Andreas Hein, Department of Health Services Research, Carl von Ossietzky University, Oldenburg, Germany
Continuous monitoring of the physical strength and mobility of elderly people is important for maintaining their health and treating diseases at an early stage. However, frequent screenings by physicians exceed the available logistic capacities. An alternative approach is the automatic and unobtrusive collection of functional measures by ambient sensors. In this publication, we show the correlation between data from ambient motion sensors and the well-established mobility assessments Short Physical Performance Battery and Tinetti. We use the average number of motion-sensor events for correlation with the assessment scores. The evaluation on a real-world dataset shows a moderate to strong correlation with the scores of standardised geriatric physical assessments.
ubiquitous computing, biomedical informatics, health, correlation, piecewise linear approximation.
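The core analysis above is a correlation between average sensor-event counts and assessment scores; a minimal Pearson correlation, shown as a generic sketch rather than the paper's pipeline:

```python
# Pearson correlation coefficient between two paired samples, e.g.
# per-subject average motion-sensor events vs. assessment scores.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```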
Cao Xiaopeng and Qu Hongyan, School of Computer Science and Technology, Xi’an University of Posts and Telecommunications, Xi’an, China
Massive network traffic and high-dimensional features affect detection performance. In order to improve the efficiency and performance of detection, a whale optimization sparse autoencoder model (WO-SAE) is proposed. Firstly, the sparse autoencoder performs unsupervised training on high-dimensional raw data and extracts low-dimensional features of network traffic. Secondly, the key parameters of the sparse autoencoder are optimized automatically by the whale optimization algorithm to achieve better feature extraction ability. Finally, a gated recurrent unit is used to classify the time-series data. The experimental results show that the proposed model is superior to existing detection algorithms in accuracy, precision, and recall, with an accuracy of 98.69%. The WO-SAE model is a novel approach that reduces the user's reliance on deep learning expertise.
Traffic anomaly detection, Feature extraction, Sparse autoencoder, Whale optimization algorithm.
Cao Xiaopeng and Shi Linkai, School of Computer Science and Technology, Xi’an University of Posts and Telecommunications, Xi’an, China
The Practical Byzantine Fault Tolerance algorithm cannot add nodes dynamically, which limits its practical application. In order to add nodes dynamically, a Dynamic Practical Byzantine Fault Tolerance algorithm (DPBFT) is proposed. Firstly, a new node sends request information to the other nodes in the network, which decide on its identity and request. Then the nodes in the network connect back to the new node and send the block information of the current network, and the new node updates its information. Finally, the new node participates in the next round of consensus, changes the view and selects the master node. This paper abstracts the decision of the nodes into an undirected connected graph, and the final consistency of the graph is used to prove that the proposed algorithm can adapt to the network dynamically. Compared with the PBFT algorithm, DPBFT has better fault tolerance and lower network bandwidth usage.
Practical Byzantine Fault Tolerance, Blockchain, Consensus Algorithm, Consistency Analysis.
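As background to the abstract above, PBFT's safety bounds constrain any dynamic variant: with n replicas the protocol tolerates f faulty nodes only if n ≥ 3f + 1, and commitment needs a quorum of 2f + 1 matching replies; these bounds must be re-evaluated each time n changes. A sketch of that arithmetic:

```python
# Standard PBFT bounds: a network of n replicas tolerates f = (n-1)//3
# Byzantine nodes and requires a quorum of 2f + 1 matching responses.

def max_faulty(n):
    return (n - 1) // 3

def quorum(n):
    return 2 * max_faulty(n) + 1
```

For example, growing from 4 to 7 replicas raises the tolerated faults from 1 to 2 and the quorum from 3 to 5.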
Mathias Mujinga, University of South Africa, South Africa
Cloud computing is an emerging information technology (IT) paradigm which has undoubtedly received significant attention from different spheres of influence. The cloud computing concept entails the provisioning of computing resources from remote locations to individuals or organizations over the internet at any time. The technology is associated with several benefits, the major one being the reduction in capital and operational expenditure. The importance of IT innovations to Small and Medium Enterprises (SMEs) and the role of SMEs in economic growth have been demonstrated empirically. However, the literature indicates that SMEs in developing economies are not taking advantage of emerging technologies like cloud computing to improve operational excellence within their organizations. An in-depth review of academic publications, technical reports, and industry white papers on cloud computing has confirmed security and privacy issues as some of the key inhibitors to cloud computing adoption among SMEs. This is probably because customer data is released to remote locations, which are invisible, and is accessed over the internet, which is already overwhelmed with malicious attackers. This study explores the reasons for the lack of cloud computing adoption among SMEs in developing economies. Once the reasons for this lag are identified, cloud computing service providers, technology policy makers, entrepreneurs and SME executives will have the opportunity to develop appropriate solutions and strategies that specifically meet the needs of SMEs in developing economies. This will promote and accelerate the acceptance rate of cloud services. When SMEs utilize cloud computing as an IT strategy, business growth becomes inevitable and the country's economic growth follows.
Cloud Computing, SMEs, Developing Economies, Cloud Computing Adoption.
Morio Yamauchi1, Kazuhisa Nakano2, Yoshiya Tanaka2 and Keiichi Horio1, 1Department of Human Intelligence Systems, Graduate School of Life Science and Systems Engineering, Kyushu Institute of Technology, Kitakyushu, Japan, 2The First Department of Internal Medicine, University of Occupational and Environmental Health, Kitakyushu, Japan
In this article, we implemented a regression model and conducted experiments to predict disease activity using data from 1929 rheumatoid arthritis patients, to assist in the selection of biologics for rheumatoid arthritis. During modelling, the missing values in the data were completed by three different methods: mean value, self-organizing map and random value. Experimental results showed that the prediction error of the regression model was large regardless of the missing-value completion method, making it difficult to predict the prognosis of rheumatoid arthritis patients.
Rheumatoid Arthritis, Gaussian Process Regression, Self-Organizing Map.
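As a framework-free illustration (not the authors' code), two of the three completion strategies mentioned, mean-value and random-value imputation, can be sketched as follows; the self-organizing-map variant would instead fill a missing entry from the best-matching unit's codebook vector:

```python
import random
from statistics import mean

def impute_mean(rows, col):
    """Replace missing values (None) in one column with the column mean."""
    observed = [r[col] for r in rows if r[col] is not None]
    fill = mean(observed)
    return [[fill if (i == col and v is None) else v
             for i, v in enumerate(r)] for r in rows]

def impute_random(rows, col, seed=0):
    """Replace missing values with a randomly drawn observed value."""
    rng = random.Random(seed)
    observed = [r[col] for r in rows if r[col] is not None]
    return [[rng.choice(observed) if (i == col and v is None) else v
             for i, v in enumerate(r)] for r in rows]
```

Both functions return a new table and leave the original rows untouched, so the three strategies can be compared side by side on the same data.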
Aude Maignan1 and Tony Scott2, 1Laboratoire Jean Kuntzmann, 700 avenue centrale, B.P. 53, 38041 Grenoble Cedex 9, France, 2Institut für Physikalische Chemie, RWTH-Aachen University, 52056 Aachen, Germany
Quantum clustering (QC) is a data clustering algorithm based on quantum mechanics, accomplished by substituting each point in a given dataset with a Gaussian. The width of the Gaussian is σ, a hyper-parameter which can be manually defined and manipulated to suit the application. Numerical methods are used to find all the minima of the quantum potential, as these correspond to cluster centers. Herein, we investigate the mathematical task of expressing and finding all the roots of the exponential polynomial corresponding to the minima of a two-dimensional quantum potential. This is an outstanding task because such expressions are normally impossible to solve analytically. However, we prove that if the points are all included in a square region of size σ, there is only one minimum. This bound is useful not only for knowing how many solutions to look for by numerical means, it also allows us to propose a new numerical approach "per block". This technique decreases the number of particles by approximating some groups of particles with weighted particles. These findings are useful not only for the quantum clustering problem but also for the exponential polynomials encountered in quantum chemistry, solid-state physics and other applications.
Data clustering, Quantum clustering, exponential polynomial.
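A one-dimensional numerical sketch (the paper treats the two-dimensional case analytically; function names and the finite-difference scheme below are ours) illustrates the setup: the wavefunction is a sum of Gaussians of width σ, and cluster centers sit at local minima of the quantum potential V = (σ²/2)·ψ″/ψ, up to an additive constant:

```python
import math

def psi(x, points, sigma):
    """Parzen-style wavefunction: a sum of Gaussians of width sigma."""
    return sum(math.exp(-(x - p) ** 2 / (2 * sigma ** 2)) for p in points)

def potential(x, points, sigma, h=1e-3):
    """Quantum potential V = (sigma^2 / 2) * psi'' / psi (up to a constant),
    with psi'' estimated by central differences."""
    d2 = (psi(x + h, points, sigma) - 2 * psi(x, points, sigma)
          + psi(x - h, points, sigma)) / h ** 2
    return 0.5 * sigma ** 2 * d2 / psi(x, points, sigma)

def count_minima(points, sigma, lo, hi, n=2000):
    """Count interior local minima of V on a uniform grid over [lo, hi]."""
    xs = [lo + (hi - lo) * i / n for i in range(n + 1)]
    vs = [potential(x, points, sigma) for x in xs]
    return sum(1 for i in range(1, n)
               if vs[i] < vs[i - 1] and vs[i] <= vs[i + 1])
```

Consistent with the bound in the abstract, points packed inside an interval of length σ yield a single minimum, while two points several σ apart yield two.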
Youming Zhang, Ruofei Zhu, Zhengzhou Zhu*, Qun Guo, Lei Pang, School of Software and Microelectronics, Peking University
The problem of click-through rate (CTR) prediction is a core issue in many real-world applications such as online advertising and recommendation systems. Effective prediction relies on high-order combinatorial features, which are often hand-crafted by experts. Limited by human experience and high implementation costs, combinatorial features cannot be captured manually in a thorough and comprehensive way. There have been efforts to improve on hand-crafted features automatically by designing feature-generating models such as FMs, DCN and so on. Despite the great success of these structures, most existing models cannot differentiate high-quality feature interactions from the huge number of useless ones, which can easily impair their performance. In this paper, we propose a Higher-Order Attentional Network (HOAN) to select high-quality combinatorial features. HOAN is a hierarchical structure whose multiple crossing layers can learn feature interactions of any order in an end-to-end manner. Inside a crossing layer, each interaction item has its own weight, computed with consideration of global information, to eliminate useless features and select high-quality ones. In addition, HOAN maintains the integrity of individual feature embeddings and offers interpretable feedback on the calculation process. Furthermore, we combine a DNN with HOAN, proposing a Deep & Attentional Crossing Network (DACN) to comprehensively model feature interactions from different perspectives. Experiments on real-world data show that HOAN and DACN outperform state-of-the-art models. The code is available at https://github.com/meRacle-19/HighOrderAttention.
Click-through rate prediction, Feature interaction networks, Attention mechanism, Hybrid model
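The core idea of weighting interaction terms can be shown with a small, framework-free sketch (the dot-product scoring against a global context vector is our simplification, not the paper's exact attention function):

```python
import math
from itertools import combinations

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attentive_interactions(embeddings, context):
    """Weight second-order feature interactions by softmax attention."""
    # Second-order interaction terms: element-wise products of pairs.
    pairs = [[a * b for a, b in zip(e1, e2)]
             for e1, e2 in combinations(embeddings, 2)]
    # Attention logits from the global context; softmax to weights.
    logits = [dot(p, context) for p in pairs]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    weights = [e / total for e in exps]
    # The weighted sum keeps high-quality interactions and damps
    # useless ones, instead of summing all pairs equally.
    dim = len(context)
    return [sum(w * p[i] for w, p in zip(weights, pairs))
            for i in range(dim)]
```

In a full model this weighted output would feed the next crossing layer (for higher orders) and eventually the CTR prediction head.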
Alla M. Eid, Amgad A. Salama and Hassan M. Elkamchouchi, Electrical Engineering Department, Faculty of Engineering, Alexandria University, Egypt
In this paper, a novel [1x2] traveling-wave slotted spiral antenna array based on substrate integrated waveguide (SIW) at Ku band is presented. The proposed array is characterized by a low profile, wide axial ratio bandwidth (ARBW), and wide return loss bandwidth (RLBW). To obtain a wide RLBW and a wide ARBW as well as maximum directivity, a two-arm Archimedean spiral shape has been proposed and investigated. The fabrication results for the two-arm Archimedean spiral elements were an RLBW of more than 18% and a boresight ARBW of 12%. A [1x2] array has been designed and fabricated, with good agreement between the measured and simulated results. The measurements achieved an RLBW of about 28% and an ARBW of 9.4%. Moreover, the directivity of the proposed antenna array is 9 dBi.
Slotted antenna, spiral antenna, SIW antenna, two-arm Archimedean antenna.
Iván Humberto Fuentes Chab1, Damián Uriel Rosado Castellanos1, Olivia Graciela Fragoso Diaz2 and Ivette Stephany Pacheco Farfán1, 1Department of Computer Systems Engineering, Higher Technological Institute of Escárcega, Campeche, México, 2Computer Science Department, National Center for Technological Research and Development (CENIDET), Cuernavaca, México
A serious videogame is a practical and simple way to get students to learn about a complex subject, such as performing integrals or applying first aid, or even to get children to learn to read and write in their native language or another language. To develop a serious videogame, therefore, one needs a guide containing the basic and necessary elements of its software components. This document presents a quality model to evaluate playability, considering the attributes of usability and understandability at the software-component level. This model can provide parameters to measure the quality of a serious videogame before and during its development, providing a margin with the primordial elements that a serious videogame must have so that players reach the desired objective of learning while playing.
Quality Model, Serious Videogames, Playability Metrics.
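Quality models of this kind are typically operationalized as weighted aggregations of attribute metrics; a generic sketch (the attribute names and weights below are illustrative, not the paper's):

```python
def playability_score(metrics, weights):
    """Weighted aggregation of quality-attribute metrics, all in [0, 1].

    metrics and weights are dicts keyed by attribute name; the result
    is normalized by the total weight so it also lies in [0, 1].
    """
    total = sum(weights.values())
    return sum(metrics[a] * w for a, w in weights.items()) / total
```

For example, usability 0.8 and understandability 0.6 with equal weights give a playability score of 0.7.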
Xiang Yu, Fuping Chu*, Junqi Wu, Bo Huang, Sina Weibo Inc
The recommendation system is an important commercial application of machine learning, serving billions of feed views in the information flow every day. In practice, the interaction between users and items makes user interests change over time, so many companies (e.g. ByteDance, Baidu, Alibaba, and Weibo) employ online learning as an effective way to quickly capture user interests. However, hundreds of billions of model parameters make real-time model deployment challenging for online learning, and model stability is another key concern. To this end, we design and implement a symmetric fusion online learning system framework called WeiPS, which integrates model training and model inference. Specifically, WeiPS carries out second-level model deployment through a streaming update mechanism to satisfy the consistency requirement. Moreover, it uses multi-level fault tolerance and real-time domino degradation to achieve the high availability requirement.
Machine Learning, Large-scale Data, Real-time Deploy, Model Stability.
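Conceptually (all names below are invented; the abstract gives no implementation details), a streaming update mechanism of this kind amounts to the trainer emitting versioned parameter deltas that every inference replica applies, keeping serving consistent with training within seconds:

```python
class InferenceReplica:
    """A serving instance that applies streamed parameter deltas."""
    def __init__(self):
        self.params = {}
        self.version = 0

    def apply(self, version, delta):
        self.params.update(delta)
        self.version = version

def stream_updates(updates, replicas):
    # Stand-in for a pub/sub channel: the trainer emits
    # (version, delta) pairs and every replica applies them in order,
    # so all replicas converge to the same model version.
    for version, delta in updates:
        for r in replicas:
            r.apply(version, delta)
```

Sending sparse deltas rather than full checkpoints is what makes second-level deployment feasible when the model has hundreds of billions of parameters.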
Mr. Mahabaleshwar Kabbur and Dr. V. Arul Kumar, School of Computer Science & Applications REVA University, Bengaluru-64, Karnataka, India
Vehicular ad-hoc networks (VANETs) have gained huge attention from the research community due to their ability to provide autonomous vehicular communication. Efficient communication is a prime concern in these networks, and several techniques have been introduced to improve the overall communication of VANETs. Security and privacy are also prime aspects of VANETs, and maintaining data security and privacy in highly dynamic VANETs is a challenging task. Several techniques based on cryptography and key exchange have been introduced recently; however, these techniques address only a limited set of security threats. Hence, in this work we introduce a novel approach for key management and distribution in VANETs to secure the network and its components, and we incorporate a cryptographic approach to secure the data packets. The proposed approach is named Secure Group Key Management and Cryptography (SGKC). The experimental study shows significant improvements in network performance.
Network Protocols, Wireless Network, Mobile Network, Virus, Security, Attacks.
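The abstract does not detail SGKC's construction, but group key management schemes of this kind commonly build on a rekey-on-membership-change pattern, sketched here purely for illustration (the class and method names are ours):

```python
import hashlib
import secrets

class GroupManager:
    """Toy group-key manager: derive a fresh key on every membership change."""
    def __init__(self):
        self.members = set()
        self.epoch = 0
        self.master = secrets.token_bytes(32)

    def group_key(self):
        # Epoch-bound key derivation from the master secret.
        return hashlib.sha256(self.master + str(self.epoch).encode()).digest()

    def join(self, vehicle_id):
        self.members.add(vehicle_id)
        self.epoch += 1  # rekey so newcomers cannot read earlier traffic

    def leave(self, vehicle_id):
        self.members.discard(vehicle_id)
        self.epoch += 1  # rekey so departed vehicles cannot read later traffic
```

In a deployment the fresh group key would be distributed to current members over their individual secure channels, and data packets would then be encrypted under it.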
Jianwei Li, Qingqing Gangstar and Xiaoming Wang, Department of Information Science and Technology, Jinan University, Guangzhou, China
Searchable encryption (SE) allows a client to outsource personal data to an untrusted server while protecting data privacy, and is widely used by corporations and individuals. Recently, many works have shown that forward privacy is a fundamental property of secure SE cryptosystems, and several forward secure SE schemes have been proposed. However, most forward secure schemes focus on single-keyword queries, which limits their wide application in cloud computing. In this paper, we propose an efficient forward secure searchable encryption scheme supporting multi-keyword queries. Our scheme introduces two new storage structures, a temporary map and a search tree. Specifically, when the client uploads multiple files with the same keyword, our scheme uses one encryption key in place of the multiple encryption keys of existing schemes, which improves query efficiency. Moreover, our scheme overcomes the shortcoming of existing schemes that query complexity increases linearly with the number of updated files. We prove our scheme secure with forward privacy in the random oracle model. The experimental results show that our scheme is more efficient than existing schemes.
Cloud computing, Searchable encryption, Data outsourcing, Forward secure, Multi-keyword.
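As a toy illustration of the forward-privacy idea only (a simplified flat index, not the paper's temporary-map and search-tree construction): each update derives a fresh label from a per-keyword counter, so a previously issued search token cannot match entries added later.

```python
import hashlib
import hmac

def prf(key, msg):
    """Pseudorandom label derivation via HMAC-SHA256."""
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

class Client:
    def __init__(self, key):
        self.key = key
        self.counters = {}  # per-keyword update counter

    def add(self, keyword, file_id):
        c = self.counters.get(keyword, 0) + 1
        self.counters[keyword] = c
        # The label reveals nothing about the keyword or counter.
        return prf(self.key, f"{keyword}|{c}".encode()), file_id

    def trapdoor(self, keyword):
        # Tokens cover only counters used so far: forward privacy.
        c = self.counters.get(keyword, 0)
        return [prf(self.key, f"{keyword}|{i}".encode())
                for i in range(1, c + 1)]

class Server:
    def __init__(self):
        self.index = {}

    def store(self, label, file_id):
        self.index[label] = file_id

    def search(self, labels):
        return [self.index[l] for l in labels if l in self.index]

def multi_search(client, server, keywords):
    # Conjunctive multi-keyword query: intersect per-keyword results.
    results = [set(server.search(client.trapdoor(k))) for k in keywords]
    return set.intersection(*results) if results else set()
```

A real scheme would additionally encrypt the stored values and use the paper's structures to avoid query cost growing linearly with the number of updates.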
Khloud Almutairi1, Ahmed Ismail2, Samir Abdlerazek1 and Hazem Elbakry1, 1Information Systems Department, Faculty of Computers and Information, Mansoura University, Egypt, 2Research and Development Department, GlaxyTech, Munich, Germany
Healthcare is an important aspect of human lives and a necessity to be provided to all members of society. With the advancement of technology, mobile applications have become omnipresent; their presence has increased many fold and they are common on smartphones, tablets, and PDAs. Healthcare solutions based on IoT, big data, and machine learning are among the hottest research topics nowadays. In the healthcare area, there are many available solutions based on data science, big data analysis, IoT connections, machine learning, and data mining techniques. Machine learning techniques such as SVM, DTW, and others are presented. This paper introduces a background of recent healthcare solutions using classification methods. Some of them use one technique, while others combine two.
Wearables, Health data, MIoT, smart devices, E-health.
Niklas Hageback, The Virtual Mind Stockholm, Sweden
The automation of human reasoning remains a challenge, as testified by the many, so far unsuccessful, attempts to develop machine-generated human thought patterns. The author argues that the main reason for these failures lies not in the poor employment and calibration of various AI techniques, such as machine learning, but in a faulty understanding within the AI community of what human reasoning really is. At its core lies the question of what we want an automated human reasoning tool to do: is it to replicate the mind of the average Joe (or his smarter cousin), or is it the creation of a superintelligence that by far surpasses the capacity of human intelligence? This paper seeks to highlight some of these misconceptions from a philosophical and psychological perspective, and to outline a model of how we humans actually reason, which forms a theoretical foundation for an automated human reasoning architecture.
Özgür Ugur1, Merve Agirbas1, Kutay Demirören2, Melih Gülçakir2, Günkut Orhan1, Güner Mutlu1 and Murat Can Ganiz3, 1R&D Department, ENKA Systems Yazilim A.S., 2Supply Chain Department, ENKA Insaat ve Sanayi A.S., 3Department of Computer Engineering, Marmara University
The smart management system aims to develop an artificial-intelligence-based system to manage the titles and budgets of large projects and to follow the procurement processes specific to the construction sector. In this study, we present machine-learning-based material classification models that we have developed to support purchasing mechanisms and minimize the possibility of human error and abuse. In the artificial intelligence models, when the name or description of a material is entered as text during a purchase, the most suitable code or codes are automatically determined from a very large and complex standard code system and suggested to the user. Thus, by ensuring the correct classification of the materials to be procured, supply management will be supported in an accurate and rapid manner. At the same time, losses due to classification errors will be reduced.
Machine Learning, Procurement Management, Text Classification
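A minimal baseline for the described task might look like the following token-overlap suggester (the example codes are invented; the paper's models are proper machine-learning classifiers, for which this is only an illustrative stand-in):

```python
from collections import Counter

def tokenize(text):
    return set(text.lower().split())

def suggest_codes(description, labelled_examples, top_k=3):
    """Suggest material codes for a free-text description.

    labelled_examples: list of (text, code) pairs; codes with the most
    token overlap against the query are suggested first.
    """
    query = tokenize(description)
    scores = Counter()
    for text, code in labelled_examples:
        overlap = len(query & tokenize(text))
        if overlap:
            scores[code] += overlap
    return [code for code, _ in scores.most_common(top_k)]
```

A trained text classifier would replace the overlap score with a learned probability over codes, but the interface, text in and ranked code suggestions out, is the same.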