Columns: abstract (string, lengths 0–11.1k), authors (string, lengths 9–1.96k), title (string, lengths 4–353), __index_level_0__ (int64, 3–1,000k)
In the field of automatic face recognition, transformations of facial features due to aging pose a problem: because only a small number of stable features can be extracted, identity verification becomes difficult. The feature-based methods present in the literature are still being developed, with unsatisfactory results caused by high rates of false matching. In this paper we present a new method for verifying matches between SIFT-extracted feature points that uses both the positions and the scales of the feature points. Using this method together with the SIFT descriptor, we develop an identity verification system robust to age-based transformations of facial features. Applying our verification system to the FG-NET database demonstrates the approach's performance. The experimental results show that if a 16.66% false acceptance rate is admitted, an 81.81% true matching rate is obtained.
['Imad Mohamed Ouloul', 'Karim Afdel', 'Abdellah Amghar', 'Zakaria Moutakki']
Automatic face recognition with aging using the invariant features
920,519
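A minimal sketch of the kind of match verification the abstract above describes: SIFT keypoints matched by descriptor distance, then filtered for consistency of keypoint scale (position consistency could be checked analogously). The median-ratio rule and all thresholds are assumptions for illustration, not the paper's actual criterion. Requires opencv-python and numpy.

```python
import cv2
import numpy as np

def verified_matches(img1, img2, ratio=0.75, scale_tol=0.25):
    # Detect SIFT keypoints; each keypoint carries a position (.pt) and a scale (.size).
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    # Tentative matches via Lowe's ratio test on descriptor distances.
    raw = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [p[0] for p in raw
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    if not good:
        return []
    # Verification step (assumed rule): keep matches whose keypoint scale
    # ratio agrees with the median scale ratio over all tentative matches.
    ratios = np.array([kp2[m.trainIdx].size / kp1[m.queryIdx].size for m in good])
    med = np.median(ratios)
    return [m for m, r in zip(good, ratios) if abs(r - med) / med < scale_tol]
```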
Karyotyping, or visually examining and recording chromosomal abnormalities, is commonly used to diagnose and treat disease. Karyotypes are written in the International System for Human Cytogenetic Nomenclature (ISCN), a computationally non-readable language that precludes full analysis of these genomic data. In response, we developed a cytogenetic platform that transfers the ISCN karyotypes to a machine-readable model available for computational analysis. Here we use cytogenetic data from the National Cancer Institute (NCI)-curated Mitelman database to create a structured karyotype language. Then, drug-gene-disease triplets are generated via a computational pipeline connecting public drug-gene interaction data sources to identify potential drug repurposing opportunities.
['Zachary Abrams', 'Andrea L. Peabody', 'Nyla A. Heerema', 'Philip R. O. Payne']
Text Mining and Data Modeling of Karyotypes to aid in Drug Repurposing Efforts.
795,517
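To make the idea of a machine-readable karyotype concrete, here is a toy sketch of parsing one ISCN string into a structured record. Real ISCN parsing is far richer; the regex, field names, and event types below are illustrative assumptions, not the platform's actual schema.

```python
import re

TRANSLOCATION = re.compile(r"t\((?P<chroms>[^)]+)\)\((?P<bands>[^)]+)\)")

def parse_karyotype(iscn: str) -> dict:
    # ISCN fields are comma-separated: count, sex chromosomes, then events.
    fields = [f.strip() for f in iscn.split(",")]
    record = {"chromosome_count": int(fields[0]), "sex": fields[1], "events": []}
    for token in fields[2:]:
        m = TRANSLOCATION.match(token)
        if m:
            record["events"].append({
                "type": "translocation",
                "chromosomes": m.group("chroms").split(";"),
                "breakpoints": m.group("bands").split(";"),
            })
        else:
            record["events"].append({"type": "other", "raw": token})
    return record

# e.g. the Philadelphia-chromosome karyotype:
print(parse_karyotype("46,XY,t(9;22)(q34;q11)"))
```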
Hardware implementation of conventional interval type-2 neuro-fuzzy systems with on-chip learning is essential for real-time applications. However, existing implementations are resource-consuming due to the complexity of their architectures and the use of an iterative procedure for system output estimation. To overcome this problem, we propose a new interval type-2 neuro-fuzzy architecture in which the number of layers is reduced by using Beta membership functions, and a simplified output computing operation is applied. For implementing the Beta functions, an accurate and compact Centered Recursive Interpolation (CRI) method is used. For the on-chip learning system, a new on-line incremental learning algorithm based on the gradient descent technique is applied to adjust its parameters. Furthermore, a synthesis of the corresponding design on a Field Programmable Gate Array (FPGA) platform is carried out for an image denoising application. A performance comparison with existing implementations shows the effectiveness of our chip in terms of resource requirements, speed and denoising performance.
['Manel Elloumi', 'Mohamed Krid', 'Dorra Sellami Masmoudi']
FPGA implementation of a new interval type-2 Beta neuro-fuzzy system with on-chip learning for image denoising application
824,474
['Reuven Bar-Yehuda', 'Gilad Kutiel', 'Dror Rawitz']
1.5-Approximation Algorithm for the 2-Convex Recoloring Problem
811,774
['Rajkumar Buyya']
A status report on IEEE Transactions on Cloud Computing
767,263
We adopt a previously developed model of deep syntactic and semantic processing to support question answering for Bahasa Indonesia, and extend it by adding a number of axioms designed to encode useful knowledge for answering questions, thus increasing the inferential power of the QA system. We believe this approach can increase the robustness of semantic analysis-based QA systems, whilst simultaneously lightening the burden of complexity in designing semantic attachment rules that transduce logical forms from syntactic structures. We show how these added axioms enable the system to answer questions which previously could not have been answered.
['Rahmad Mahendra', 'Septina Dian Larasati', 'Ruli Manurung']
Extending an Indonesian Semantic Analysis-based Question Answering System with Linguistic and World Knowledge Axioms
458,853
In the original Gilbert model of random geometric graphs, nodes are placed according to a Poisson process, and links formed between those within a fixed range. Motivated by wireless network applications, "soft" or "probabilistic" connection models have recently been introduced, involving a "connection function" H(r) that gives the probability that two nodes at distance r directly connect. In many applications, not only in wireless networks, it is desirable that the graph is fully connected, that is, every node is connected to every other node in a multihop fashion. Here, the full connection probability of a dense network in a convex polygonal or polyhedral domain is expressed in terms of contributions from boundary components, for a very general class of connection functions. It turns out that only a few quantities such as moments of the connection function appear. Good agreement is found with connection functions used in previous studies and with numerical simulations.
['Carl P. Dettmann', 'Orestis Georgiou']
Connectivity of networks with general connection functions
557,800
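A Monte Carlo sketch of the setting in the abstract above: Poisson-placed nodes in a unit square, links drawn independently with probability H(r), and the full-connection probability estimated empirically. The Rayleigh-type connection function H(r) = exp(-(r/r0)^2) is one standard example from this literature; the intensity, r0, and square domain are arbitrary illustrative choices. Requires numpy and networkx.

```python
import numpy as np
import networkx as nx

def full_connection_probability(intensity=50, r0=0.25, trials=200, rng=None):
    rng = rng or np.random.default_rng(0)
    connected = 0
    for _ in range(trials):
        n = rng.poisson(intensity)       # Poisson number of nodes
        pts = rng.random((n, 2))         # uniform positions in the unit square
        g = nx.Graph()
        g.add_nodes_from(range(n))
        for i in range(n):
            for j in range(i + 1, n):
                r = np.linalg.norm(pts[i] - pts[j])
                if rng.random() < np.exp(-(r / r0) ** 2):   # connection function H(r)
                    g.add_edge(i, j)
        if n > 0 and nx.is_connected(g):
            connected += 1
    return connected / trials

print(full_connection_probability())
```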
The University of Évora's participation in QA@CLEF-2007 was based on the Senso question answering system. This system uses an ontology with semantic information to support some of its operations. The full text collection is indexed, and for each question a search is performed for documents that may contain an answer. An ad-hoc module and a logic-programming-based module then look for answers, and the solution with the highest weight is returned. The results indicate that the system is best suited to the definition question type.
['José Saias', 'Paulo Quaresma']
The University of Évora's Participation in QA@CLEF-2007
555,787
This study explores the use of technology for teaching purposes in a university classroom environment, providing evidence both on the technological tools used and on the instructive tasks performed with them. Based on a survey of 112 professors, the tools were classified into mobile and social media (SM), learning management systems (LMS), and graphic and dynamic visualizations (Graphic). For the faculty, the study collected a number of sociodemographic variables such as gender, age, discipline, academic background and experience in online teaching. Results show limited use of technology in the classroom, highlighting only those technologies related to graphic and dynamic presentations, while the use of LMS tools and SM tools is correlated. However, significant differences in classroom technology use were found between teachers from different disciplines, academic backgrounds and levels of prior experience with online teaching, while no differences were detected for the other variables. This investigation thus contributes to studies that aim to advance the effective integration of technology in teaching and learning by introducing relevant teacher variables.
['Albert Cubeles', 'David Riu']
Teachers' use of technology in the university classroom
955,566
This paper presents a novel technique for improving face recognition performance by predicting system failure and, if necessary, perturbing eye coordinate inputs and re-predicting failure as a means of selecting the optimal perturbation for correct classification. This relies on a method that can accurately identify patterns leading to more accurate classification, without modifying the classification algorithm itself. To this end, a neural network is used to learn 'good' and 'bad' wavelet transforms of similarity score distributions from an analysis of the gallery. In production, face images with a high likelihood of having been incorrectly matched are reprocessed using perturbed eye coordinate inputs, and the best results are used to 'correct' the initial results. The overall approach suggests a more general strategy of using input perturbations to increase classifier performance. Results for both commercial and research face-based biometrics are presented using both simulated and real data. The statistically significant results show strong potential for improving system performance, especially with uncooperative subjects.
['Terry P. Riopka', 'Terrance E. Boult']
Classification enhancement via biometric pattern perturbation
204,723
In smart buildings, services are stored as rules and realized by rule analysis and execution. However, irrational rule contents and conflicts between rules may cause confusion and maloperation in the rule system. In this paper, we propose a lightweight rule verification and resolution framework that solves this problem by providing content anomaly detection and rule conflict detection. We also provide a quick resolution strategy for rule conflicts based on conflict-scenario analysis, so as to guarantee that the rule system performs appropriately.
['Brandon Kyle Hamilton', 'M.R. Inggs', 'Hayden Kwok Hay So']
Scheduling Mixed-Architecture Processes in Tightly Coupled FPGA-CPU Reconfigurable Computers
214,556
We explore an optical network architecture which employs dense wavelength division multiplexing (WDM) technology and passive waveguide grating routers (WGRs) to establish a virtual topology based on lightpath communication. We examine the motivation and the technical challenges involved in this approach, propose and examine the characteristics of a network design algorithm, and provide some illustrative performance results.
['Dhritiman Banerjee', 'Jeremy Frank', 'Biswanath Mukherjee']
Passive optical network architecture based on waveguide grating routing
127,803
One of the most famous algorithmic meta-theorems states that every graph property that can be defined by a sentence in counting monadic second order logic (CMSOL) can be checked in linear time for graphs of bounded treewidth, which is known as Courcelle's Theorem. These algorithms are constructed as finite state tree automata, and hence every CMSOL-definable graph property is recognizable. Courcelle also conjectured that the converse holds, i.e., every recognizable graph property is definable in CMSOL for graphs of bounded treewidth. We prove this conjecture for k-outerplanar graphs, which are known to have treewidth at most 3k-1.
['Lars Jaffke', 'Hans L. Bodlaender']
Definability equals recognizability for $k$-outerplanar graphs
591,672
We consider the problem of communication over a network containing a hidden and malicious adversary that can control a subset of network resources and aims to disrupt communications. We focus on an omniscient node-based adversary, i.e., the adversary can control a subset of nodes and knows the message, the network code and the packets on all links. Characterizing information-theoretically optimal communication rates as a function of network parameters and bounds on the adversarially controlled network is in general open, even for unicast (single source, single destination) problems. In this work we characterize the information-theoretically optimal randomized capacity of such problems, i.e., under the assumption that the source node shares (an asymptotically negligible amount of) independent common randomness with each network node a priori. We propose a novel computationally efficient communication scheme whose rate matches a natural information-theoretic "erasure" outer bound on the optimal rate. Our schemes require no prior knowledge of network topology and can be implemented in a distributed manner as an overlay on top of classical distributed linear network coding.
['Peida Tian', 'Sidharth Jaggi', 'Mayank Bakshi', 'Oliver Kosut']
Arbitrarily varying networks: Capacity-achieving computationally efficient codes
723,553
A decimal notation satisfies many simple mathematical properties and is a useful tool in the analysis of trees. A practical method is presented that compresses the decimal codes while maintaining fast determination of relations (e.g. ancestor, descendant, brother, etc.). A special node, called a kernel node, covering many common subcodes of the other codes, is defined, and a compact data structure using the kernel nodes is presented. Where n and m are the numbers of total and kernel nodes respectively, it is proved that encoding a decimal code takes constant time, that the worst-case time complexity of compressing the decimal codes is O(n + m^2), and that the size of the data structure is proportional to m. Experimental results for some hierarchical semantic primitives for natural language processing show that the ratio m/n is extremely small, ranging from 0.047 to 0.13.
['Jun-ichi Aoe']
An efficient algorithm of compressing decimal notations for tree structures
148,397
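The relation tests that make decimal (Dewey-style) tree codes attractive reduce to simple prefix checks, as this sketch shows; the paper's kernel-node compression and compact data structure are not reproduced here.

```python
def parts(code: str):
    return code.split(".")

def is_ancestor(a: str, b: str) -> bool:
    # a is an ancestor of b iff a's code is a proper prefix of b's code.
    pa, pb = parts(a), parts(b)
    return len(pa) < len(pb) and pb[:len(pa)] == pa

def is_descendant(a: str, b: str) -> bool:
    return is_ancestor(b, a)

def are_siblings(a: str, b: str) -> bool:
    # Siblings share the same parent code and differ in the last component.
    pa, pb = parts(a), parts(b)
    return a != b and len(pa) == len(pb) and pa[:-1] == pb[:-1]

assert is_ancestor("1.2", "1.2.3.5")
assert is_descendant("1.2.3.5", "1.2")
assert are_siblings("1.2.3", "1.2.7")
```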
['Anneli Heimbürger', 'Jari Multisilta', 'Kai Ojansuu']
Time Contexts in Document-Driven Projects on the Web: From Time-Sensitive Links towards an Ontology of Time.
554,948
Human judgment has been shown to be thin-sliced in nature, i.e., accurate perception can often be achieved from a short duration of exposure to expressive behaviors. In this work, we develop a mutual information-based framework to select the most emotion-rich 20% of local multimodal behavior segments within a 3-minute affective dyadic interaction in the USC CreativeIT database. We obtain prediction accuracies of 0.597, 0.728, and 0.772 (measured by Spearman correlation) for an actor's global (session-level) emotion attributes (activation, dominance, and valence) using Fisher-vector encoding and support vector regression built on these 20% of multimodal emotion-rich behavior segments. Our framework achieves better accuracy, by a significant margin, than using the interaction in its entirety or a variety of other data selection baseline methods. Furthermore, our analysis indicates that the highest prediction accuracy can be obtained using only 20%–30% of the data within each session, i.e., additional evidence for the thin-slice nature of affect perception.
['Wei-Cheng Lin', 'Chi-Chun Lee']
A thin-slice perception of emotion? An information theoretic-based framework to identify locally emotion-rich behavior segments for global affect recognition
802,847
Environmental problems are complex in nature and are usually exacerbated by unsustainable trends that are driven by anthropogenic, economic and development factors. One of the key challenges of sustainability is the ability to examine the range of possible future paths of combined social and environmental conditions, under consideration of human perceptions, uncertainty and cause-effect factors, with possible feedback. Moreover, policy formulation and evaluation need to be addressed in an integrated and holistic way, where scoping goes beyond specific sectorial interests and includes multi-disciplinary knowledge, involving stakeholders from diverse backgrounds and with different objectives and aspirations. In this work we describe FCMp, a Fuzzy Cognitive Mapping methodology to help experts and lay experts develop, simulate and assess the impact of different policy alternatives on key variables in a particular environmental problem according to multiple perceptions. The FCMp allows for: 1) integration of cross-policy themes by modeling socio-economic and ecological indicators with possible feedback between them, 2) representation of qualitative knowledge using linguistic terms, and 3) stakeholder participation through scenario development and analysis. The FCMp uses “what-if” scenario analysis in order to derive future implications from current states. We demonstrate the usefulness of FCMp using a case study to elaborate and compare scenarios for a specific environmental issue.
['Asmaa Mourhir', 'Tajjeeddine Rachidi', 'Mohammed Karim']
Employing Fuzzy Cognitive Maps to support environmental policy development
551,255
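A minimal sketch of an FCM "what-if" run in the spirit of the abstract above, assuming the common update x(t+1) = f(W x(t)) with a sigmoid squashing function; scenario drivers are clamped during iteration. The concept names and weight matrix are made up for illustration.

```python
import numpy as np

def fcm_run(W, x0, clamp=None, steps=50):
    f = lambda v: 1.0 / (1.0 + np.exp(-v))    # sigmoid squashing function
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = f(W @ x)
        if clamp:                             # hold scenario drivers fixed
            for i, v in clamp.items():
                x[i] = v
    return x

# Toy map: 0 = industrial activity, 1 = pollution, 2 = ecosystem health
W = np.array([[0.0,  0.0, 0.0],
              [0.8,  0.0, 0.0],    # activity raises pollution
              [0.0, -0.9, 0.0]])   # pollution degrades ecosystem health
baseline = fcm_run(W, [0.5, 0.5, 0.5])
scenario = fcm_run(W, [0.5, 0.5, 0.5], clamp={0: 0.9})  # "what if" high activity
print(baseline.round(2), scenario.round(2))
```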
The purpose of this paper is to present a comparative study of the performance of altered fingerprint detection algorithms. Different algorithms from different institutions have been evaluated on two different datasets. Both datasets feature real alterations on fingers, and the ground truth regarding the alteration is known a priori, as, in some cases, corresponding pre-altered fingerprints were also available. The performance obtained on both datasets, by either reference state-of-the-art or custom-built algorithms, is better than the 10% EER reported in previous studies [1].
['Rudolf Haraksim', 'Alexandre Anthonioz', 'Christophe Champod', 'Martin Olsen', 'John Ellingsgaard', 'Busch Christophe']
Altered fingerprint detection – algorithm performance evaluation
697,888
This research paper describes the design and prototyping of a simulation tool that provides a platform for studying how behavior of proteins in the cell membrane influences macro-level, emergent behaviors of cells. Whereas most current simulation tools model cells as homogeneous objects, this new tool is designed to modularly represent the cell's complex morphology and the varying distribution of proteins across the membrane. The simulation tool uses a physics engine to manage motion and collisions between objects. It also represents dynamic fluid environments, experimental surfaces, attachment bonds and interactions between the dynamically changing cell surface proteins. The prototype tool is described along with proposals for its use and further development.
['Terri Applewhite-Grosso', 'Nancy D. Griffeth', 'Elisa Lannon', 'Uchenna Unachukwu', 'Stephen Redenti', 'Naralys Batista']
A multi-scale, physics engine-based simulation of cellular migration
653,084
Efficient OFDM transmission has been limited by power amplifier (PA) non-linearity combined with OFDM's high peak-to-average power ratio (PAPR). We show that minimization of the PAPR of an OFDM signal, subject to constraints on allowable constellation error and out-of-band noise, can be formulated as a convex optimization problem. The globally optimal solution can be calculated with low complexity using known algorithms. A system model is proposed for transmitting OFDM signals with maximum power-efficiency regardless of PA linearity. No change in receiver structure is required. Simulation results are presented for the 802.11a WLAN standard.
['Alok Aggarwal', 'Teresa H. Meng']
Minimizing the peak-to-average power ratio of OFDM signals via convex optimization
227,720
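A sketch of the convex formulation described above, using cvxpy: perturb each frequency-domain symbol within an allowed constellation-error radius and minimize the peak magnitude of the time-domain signal. The out-of-band noise constraint is omitted and the error radius is an arbitrary illustrative value; this is not the paper's exact program.

```python
import numpy as np
import cvxpy as cp

N = 64
rng = np.random.default_rng(0)
qpsk = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
F = np.fft.ifft(np.eye(N))                 # IDFT matrix: time = F @ freq

e = cp.Variable(N, complex=True)           # per-subcarrier perturbation
x = F @ (qpsk + e)                         # time-domain samples
prob = cp.Problem(cp.Minimize(cp.max(cp.abs(x))),
                  [cp.abs(e) <= 0.15])     # constellation-error constraint
prob.solve()

orig = np.abs(F @ qpsk)
opt = np.abs(F @ (qpsk + e.value))
papr = lambda m: 10 * np.log10(m.max() ** 2 / np.mean(m ** 2))
print(f"PAPR: {papr(orig):.2f} dB -> {papr(opt):.2f} dB")
```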
The design of variable fractional delay filters is investigated. First, the discrete Fourier transform (DFT) interpolation approach is described. Then, the Taylor series expansion and the DFT interpolation formula are used to design a variable fractional delay filter which can be implemented using the Farrow structure. Finally, a numerical comparison with the conventional Lagrange-type variable fractional delay filter is made to demonstrate the effectiveness of this new design approach.
['Chien-Cheng Tseng', 'Su-Ling Lee']
Closed-form design of variable fractional delay filter using discrete Fourier transform
251,128
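The frequency-domain formula underlying DFT-based fractional delay can be sketched directly: rotate each DFT bin's phase by exp(-j 2π k d / N). The Taylor-series/Farrow-structure realization from the paper is not reproduced; this shows only the underlying interpolation idea, which is exact for band-limited (circular) signals.

```python
import numpy as np

def fractional_delay(x, d):
    """Delay real signal x by d samples (d may be fractional), circularly."""
    N = len(x)
    k = np.fft.fftfreq(N) * N                 # bin indices 0..N/2-1, -N/2..-1
    X = np.fft.fft(x)
    y = np.fft.ifft(X * np.exp(-2j * np.pi * k * d / N))
    return y.real                             # tiny imaginary residue (Nyquist bin)

n = np.arange(64)
x = np.sin(2 * np.pi * 3 * n / 64)
y = fractional_delay(x, 0.5)                  # half-sample delay
print(np.allclose(y, np.sin(2 * np.pi * 3 * (n - 0.5) / 64), atol=1e-9))
```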
['Fatemeh Raazaghi']
Auto-FAQ-Gen: Automatic Frequently Asked Questions Generation
621,578
In this paper, we consider a probabilistic model for real-time task systems with probabilistic worst-case execution times, probabilistic minimum inter-arrival times and probabilistic deadlines. We propose an analysis computing the response time distributions of tasks scheduled on one processor under a task-level fixed-priority preemptive scheduling policy. The complexity of our method is analyzed, and it is improved by re-sampling techniques on worst-case execution time distributions and/or minimal inter-arrival time distributions. The improvements are shown through experimental results. Also, experiments are conducted in order to investigate the improvement obtained by using a probabilistic model, in terms of precision and schedulability gained, as opposed to deterministic worst-case reasoning.
['Dorin Maxim', 'Liliana Cucu-Grosjean']
Response Time Analysis for Fixed-Priority Tasks with Multiple Probabilistic Parameters
280,328
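Two core operations in this style of probabilistic response-time analysis can be sketched compactly: convolving discrete execution-time distributions, and a pessimistic re-sampling step that shrinks a distribution's support. The folding-upward rule and the toy distributions are illustrative assumptions, not the paper's exact algorithm.

```python
def convolve_pmf(a, b):
    """a, b: dicts {value: probability}; returns the pmf of the sum."""
    out = {}
    for va, pa in a.items():
        for vb, pb in b.items():
            out[va + vb] = out.get(va + vb, 0.0) + pa * pb
    return out

def resample(pmf, k):
    """Keep the k largest values; fold removed mass upward (pessimistic),
    so the result stochastically dominates the original distribution."""
    vals = sorted(pmf)                        # ascending execution times
    keep, drop = vals[-k:], vals[:-k]
    out = {v: pmf[v] for v in keep}
    out[keep[0]] += sum(pmf[v] for v in drop)
    return out

c1 = {2: 0.5, 3: 0.4, 5: 0.1}                 # execution-time pmf of task 1
c2 = {1: 0.7, 4: 0.3}                         # execution-time pmf of task 2
print(convolve_pmf(c1, c2))
print(resample(c1, 2))
```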
From September 21st to September 26th 2008, the Dagstuhl Seminar 08391 "Social Web Communities" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.
['Harith Alani', 'Steffen Staab', 'Gerd Stumme']
08391 Abstracts Collection - Social Web Communities.
762,341
The workflow programming paradigm has grown considerably in recent years. This model is useful for representing flows of control and facilitates managing the complexity of processes that have multiple dependent tasks. With the emergence of e-Science, workflow is becoming a standard for the management of scientific processes with massive data sets. Within workflow execution, the scheduling of tasks is essential for efficiency and for speeding up the arrival of process results. In this paper we consider the execution environment to be a computational grid, which is dynamic, non-dedicated, and has heterogeneous resources. We present a strategy for scheduling dependent-task processes that handles scheduling and executing more than one process at the same time, potentially using resources in common. The algorithm is dynamic and adaptive, rescheduling tasks that are in the queue of resources not delivering good performance. Simulations show that the proposed strategy can produce better schedules by enhancing resource usage.
['Luiz F. Bittencourt', 'Edmundo Roberto Mauro Madeira']
Fulfilling Task Dependence Gaps for Workflow Scheduling on Grids
398,143
['Laura Po']
Automatic Lexical Annotation Applied to the SCARLET Ontology Matcher (Extended Abstract).
765,790
Parasitic capacitance in test hardware can affect the performance of a test and lead to poor fault coverage and/or yield loss. In an ATE setup, characterizing the stray capacitance using external instruments is difficult for practical reasons. In this paper, we present a single probe technique that uses available tester resources to measure stray capacitance of test hardware with high accuracy and precision. The proposed method uses a time measurement sub-system and a current source of the ATE for measuring stray capacitance from their charging and discharging characteristics. This capacitance measurement technique is also used to detect and diagnose faults in different tester hardware components. Measurement results and case studies on the application of this technique are presented.
['A. Haider', 'P. Variyam', 'Abhijit Chatterjee', 'J. Ridley']
Measuring stray capacitance on tester hardware
462,150
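The charge-time principle behind this kind of single-probe measurement reduces to C = I·Δt/ΔV for a constant current source, as in this worked sketch; all numbers are illustrative, not from the paper.

```python
# Constant current I charging a capacitance C between two comparator
# thresholds satisfies I = C * dV/dt, hence C = I * dt / dV.
I = 100e-6              # tester current source: 100 uA
V_LO, V_HI = 0.5, 2.5   # comparator thresholds in volts
dt = 9.4e-6             # measured charge time between thresholds: 9.4 us

C = I * dt / (V_HI - V_LO)
print(f"stray capacitance = {C * 1e12:.0f} pF")   # -> 470 pF
```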
['Wei Yang', 'Jiarong Shi', 'Xiuyun Zheng', 'Yongfeng Pang']
Hesitant interval-valued intuitionistic fuzzy linguistic sets and their applications
953,300
In this paper, we consider Distributed Estimation (DES) in a Wireless Sensor Network (WSN) and assume that the number of sensors in the WSN is larger than the available number of transmission slots. With classic DES, the sensors independently transmit the sampled digitized data. However, the WSN is an uplink multiuser channel where multiple sources share the channel for communicating data to a Fusion Center (FC). To this aim, we adopt the optimal communication scheme for this setup that suggests interfering transmissions and the use of Successive Interference Cancelation (SIC) at the FC. We propose a joint SIC decoder and linear Minimum-Mean-Square-Error (MMSE) estimator for digital interfering transmission of correlated data. We further introduce an optimization framework that schedules and allocates power to the sensors optimally. We formulate the problem in two ways: an expected distortion minimization problem under a total power budget, and a transmission power minimization problem under a distortion constraint. For both cases, we consider the system performance under different operating conditions, and we demonstrate the efficiency of the proposed scheme compared to a system that employs optimized sensor selection under orthogonal transmissions.
['Antonios Argyriou', 'Ozgu Alay']
Distributed Estimation in Wireless Sensor Networks With an Interference Canceling Fusion Center
679,889
We apply novel utility-based scheduling schemes to uplink single carrier frequency division multiple access (SC-FDMA) systems. Two utility functions are used for managing two-dimensional resources (time and frequency): user data rate for maximizing system capacity, and logarithmic user data rate for proportional fairness. To develop utility-based scheduling algorithms, we revise the channel-dependent scheduling (CDS) schemes derived in our previous work (J. Lim et al.). The results show that proportional fair scheduling with logarithmic user data rate can improve the rate-sum capacity by up to 100% for localized FDMA and 30% for interleaved FDMA, with the capacity gains equally shared among all users.
['Junsung Lim', 'Hyung G. Myung', 'Kyungjin Oh', 'David J. Goodman']
Proportional Fair Scheduling of Uplink Single-Carrier FDMA Systems
241,377
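For reference, the scheduling rule that a logarithmic (proportional-fair) utility induces can be sketched in a few lines: each slot, serve the user maximizing the ratio of instantaneous rate to exponentially averaged throughput. The Rayleigh channel model and constants are toy assumptions, not the paper's SC-FDMA setup, which additionally allocates localized or interleaved subcarriers.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_slots, tc = 4, 10_000, 100.0
avg = np.full(n_users, 1e-6)                # smoothed throughput per user

served = np.zeros(n_users)
for _ in range(n_slots):
    rates = rng.rayleigh(size=n_users)      # instantaneous feasible rates
    u = int(np.argmax(rates / avg))         # proportional-fair metric r_i / R_i
    served[u] += rates[u]
    scheduled = np.zeros(n_users)
    scheduled[u] = rates[u]
    avg = (1 - 1 / tc) * avg + (1 / tc) * scheduled   # EWMA throughput update

print((served / n_slots).round(3))          # long-run per-user throughput
```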
['Chelsea C. White']
Review of "Dynamic Programming and Stochastic Control" by Dimitri P. Bertsekas.
764,304
['Yuan Kong', 'Xiaolong Shi', 'Jinbang Xu', 'Xinquan Huang']
Reversible Spiking Neural P Systems with Astrocytes.
785,945
Self-organizing network (SON) technology aims at autonomously deploying, optimizing and repairing radio access networks (RANs). SON algorithms typically use key performance indicators (KPIs) from the RAN. It is shown that in certain cases, it is essential to take into account the impact of the backhaul state in the design of the SON algorithm. We revisit the base station (BS) load definition taking into account the backhaul state. We provide an analytical formula for the load along with a simple estimator for both elastic and guaranteed bit-rate (GBR) traffic. We incorporate the proposed load estimator in a self-optimized load balancing (LB) algorithm. Simulation results for a backhaul constrained heterogeneous network illustrate how the correct load definition can guarantee a proper operation of the SON algorithm.
['Abdoulaye Tall', 'Zwi Altman', 'Eitan Altman']
Self-Optimizing Load Balancing With Backhaul-Constrained Radio Access Networks
582,914
In this paper, we present DeFer, a fast, high-quality and nonstochastic fixed-outline floorplanning algorithm. DeFer generates a non-slicing floorplan by compacting a slicing floorplan. To find a good slicing floorplan, instead of searching through numerous slicing trees by simulated annealing as in traditional approaches, DeFer considers only one single slicing tree. However, we generalize the notion of the slicing tree based on the principle of Deferred Decision Making (DDM). When two subfloorplans are combined at each node of the generalized slicing tree, DeFer does not specify their orientations, the left-right/top-bottom order between them, or the slice-line direction. DeFer does not even specify the slicing tree structures for small subfloorplans. In other words, we defer the decisions on these factors, which are specified arbitrarily at an early step in traditional approaches. Because of DDM, one slicing tree actually corresponds to a huge number of slicing floorplan solutions, all of which are efficiently kept in one single shape curve. With the final shape curve, it is straightforward to choose a good floorplan fitting into the fixed outline. Several techniques are also proposed to further optimize the wirelength. Experimental results on benchmarks with only hard blocks and with both hard and soft blocks show that DeFer achieves the best success rate, the best wirelength and the best runtime on average compared with other state-of-the-art floorplanners.
['Jackey Z. Yan', 'Chris Chu']
DeFer: deferred decision making enabled fixed-outline floorplanner
239,062
['Adam Shwartz', 'Balakrishna J. Prabhu', 'Gregory Miller', 'Konstantin Avrachenkov', 'Eitan Altman', 'Ishai Menache']
Dynamic Discrete Power Control in Cellular Networks
593,961
One of the problems in developing document indexing and retrieval applications is the use of hierarchies. In this paper we describe a method of automatic hierarchical indexing using the traditional relational data model. The main idea is to assign continuous numbers to the words (grammatical forms of the words) that characterize the nodes in the hierarchy (concept tree). One of the advantages of the proposed scheme is its simplicity. The system that implements such an indexing scheme is described.
['Alexander F. Gelbukh', 'Grigori Sidorov', 'Adolfo Guzmán-Arenas']
Relational Data Model in Document Hierarchical Indexing
396,942
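A standard stand-in for hierarchy numbering in a relational table is interval (nested-set) numbering, sketched below: each node gets (left, right) bounds from a depth-first traversal, and descendant tests become range predicates that an ordinary SQL index can serve. This illustrates the general idea; the paper's word-based numbering scheme differs in detail.

```python
def number_tree(tree, root):
    """tree: {node: [children]}; assign a (left, right) interval to each node."""
    out, counter = {}, [0]
    def visit(node):
        counter[0] += 1
        left = counter[0]
        for child in tree.get(node, []):
            visit(child)
        counter[0] += 1
        out[node] = (left, counter[0])
    visit(root)
    return out

tree = {"concepts": ["animal", "plant"], "animal": ["dog", "cat"]}
iv = number_tree(tree, "concepts")

def is_descendant(a, b):
    # a is a descendant of b iff a's interval lies strictly inside b's.
    return iv[b][0] < iv[a][0] and iv[a][1] < iv[b][1]

print(is_descendant("dog", "concepts"), is_descendant("plant", "animal"))
# -> True False
```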
To provide students with the opportunity to synthesize the knowledge and skills acquired in their prior courses into one final project, IT capstone projects have become an essential part of the IT curriculum. This paper presents the successes of and challenges faced by the student groups, drawing on a post-project survey and student self-reflection. Effective communication, strong leadership, and the match-up of individual strengths and team roles emerged as the major factors contributing to team success. The challenges include the size of the project team, the limited time to complete a comprehensive IT project, and the amount of effort spent on documentation. The intent of this paper is to develop an in-depth understanding of IT capstone projects in order to discover better approaches to enhancing the student learning experience and improving teaching effectiveness in future capstone project courses.
['Chi Zhang', 'Ju An Wang']
Effects of communication, leadership, and team performance on successful IT capstone projects: a case study
178,694
In this paper, we target four specific recommendation tasks in the academic environment: recommendation of author coauthorships, paper citation recommendation for authors, paper citation recommendation for papers, and publishing venue recommendation for author-paper pairs. Unlike previous work, which tackles each of these tasks separately while neglecting their mutual effect and connection, we propose a joint multi-relational model that can exploit the latent correlation between relations and solve several tasks in a unified way. Moreover, for better ranking, we extend work that maximizes MAP over one single tensor and make it applicable to maximizing MAP over multiple matrices and tensors. Experiments conducted on two real-world data sets demonstrate the effectiveness of our model: 1) improved performance can be achieved with joint modeling over multiple relations; and 2) our model can outperform three state-of-the-art algorithms on several tasks.
['Zaihan Yang', 'Dawei Yin', 'Brian D. Davison']
Recommendation in Academia: A joint multi-relational model
96,240
The calibration of the Soil Moisture and Ocean Salinity (SMOS) payload instrument, known as Microwave Imaging Radiometer by Aperture Synthesis (MIRAS), is based on characterization measurements which are performed initially on-ground prior to launch and, subsequently, in-flight. A good calibration is a prerequisite to ensure the quality of the geophysical data. The calibration scheme encompasses both the spaceborne instrument and the ground data processing. Once the system has been calibrated, the instrument performance can be verified, and the higher level geophysical variables, soil moisture and ocean salinity, can be validated. In this paper, the overall calibration approach is presented, focusing on the main aspects relevant to the SMOS instrument design and mission requirements. The distributed instrument, comprising 72 receivers, leads to a distributed internal calibration approach supported by specific external calibration measurements. The relationship between the calibration data and the routine ground processing is summarized, demonstrating the inherent link between them. Finally, the approach to the in-flight commissioning activities is discussed.
['M.A. Brown', 'F. Torres', 'Ignasi Corbella', 'Andreas Colliander']
SMOS Calibration
666,312
Fusion of a functional image with an anatomical image provides additional diagnostic information and is widely used in diagnosis, treatment planning, and follow-up in oncology. The functional image is a low-resolution pseudo-color image representing the uptake of a radioactive tracer, which conveys the important metabolic information, whereas the anatomical image is a high-resolution gray-scale image that gives structural details. The fused image should contain all the anatomical details without any changes to the functional content. This is achieved through fusion in a de-correlated color model, and the choice of color model has a great impact on the fusion outcome. In the present work, the suitability of different color models for functional and anatomical image fusion is studied. After converting the functional image into a de-correlated color model, the achromatic component of the functional image is fused with the anatomical image using the proposed nonsubsampled shearlet transform (NSST) based image fusion algorithm, yielding a new achromatic component with all the anatomical details. This new achromatic component and the original chromatic channels of the functional image are converted to RGB format to obtain the fused functional and anatomical image. Fusion is performed in different color models, and different cases of SPECT-MRI images are used for this color model study. Based on visual and quantitative analysis of the fused images, the best color model for the stated purpose is determined.
['Padma Ganasala', 'Vinod Kumar', 'A. D. Prasad']
Performance Evaluation of Color Models in the Fusion of Functional and Anatomical Images
697,018
The Choquet integral is a powerful aggregation operator which lists many well-known models as its special cases. We look at these special cases and provide their axiomatic analysis. In cases where an axiomatization has been previously given in the literature, we connect the existing results with the framework that we have developed. Next we turn to the question of learning, which is especially important for the practical applications of the model. So far, learning of the Choquet integral has been mostly confined to the learning of the capacity. Such an approach requires making a powerful assumption that all dimensions (e.g. criteria) are evaluated on the same scale, which is rarely justified in practice. Too often categorical data is given arbitrary numerical labels (e.g. AHP), and numerical data is considered cardinally and ordinally commensurate, sometimes after a simple normalization. Such approaches clearly lack scientific rigour, and yet they are commonly seen in all kinds of applications. We discuss the pros and cons of making such an assumption and look at the consequences which axiomatization uniqueness results have for the learning problems. Finally, we review some of the applications of the Choquet integral in decision analysis. Apart from MCDA, which is the main area of interest for our results, we also discuss how the model can be interpreted in the social choice context. We look in detail at the state-dependent utility, and show how comonotonicity, central to the previous axiomatizations, actually implies state-independency in the Choquet integral model. We also discuss the conditions required to have a meaningful state-dependent utility representation and show the novelty of our results compared to the previous methods of building state-dependent models.
['Mikhail Timonin']
Choquet integral in decision analysis - lessons from the axiomatization
948,394
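For concreteness, the discrete Choquet integral itself is a short computation over sorted inputs, as this sketch shows; the capacity here is an arbitrary toy example, and the paper's axiomatic content is of course not captured by code.

```python
def choquet(x, capacity):
    """x: dict {criterion: value}; capacity: dict {frozenset: weight in [0, 1]}.
    Standard telescoping form: sum over ascending values of
    (value increment) * capacity(set of criteria still at or above it)."""
    items = sorted(x.items(), key=lambda kv: kv[1])   # ascending values
    total, prev = 0.0, 0.0
    remaining = set(x)
    for crit, val in items:
        total += (val - prev) * capacity[frozenset(remaining)]
        prev = val
        remaining.discard(crit)
    return total

cap = {frozenset({"a", "b"}): 1.0, frozenset({"a"}): 0.3,
       frozenset({"b"}): 0.6, frozenset(): 0.0}
print(choquet({"a": 0.8, "b": 0.2}, cap))   # 0.2*1.0 + 0.6*0.3 = 0.38
```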
Motivation: Microarrays are capable of determining the expression levels of thousands of genes simultaneously. In combination with classification methods, this technology can be useful to support clinical management decisions for individual patients, e.g. in oncology. The aim of this paper is to systematically benchmark the role of non-linear versus linear techniques and dimensionality reduction methods.

Results: A systematic benchmarking study is performed by comparing linear versions of standard classification and dimensionality reduction techniques with their non-linear versions based on non-linear kernel functions with a radial basis function (RBF) kernel. A total of 9 binary cancer classification problems, derived from 7 publicly available microarray datasets, and 20 randomizations of each problem are examined.

Conclusions: Three main conclusions can be formulated based on the performances on independent test sets. (1) When performing classification with least squares support vector machines (LS-SVMs) (without dimensionality reduction), RBF kernels can be used without risking too much overfitting. The results obtained with well-tuned RBF kernels are never worse and sometimes even statistically significantly better compared to results obtained with a linear kernel in terms of test set receiver operating characteristic and test set accuracy performances. (2) Even for classification with linear classifiers like LS-SVM with linear kernel, using regularization is very important. (3) When performing kernel principal component analysis (kernel PCA) before classification, using an RBF kernel for kernel PCA tends to result in overfitting, especially when using supervised feature selection. It has been observed that an optimal selection of a large number of features is often an indication of overfitting. Kernel PCA with linear kernel gives better results.

Availability: Matlab scripts are available on request.

Supplementary information: http://www.esat.kuleuven.ac.be/~npochet/Bioinformatics/
['Nathalie Pochet', 'Frank De Smet', 'Johan A. K. Suykens', 'Bart De Moor']
Systematic benchmarking of microarray data classification: assessing the role of non-linearity and dimensionality reduction
278,200
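The benchmarking protocol above can be sketched with off-the-shelf tools: compare a linear and an RBF-kernel classifier on the same high-dimensional, few-sample binary task with cross-validated ROC AUC. sklearn's SVC stands in for LS-SVMs here, and synthetic data stands in for microarray data; the study's tuning procedure and kernel-PCA pipelines are not reproduced.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# High-dimensional, few-sample data mimicking a microarray classification task.
X, y = make_classification(n_samples=80, n_features=2000, n_informative=20,
                           random_state=0)
for kernel in ("linear", "rbf"):
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel, C=1.0))
    scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print(kernel, scores.mean().round(3))
```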
Regrasping must be performed whenever a robot's grasp of an object is not compatible with the pick-and-place operation the robot must perform. This paper presents a new approach to the problem of regrasping for a robot arm equipped with a parallel-jaw end-effector. The method employs an evaluated breadth-first search in the space of compatible regrasp operations, taking into account several criteria rating grasp and placement quality. We pay particular attention to online computational efficiency. The presented system is the first planning system to perform as much of the computation as possible offline in order to solve the regrasp problem efficiently online.
['Frank Röhrdanz', 'Friedrich M. Wahl']
Generating and evaluating regrasp operations
450,432
Solutions of least squares support vector machines (LS-SVMs) are typically nonsparse. The sparseness is imposed by subsequently omitting data that introduce the smallest training errors and retraining the remaining data. Iterative retraining requires more intensive computations than training a single nonsparse LS-SVM. In this paper, we propose a new pruning algorithm for sparse LS-SVMs: the sequential minimal optimization (SMO) method is introduced into pruning process; in addition, instead of determining the pruning points by errors, we omit the data points that will introduce minimum changes to a dual objective function. This new criterion is computationally efficient. The effectiveness of the proposed method in terms of computational cost and classification accuracy is demonstrated by numerical experiments.
['Xiang-Yan Zeng', 'Xue-wen Chen']
SMO-based pruning methods for sparse least squares support vector machines
452,876
The proliferation of smartphones has enabled many crowd-assisted applications, including audio-based sensing. In such applications, detected sound sources are meaningless without location information. However, it is challenging to localize sound sources accurately in a crowd using only the microphones integrated in smartphones, without existing infrastructure such as dedicated microphone sensor systems. The main reason is that a smartphone is a nondeterministic platform that produces large and unpredictable variance in data measurements. Most existing localization methods are deterministic algorithms that are ill suited or cannot be applied to sound source localization using only smartphones. In this paper, we propose a distributed localization scheme using nondeterministic algorithms. We use the multiple possible outcomes of nondeterministic algorithms to weed out the effect of outliers in data measurements and improve the accuracy of sound localization. We then propose to optimize the cost function using least absolute deviations rather than ordinary least squares to lessen the influence of the outliers. To evaluate our proposal, we conduct a testbed experiment with a set of 16 Android devices and 9 sound sources. The experiment results show that our nondeterministic localization algorithm achieves a root mean square error (RMSE) of 1.19 m, which is close to the Cramer-Rao bound (0.8 m). Meanwhile, the best RMSE of the compared deterministic algorithms is 2.64 m.
['Duc V. Le', 'Jacob W. Kamminga', 'Hans Scholten', 'Paul J. M. Havinga']
Nondeterministic sound source localization with smartphones in crowdsensing
716,751
In this paper, we discuss the evaluation of blind audio source separation (BASS) algorithms. Depending on the exact application, different distortions can be allowed between an estimated source and the wanted true source. We consider four different sets of such allowed distortions, from time-invariant gains to time-varying filters. In each case, we decompose the estimated source into a true source part plus error terms corresponding to interferences, additive noise, and algorithmic artifacts. Then, we derive a global performance measure using an energy ratio, plus a separate performance measure for each error term. These measures are computed and discussed on the results of several BASS problems with various difficulty levels
['Emmanuel Vincent', 'Rémi Gribonval', 'Cédric Févotte']
Performance measurement in blind audio source separation
99,256
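The simplest variant of the decomposition described above, where only a time-invariant gain is allowed, amounts to projecting the estimate onto the true source and reporting an energy ratio in dB, as sketched here; the fuller variants that further separate interference, noise and artifacts are not reproduced.

```python
import numpy as np

def sdr_gain_allowed(estimate, source):
    # Allowed distortion = scaling: project the estimate onto the true source.
    gain = np.dot(estimate, source) / np.dot(source, source)
    target = gain * source                 # allowed-distortion part
    error = estimate - target              # everything else (errors of all kinds)
    return 10 * np.log10(np.sum(target ** 2) / np.sum(error ** 2))

rng = np.random.default_rng(0)
s = rng.standard_normal(16_000)
est = 0.9 * s + 0.05 * rng.standard_normal(16_000)   # slightly noisy estimate
print(f"{sdr_gain_allowed(est, s):.1f} dB")
```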
['Lydie Edward', 'Domitile Lourdeaux', 'Jean-Paul A. Barthès']
Simulating the Behavior of Autonomous Agents: A Cognitive Architecture Integrating Physical, Physiological and Personality Factors.
792,273
Physical viability, in particular energy efficiency, is a key challenge in realizing the true potential of Deep Neural Networks (DNNs). In this paper, we aim to incorporate the energy dimension as a design parameter in the higher-level hierarchy of DNN training and execution to optimize for the energy resources and constraints. We use energy characterization to bound the network size in accordance to the pertinent physical resources. An automated customization methodology is proposed to adaptively conform the DNN configurations to the underlying hardware characteristics while minimally affecting the inference accuracy. The key to our approach is a new context and resource aware projection of data to a lower-dimensional embedding by which learning the correlation between data samples requires significantly smaller number of neurons. We leverage the performance gain achieved as a result of the data projection to enable the training of different DNN architectures which can be aggregated together to further boost the inference accuracy. Accompanying APIs are provided to facilitate rapid prototyping of an arbitrary DNN application customized to the underlying platform. Proof-of-concept evaluations for deployment of different visual, audio, and smart-sensing benchmarks demonstrate up to 100-fold energy improvement compared to the prior-art DL solutions.
['Bita Darvish Rouhani', 'Azalia Mirhoseini', 'Farinaz Koushanfar']
DeLight: Adding Energy Dimension To Deep Neural Networks
849,698
Software security has become more and more critical as we increasingly depend on the Internet, an untrustworthy computing environment. Software functionality and security are tightly related to each other: vulnerabilities due to design errors, inconsistencies, incompleteness, and missing constraints in system specifications can be exploited by security attacks. These two concerns, however, are often handled separately. In this paper we present a threat-driven approach that improves the quality of software through the realization of a more secure functional model. The approach introduces systematic transformation rules and integration steps for mapping attack tree representations into lower-level dynamic behavior, and then integrates this behavior into statechart-based functional models. Through the focus on both functional and threat behavior, software engineers can introduce, clearly define and understand security concerns as software is designed. To identify vulnerabilities, our approach then applies security analysis and threat identification to the integrated model.
['Omar El Ariss', 'Jianfei Wu', 'Dianxiang Xu']
Towards an Enhanced Design Level Security: Integrating Attack Trees with Statecharts
523,103
Background: Detecting and visualizing nonlinear interaction effects of single nucleotide polymorphisms (SNPs), or epistatic interactions, is an important topic in bioinformatics, since such interactions play an important role in unraveling the mystery of "missing heritability". However, related studies are mostly limited to pairwise epistatic interactions due to their methodological and computational challenges.
['Junliang Shang', 'Yingxia Sun', 'Jin-Xing Liu', 'Junfeng Xia', 'Junying Zhang', 'Chun-Hou Zheng']
CINOEDV: a co-information based method for detecting and visualizing n-order epistatic interactions.
805,457
We propose a novel mixtures of Gaussian processes model in which the gating function is interconnected with a probabilistic logical model, in our case Markov logic networks. In this way, the resulting mixed graphical model, called Markov logic mixtures of Gaussian processes (MLxGP), solves joint Bayesian non-parametric regression and probabilistic relational inference tasks. In turn, MLxGP facilitates novel, interesting tasks such as regression based on logical constraints or drawing probabilistic logical conclusions about regression data, thus putting “machines reading regression data” in reach.
['Martin Schiegg', 'Marion Neumann', 'Kristian Kersting']
Markov Logic Mixtures of Gaussian Processes: Towards Machines Reading Regression Data
566,687
['Evelina Giacchi', 'Aurelio La Corte', 'Eleonora Di Pietro']
A Dynamic and Context-aware Model of Knowledge Transfer and Learning using a Decision Making Perspective
730,736
With the emergence of medical IoT devices, it becomes feasible to collect and analyze medical contexts to assess health conditions, i.e. to provide personal healthcare services. The toilet is a common device that people use regularly almost every day. Hence, we develop a toilet-based personal healthcare system with medical IoT devices, the Rainbow Toilet System. It is mainly used to collect medical contexts using IoT devices and to analyze those contexts to compute various health indexes.
['Moon Kwon Kim', 'Han Ter Jung', 'Soo Dong Kim', 'Hyun Jung La']
A Personal Health Index System with IoT Devices
966,684
The Atlanta Fire Rescue Department (AFRD), like many municipal fire departments, actively works to reduce fire risk by inspecting commercial properties for potential hazards and fire code violations. However, AFRD's fire inspection practices relied on tradition and intuition, with no existing data-driven process for prioritizing fire inspections or identifying new properties requiring inspection. In collaboration with AFRD, we developed the Firebird framework to help municipal fire departments identify and prioritize commercial property fire inspections, using machine learning, geocoding, and information visualization. Firebird computes fire risk scores for over 5,000 buildings in the city, with true positive rates of up to 71% in predicting fires. It has identified 6,096 new potential commercial properties to inspect, based on AFRD's criteria for inspection. Furthermore, through an interactive map, Firebird integrates and visualizes fire incidents, property information and risk scores to help AFRD make informed decisions about fire inspections. Firebird has already begun to make positive impact at both local and national levels. It is improving AFRD's inspection processes and Atlanta residents' safety, and was highlighted by National Fire Protection Association (NFPA) as a best practice for using data to inform fire inspections.
['Michael A. Madaio', 'Shang-Tse Chen', 'Oliver L. Haimson', 'Wenwen Zhang', 'Xiang Cheng', 'Matthew Hinds-Aldrich', 'Duen Horng Chau', 'Bistra N. Dilkina']
Firebird: Predicting Fire Risk and Prioritizing Fire Inspections in Atlanta
656,799
['Linas Laibinis', 'Elena Troubitsyna']
A Contract-Based Approach to Ensuring Component Interoperability in Event-B.
848,020
['Ioannis Caragiannis', 'Evi Papaioannou', 'Christos Kaklamanis']
Online Call Admission Control in Wireless Cellular Networks.
767,079
The authors report on the use of the codebook-excited linear-predictive (CELP) algorithm for 32 kb/s low-delay (LD-CELP) coding of wideband speech. The main problem associated with wideband coding, namely spectral noise weighting, is discussed. The authors propose an enhanced noise weighting technique and demonstrate its efficiency via subjective listening tests. In these tests, involving 20 listeners and 8 test sentences, the average rating for the proposed 32 kb/s LD-CELP was essentially equal to that of the 64 kb/s standard (G.722) CCITT wideband coder.
['Erik Ordentlich', 'Yair Shoham']
Low-delay code-excited linear-predictive coding of wideband speech at 32 kbps
148,939
This paper investigates the joint source-channel coding problem of sending a memoryless source over a memoryless degraded broadcast channel. An inner bound and an outer bound on the achievable distortion region are derived, which respectively generalize and unify several existing bounds. Moreover, when specialized to Gaussian source broadcast or binary source broadcast, the inner bound and outer bound could recover the best known inner bound and outer bound in the literature. Besides, the inner bound and outer bound are also extended to Wyner-Ziv source broadcast problem, i.e., source broadcast with degraded side information available at decoders. Some new bounds are obtained when specialized to Wyner-Ziv Gaussian case and Wyner-Ziv binary case.
['Lei Yu', 'Houqiang Li', 'Weiping Li']
Distortion bounds for source broadcast over degraded channel
729,183
In this paper we present an efficient discretization method for the solution of the unsteady incompressible Navier-Stokes equations based on a high-order (Hybrid) Discontinuous Galerkin formulation. The crucial component for the efficiency of the discretization method is the distinction between stiff linear parts and less stiff non-linear parts with respect to their temporal and spatial treatment. Exploiting the flexibility of operator-splitting time integration schemes, we combine two spatial discretizations which are tailored for two simpler sub-problems: a corresponding hyperbolic transport problem and an unsteady Stokes problem. For the hyperbolic transport problem, a spatial discretization with an Upwind Discontinuous Galerkin method and an explicit treatment in the time integration scheme is rather natural and allows for an efficient implementation. The treatment of the Stokes part involves the solution of linear systems. In this case a discretization with Hybrid Discontinuous Galerkin methods is better suited. We consider such a discretization for the Stokes part with two important features: H(div)-conforming finite elements to guarantee exactly divergence-free velocity solutions, and a projection operator which reduces the number of globally coupled unknowns. We present the method, discuss implementational aspects and demonstrate the performance on two- and three-dimensional benchmark problems.
['Christoph Lehrenfeld', 'Joachim Schöberl']
High order exactly divergence-free Hybrid Discontinuous Galerkin Methods for unsteady incompressible flows
578,714
The physical constraints of smartwatches limit the range and complexity of tasks that can be completed. Despite interface improvements on smartwatches, the promise of enabling productive work remains largely unrealized. This paper presents WearWrite, a system that enables users to write documents from their smartwatches by leveraging a crowd to help translate their ideas into text. WearWrite users dictate tasks, respond to questions, and receive notifications of major edits on their watch. Using a dynamic task queue, the crowd receives tasks issued by the watch user and generic tasks from the system. In a week-long study with seven smartwatch users supported by approximately 29 crowd workers each, we validate that it is possible to manage the crowd writing process from a watch. Watch users captured new ideas as they came to mind and managed a crowd during spare moments while going about their daily routine. WearWrite represents a new approach to getting work done from wearables using the crowd.
['Michael Nebeling', 'Alexandra To', 'Anhong Guo', 'Adrian A. de Freitas', 'Jaime Teevan', 'Steven P. Dow', 'Jeffrey P. Bigham']
WearWrite: Crowd-Assisted Writing from Smartwatches
739,384
Inductive Logic Programming (ILP) is a combination of inductive learning and first-order logic that aims to learn first-order hypotheses from training examples. ILP has a serious bottleneck in its intractably enormous hypothesis search space. This makes existing approaches perform poorly on large-scale real-world datasets. In this research, we propose a technique that lets the system handle an enormous search space efficiently by incorporating qualitative information into the search heuristics. Currently, heuristic functions used in ILP systems are based only on quantitative information, e.g. the number of examples covered and the length of candidates. We focus on a kind of data consisting of several parts. The approach aims to find hypotheses describing each class by using both individual and relational features of the parts. Such data can be found in the notation of chemical compound structures for Structure-Activity Relationship (SAR) studies. We apply the proposed method to extract rules describing chemical activity from compound structures. The experiments are conducted on a real-world dataset, and the results are compared to existing ILP methods using ten-fold cross-validation.
['Cholwich Nattee', 'Sukree Sinthupinyo', 'Masayuki Numao', 'Takashi Okada']
Inductive Logic Programming for Structure-Activity Relationship Studies on Large Scale Data
458,075
Web-based open source software development (OSSD) project communities provide interesting and unique opportunities for software process modeling and simulation. While most studies focus on analyzing processes in a single organization, we focus on modeling software development processes both within and across three distinct but related OSSD project communities: Mozilla, a Web artifact consumer; the Apache HTTP server that handles the transactions of Web artifacts to consumers such as the Mozilla browser; and NetBeans, a Java-based integrated development environment (IDE) for creating Web artifacts and application systems. In this article, we look at the process relationships within and between these communities as components of a Web information infrastructure. We employ expressive and comparative techniques for modeling such processes that facilitate and enhance understanding of the software development techniques utilized by their respective communities and the collective infrastructure in creating them. Copyright © 2005 John Wiley & Sons, Ltd.
['Chris Jensen', 'Walt Scacchi']
Process modeling across the web information infrastructure
216,462
In Internet of Things (IoT) applications, cooperative spectrum sensing in cognitive radio sensor networks (CRSN) may use electronic objects or sensor nodes to cooperatively detect the spectrum of the primary user. How to effectively fuse the local detection data and make a global decision is critical for CRSN. In order to improve the detection accuracy of CRSN, we propose a cooperative spectrum sensing scheme based on side information; the scheme uses a cooperative spectrum sensing framework and an efficient clustering algorithm that can meet the specific requirements of IoT applications. Through mathematical modelling, minimising the missed-detection probability is converted into clustering nodes. The clustering algorithm is used to find the optimal distance and localisation of nodes. Simulation results show that the proposed cooperative spectrum sensing scheme performs better than conventional cognitive radio models such as equal gain combining and the maximal ratio combining algorithm. In addition, the influence of sensor node density and distance range on the sensing performance is also analysed.
['Wenjing Yue', 'Cong Wu', 'Zhi Chen']
Cooperative spectrum sensing based on side information for cognitive radio sensor networks in internet of things applications
899,723
In traditional B2C commerce, customers usually control their own stocks, while the supplier only needs to meet customers' requests by sending the right commodity on time. This mode of operation hinders cost saving and efficiency improvement. As one of the key components of vendor-managed inventory, the inventory routing problem helps the supplier lower operation cost, improve efficiency and increase customer satisfaction. In this paper, a heuristic algorithm is designed to solve the stochastic inventory routing problem. The algorithm first converts customers' stochastic demands into regular demands, then calculates each customer's distribution period with a modified EOQ model. Finally, an approximate solution of the stochastic inventory routing problem is obtained by solving the resulting periodic vehicle routing problem.
['Xie Binglei', 'An Shi', 'Wang Jian']
Stochastic inventory routing problem under B2C e-commerce
41,644
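A minimal sketch of the period computation described above, assuming the stochastic demand has already been converted to its mean rate; the paper's modified EOQ is not specified here, so the classic formula Q* = sqrt(2DK/h) is used as a stand-in:

```python
from math import sqrt

def eoq_period(mean_demand_rate, ordering_cost, holding_cost):
    """Classic EOQ: Q* = sqrt(2*D*K/h); the distribution period is
    then T = Q*/D. D is the demand rate obtained by regularizing
    the stochastic demand, mirroring the conversion step above."""
    q_star = sqrt(2.0 * mean_demand_rate * ordering_cost / holding_cost)
    return q_star / mean_demand_rate

# Example: 20 units/day demand, 50 per delivery, 0.4/unit/day holding.
print(f"deliver roughly every {eoq_period(20, 50, 0.4):.1f} days")  # ~3.5
```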
The behavior and strategy of computer (COM) players have recently attracted considerable attention with regard to video games, with the development of hardware and the spread of entertainment on the Internet. Previous studies have reported strategy-acquisition schemes for board games and fighting games. However, there have been few studies dealing with schemes applicable to video Trading Card Games (video TCGs). We present an automatic strategy-acquisition system for video TCGs. The proposed system uses a sampling technique, an Action predictor, and a State value function to obtain a rational strategy from many unobservable variables in a large state space. Computer simulations, in which our agent played against a rule-based agent, showed that a COM player with the proposed strategy-acquisition system becomes stronger and more adaptable to an opponent's strategy.
['Nobuto Fujii', 'Mitsuyo Hashida', 'Haruhiro Katayose']
Strategy-acquisition system for video trading card game
483,190
A single-user state-dependent channel with mismatched decoding is considered. Several setups are studied, which differ in the manner in which the state information is available to the encoder (causally or non-causally), and in whether or not the decoder is cognizant of the state sequence. We present achievable rates for these channels based on random coding and random binning, and we also examine special cases. In the non-causal case (the mismatched Gel'fand-Pinsker channel) we introduce a layered binning scheme and calculate the corresponding achievable rate.
['Yafit Feldman', 'Anelia Somekh-Baruch']
Channels with state information and mismatched decoding
921,445
In order to automatically generate high-quality game levels, one needs to be able to automatically verify that the levels are playable. The simulation-based approach to playability testing uses an artificial agent to play through the level, but building such an agent is not always an easy task, and such an agent is not always readily available. We discuss this problem in the context of the physics-based puzzle game Cut the Rope, which features continuous time and state space, making approaches such as exhaustive search and reactive agents inefficient. We show that a deliberative Prolog-based agent can be used to suggest all sensible moves at each state, which allows us to restrict the search space so that depth-first search for solutions becomes viable. This agent is successfully used to test playability in Ropossum, a level generator based on grammatical evolution. The method proposed in this paper is likely to be useful for a large variety of games with similar characteristics.
['Mohammad Hassan Farshbaf Shaker', 'Noor Shaker', 'Julian Togelius']
Evolving playable content for cut the rope through a simulation-based approach
573,122
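A minimal sketch of the restricted depth-first search described above, with the deliberative agent abstracted as a suggest_moves callback; all names are hypothetical placeholders for the actual game model:

```python
def dfs_solve(state, suggest_moves, apply_move, is_solved,
              depth=0, max_depth=30):
    """Depth-first search whose branching is limited to the moves a
    deliberative agent proposes, rather than the full continuous
    action space."""
    if is_solved(state):
        return []
    if depth >= max_depth:
        return None
    for move in suggest_moves(state):  # only "sensible" moves
        plan = dfs_solve(apply_move(state, move), suggest_moves,
                         apply_move, is_solved, depth + 1, max_depth)
        if plan is not None:
            return [move] + plan
    return None

# Toy usage: reach 10 from 0 when the "agent" only suggests +3 or +7.
plan = dfs_solve(0,
                 suggest_moves=lambda s: [m for m in (3, 7) if s + m <= 10],
                 apply_move=lambda s, m: s + m,
                 is_solved=lambda s: s == 10)
print(plan)  # [3, 7]
```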
A Sun-like star undergoes cyclic magnetic reversal, shown by field lines colored by the longitudinal magnetic field. Shifts in positive and negative polarity demonstrate large-scale polarity changes in the star. One such cycle is shown, and at the end of the visualization the magnetic poles are reversed. Wreath-like areas in the magnetic field could be the source of sunspots, which are an important area of study. Rendering was performed through the visualization tool VAPOR using the OpenGL interception program GLuRay, which renders with the interactive ray tracer Manta. Visualizations were rendered on TACC Longhorn, an NSF-XD visualization resource.
['Carson Brownlee', 'Benjamin P. Brown', 'John Clyne', 'Chems Touati', 'Kelly P. Gaither', 'Charles D. Hansen']
Stellar magnetism
674,179
The paper explores a generalization of conditional random fields (CRFs) in which binary stochastic hidden units appear between the data and the labels. Hidden-unit CRFs are potentially more powerful than standard CRFs because they can represent nonlinear dependencies at each frame. The hidden units in these models also learn to discover latent distributed structure in the data that improves classification. We derive efficient algorithms for inference and learning in these models by observing that the hidden units are conditionally independent given the data and the labels. Finally, we show that hidden-unit CRFs perform well in experiments on a range of tasks, including optical character recognition, text classification, protein structure prediction, and part-of-speech tagging.
['Laurens van der Maaten', 'Max Welling', 'Lawrence K. Saul']
Hidden-Unit Conditional Random Fields
558,497
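The conditional-independence observation means each binary hidden unit can be summed out in closed form, contributing a softplus term per unit to the frame potential. A minimal numpy sketch with illustrative shapes (not the authors' code):

```python
import numpy as np

def frame_potential(x, y_onehot, W, V, b):
    """Per-frame score of a hidden-unit CRF with the binary hidden
    units marginalized out: given data x and label y, each unit j
    contributes log(1 + exp(w_j.x + v_j.y + b_j))."""
    a = W @ x + V @ y_onehot + b          # one activation per hidden unit
    return np.sum(np.logaddexp(0.0, a))   # numerically stable softplus

x = np.random.randn(10)                   # frame features
y = np.eye(4)[2]                          # one-hot label, 4 classes
W, V, b = np.random.randn(8, 10), np.random.randn(8, 4), np.zeros(8)
print(frame_potential(x, y, W, V, b))
```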
We define a framework for search and retrieval tasks using cooperating autonomous agents. The significance of this work is our experimental demonstration that specialising the functionality of these agents can lead to increased efficiency, flexibility and scalability. We describe a model of cooperating autonomous agents with specialisations, as well as the simulation used to demonstrate the model. We frame our demonstration in terms of a search and retrieval task in an unknown environment by simulating multiple specialised autonomous robots. The agents require only the ability for movement, localised sensing and directed communication to perform their task.
['Randall Fletcher', 'Dan Corbett']
A framework for search and retrieval tasks using specialised cooperating autonomous agents
264,028
We describe a formal verification framework and tool implementation, based upon cyclic proofs, for certifying the safe termination of imperative pointer programs with recursive procedures. Our assertions are symbolic heaps in separation logic with user defined inductive predicates; we employ explicit approximations of these predicates as our termination measures. This enables us to extend cyclic proof to programs with procedures by relating these measures across the pre- and postconditions of procedure calls. We provide an implementation of our formal proof system in the Cyclist theorem proving framework, and evaluate its performance on a range of examples drawn from the literature on program termination. Our implementation extends the current state-of-the-art in cyclic proof-based program verification, enabling automatic termination proofs of a larger set of programs than previously possible.
['Reuben N. S. Rowe', 'James Brotherston']
Automatic cyclic termination proofs for recursive procedures in separation logic
963,277
In mobile computing, the location information of objects is an important factor in providing customized services to users. The location-based service (LBS) is one of the technologies providing services adaptively according to changing user position. The filtering technology employed in the typical pub/sub model has some drawbacks, such as useless connections and wasted queue space. In this paper we propose an efficient and dynamic location-based event service that solves these problems. It is achieved by employing a location monitor which dynamically manages the event channel. Performance evaluation shows that the proposed scheme provides much higher performance than the existing scheme when the number of suppliers is larger than six.
['Sun Wook Kim', 'Kyu Bong Cho', 'Seungwok Han', 'Hee Yong Youn']
Efficient and Dynamic Location-based Event Service for Mobile Computing Environments
260,421
This paper describes an experiment investigating interactions in a large forum in order to support students in learning English. Groups of collaborating users form communities that generate a lot of data that can be analyzed. We distinguish different phases of the self-regulated learning process and aim to identify them in learners' activities. We then attempt to recognize patterns in their behavior and, consequently, their roles in a community. Based on this analysis we try to explain the success or failure of a community. We conclude that heterogeneity of members helps a learning community to function.
['Zinayida Petrushyna', 'Milos Kravcik', 'Ralf Klamma']
Learning Analytics for Communities of Lifelong Learners: A Forum Case
390,553
Molecular volume and molecular surface are expressed as a function of topological degree in alkane graphs. This allows not only a straightforward approach to calculating such physicochemical magnitudes but also an interpretation of the role of the local vertex invariant (LOVI), or valence degree δ, as well as the connectivity indices in the prediction of physicochemical properties. The interpretation is based on the concept of molecular accessibility (as introduced by Estrada, J. Phys. Chem. A 2002, 106, 9085), for which precise mathematical definitions are provided.
['Jorge Galvez']
Prediction of molecular volume and surface of alkanes by molecular topology
576,641
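For context, a standard degree-based descriptor of this family is the first-order Randić connectivity index, computed directly from vertex degrees of the hydrogen-suppressed graph; this sketch is illustrative and not taken from the paper:

```python
from math import sqrt

def randic_index(edges):
    """First-order Randic connectivity index: sum over bonds of
    1/sqrt(d_u * d_v), where d is the vertex (valence) degree."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return sum(1.0 / sqrt(deg[u] * deg[v]) for u, v in edges)

# n-butane as a 4-vertex path C1-C2-C3-C4: 2/sqrt(2) + 1/2 ~ 1.914
print(randic_index([(1, 2), (2, 3), (3, 4)]))
```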
RFID is one of the most prominent identification schemes in the field of pervasive systems. Non-line-of-sight capability makes RFID systems a much better choice than competing systems (such as barcodes, magnetic tape, etc.). Since RFID uses a wireless channel for communication with its associated devices, optimal encryption methods are needed to secure the communicated data from adversaries. Several researchers have proposed ultralightweight mutual authentication protocols (UMAPs) to secure RFID systems in a cost-effective manner. Unfortunately, most of the previously proposed UMAPs were later found to be vulnerable to various desynchronization, Denial of Service (DoS), traceability, and full disclosure attacks. In this paper, we present a more sophisticated UMAP that provides Strong Authentication and Strong Integrity (SASI) using a recursive hash function. The proposed protocol incorporates only simple bitwise logical operators (XOR, Rot) and a nontriangular function (the recursive hash) in its design, which can be efficiently implemented in a low-cost passive RFID tag. The performance analysis of the protocol proves its conformance with EPC-C1G2 passive tags. In addition to privacy and security, small chip area (miniaturization) is another design constraint, and a mandatory requirement for a protocol to be considered ultralightweight. We have also proposed and implemented an efficient hardware design of the protocol for EPC-C1G2 tags. Both FPGA and ASIC implementation flows have been adopted. The FPGA flow is primarily used to validate the functionality of the proposed hardware design, whereas the ASIC flow (using a TSMC 0.35 µm library) is used to validate the gate count. To the best of our knowledge, this is the first FPGA and ASIC implementation of any ultralightweight RFID authentication protocol.
['Umar Mujahid', 'Muhammad Najam-ul-Islam', 'Atif Raza Jafri', 'Qurat-ul-Ain', 'M. Ali Shami']
A new ultralightweight RFID mutual authentication protocol: SASI using recursive hash
646,875
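A minimal sketch of the cheap bitwise primitives such ultralightweight protocols are built from, with made-up key and nonce values; the paper's recursive hash and actual message equations are not reproduced here:

```python
def rot(x, n, width=96):
    """Left-rotate a width-bit word by n (mod width) positions, the
    Rot primitive common to SASI-style UMAPs."""
    n %= width
    mask = (1 << width) - 1
    return ((x << n) | (x >> (width - n))) & mask

# Hypothetical message-mixing step in the style of such protocols:
# only XOR and Rot, cheap enough for a passive tag.
k1, k2, n1 = 0x1234, 0xBEEF, 0x0F0F
message = rot(k1 ^ n1, k2 % 96) ^ k2
print(hex(message))
```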
Costa's classic "writing on dirty paper" result establishes that full state pre-cancellation can be attained in the Gel'fand-Pinsker channel with additive state and additive Gaussian noise. This result holds under the assumption that perfect channel knowledge is available at both transmitter and receiver; such an assumption is not valid in wireless scenarios affected by fading and with limited feedback rates. For this reason, we investigate the capacity of the "writing on fast fading dirt" channel, a variation of the "writing on dirty paper" channel in which the state sequence is multiplied by an ergodic fading process unknown at the transmitter. We consider two scenarios: when the fading process is not known at either the transmitter or the receiver, and when it is known only at the receiver. For both cases we derive novel assessments of capacity which clearly indicate the limits of state pre-cancellation in the presence of fast fading.
['Stefano Rini', 'Shlomo Shamai']
On Capacity of the Writing onto Fast Fading Dirt Channel
832,766
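For reference, the clean (non-fading) baseline that the abstract builds on can be stated compactly; the fast-fading results above quantify how far one can get when Costa's choice of auxiliary variable cannot be matched to an unknown fading realization:

```latex
% State S known non-causally at the transmitter, input power P,
% additive Gaussian noise power N. Gel'fand-Pinsker rate:
\[
  C \;=\; \max_{p(u \mid s),\; x(u,s)} \bigl[\, I(U;Y) - I(U;S) \,\bigr].
\]
% Costa's choice U = X + \alpha S with \alpha = P/(P+N) attains
\[
  C \;=\; \tfrac{1}{2}\log_2\!\left(1 + \frac{P}{N}\right),
\]
% i.e. the state is fully pre-cancelled regardless of its power.
```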
['Carlos Ordonez', 'Yiqun Zhang']
Time Complexity and Parallel Speedup to Compute the Gamma Summarization Matrix.
995,553
Extraction-Transformation-Loading (ETL) tools are pieces of software responsible for the extraction of data from several sources, their cleansing, customization and insertion into a data warehouse. In a previous line of research, we presented a conceptual and a logical model for ETL processes. In this paper, we describe the mapping of the conceptual to the logical model. First, we identify how a conceptual entity is mapped to a logical entity. Next, we determine the execution order in the logical workflow using information adapted from the conceptual model. Finally, we provide a methodology for the transition from the conceptual to the logical model.
['Alkis Simitsis']
Mapping conceptual to logical models for ETL processes
495,660
Despite recent efforts in opening up government data, developing tools for taxpayers to make sense of extensive and multi-faceted budget data remains an open challenge. In this paper, we present BudgetMap, an issue-driven classification and navigation interface for the budgets of government programs. Our novel issue-driven approach can complement the traditional budget classification system used by government organizations by reflecting time-evolving public interests. BudgetMap elicits the public to tag government programs with social issues by providing two modes of tagging. User-initiated tagging allows people to voluntarily search for programs of interest and classify each program with related social issues, while system-initiated tagging guides people through possible matches of issues and programs via microtasks. BudgetMap then facilitates visual exploration of the tagged budget data. Our evaluation shows that participants' awareness and understanding of budgetary issues increased after using BudgetMap, while they collaboratively identified issue-budget links with quality comparable to expert-generated links.
['Nam Wook Kim', 'Jonghyuk Jung', 'Eun-Young Ko', 'Song-Yi Han', 'Chang Won Lee', 'Juho Kim', 'Jihee Kim']
BudgetMap: Engaging Taxpayers in the Issue-Driven Classification of a Government Budget
656,907
Steady state visual evoked potentials (SSVEPs) are the brain signals induced by gazing at a constantly flickering target. Frame-based frequency approximation methods can be implemented in order to realize a high number of visual stimuli for SSVEP-based Brain-Computer Interfaces (BCIs) on ordinary computer screens. In this paper, we investigate the possibilities and limitations regarding the number of targets in SSVEP-based BCIs. The BCI performance of seven healthy subjects was evaluated in an online experiment with six differently sized target matrices. Our results confirm previous observations, according to which BCI accuracy and speed depend on the number of simultaneously displayed targets. The peak ITR achieved in the experiment was 130.15 bpm; interestingly, it was achieved with the 15-target matrix. Generally speaking, BCI performance dropped with an increasing number of simultaneously displayed targets. Surprisingly, however, one subject even gained control over a system with 84 flickering targets, achieving an accuracy of 91.30%, which verifies that stimulation frequencies separated by less than 0.1 Hz can still be distinguished from each other.
['Felix Gembler', 'Piotr Stawicki', 'Ivan Volosyak']
Exploring the possibilities and limitations of multitarget SSVEP-based BCI applications
914,432
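ITR figures such as those quoted above are conventionally computed with the Wolpaw formula; a small sketch, where the trial duration is an assumed value since the abstract does not state it:

```python
from math import log2

def wolpaw_itr(n_targets, accuracy, trial_seconds):
    """Wolpaw information transfer rate in bits/min:
    B = log2 N + P log2 P + (1-P) log2((1-P)/(N-1)),
    scaled by the number of selections per minute."""
    p, n = accuracy, n_targets
    if p >= 1.0:
        bits = log2(n)
    else:
        bits = log2(n) + p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))
    return bits * 60.0 / trial_seconds

# e.g. 84 targets at 91.30% accuracy; 3 s per selection is an assumption.
print(f"{wolpaw_itr(84, 0.913, 3.0):.1f} bits/min")
```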
An unsupervised approach based on the Information Bottleneck (IB) principle is proposed for detecting acoustic events in audio streams. In this paper, the IB principle is first concisely presented, and then the practical issues related to applying it to acoustic event detection are described in detail, including the definitions of the variables involved, the criterion for determining the number of acoustic events, the tradeoff between the amount of information preserved and the compression of the initial representation, and the detection steps. Further, we compare the proposed approach with both unsupervised and supervised approaches on four different types of audio files. Experimental results show that the proposed approach achieves lower detection errors and higher running speed than two state-of-the-art unsupervised approaches, and is only slightly inferior to the state-of-the-art supervised approach in terms of both detection errors and runtime. The advantage of the proposed unsupervised approach over the supervised one is that it does not need to pre-train classifiers or know any prior information about the audio streams.
['Yanxiong Li', 'Qin Wang', 'Xianku Li', 'Xue Zhang', 'Yuhan Zhang', 'Aiwu Chen', 'Qianhua He', 'Qian Huang']
Unsupervised detection of acoustic events using information bottleneck principle
972,030
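For readers unfamiliar with the principle, the standard IB objective trades compression of the observations against preservation of information about the relevance variable (here, the acoustic-event identity):

```latex
% T is the compressed representation (cluster assignment) of the
% audio segments X; Y is the relevance variable. The multiplier
% \beta sets the tradeoff discussed in the abstract.
\[
  \min_{p(t \mid x)} \; I(X;T) \;-\; \beta\, I(T;Y)
\]
```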
This paper presents the results obtained by AROMA in its participation in the OAEI campaign. AROMA is an ontology alignment method that makes use of the association paradigm and a statistical interestingness measure, the implication intensity. AROMA performs a post-processing step that includes a terminological matcher; this year we did not modify this matcher.
['Jérôme David']
AROMA results for OAEI 2011
755,787
['Fela Winkelmolen', 'Viviana Mascardi']
Statistical Language Identification of Short Texts.
756,591
This paper analyzes the significance of the representation and reusability of SRML when used in simulation models, as well as its drawbacks. The paper also discusses ways to extend the SRML schema based on DEVS. The emphasis is placed on mapping DEVS onto the SRML schema to formulate SRML's basic syntax and semantics for representing the structure and behavior of atomic and coupled models. The model structure, such as properties, input interfaces, output interfaces and sub-model composition, is described by a group of XML marks. The model behavior, such as the external transition, internal transition, output and time-advance functions, is described by a script language and a group of standard interfaces offered by the SRML simulator in script marks. The paper then reviews the SRML element architecture and finally gives a simulation demo of using SRML to build a differential equation model.
['Chen Liu', 'Qun Li', 'Weiping Wang', 'Yifan Zhu']
Extend SRML schema based on DEVS: an executable DEVS language
152,368
Effective nuclear charges of the main group elements from the second up to the fifth row have been developed for the one-electron part of the spin-orbit (SO) coupling Hamiltonian. These parameters, suitable for SO calculations of large molecular systems, provide a useful and remarkably good approximation to the full SO Hamiltonian. We derived the atomic effective nuclear charges by a fitting procedure, using the computed fine-structure splitting (FSS) of the doublet and triplet Π states of AH species (A is one of the abovementioned elements). We adopted the noniterative scheme, previously reported, according to which SO contributions can be calculated through direct coupling between the Π states. The latter were optimized at the B3LYP level using DZVP basis sets. As surrogates for a large number of possible applications, we have widely employed the empirical parameters to compute Π-state FSSs of diatomic species for which experimental data are available. © 2008 Wiley Periodicals, Inc. J Comput Chem, 2009
['Sandro Chiodo', 'Nino Russo']
One‐electron spin‐orbit contribution by effective nuclear charges
24,025
['Mushfiqul Alam', 'Pranita Patil', 'Martin T. Hagan', 'Damon M. Chandler']
A computational model for predicting local distortion visibility via convolutional neural network trained on natural scenes
668,764
We introduce a framework for construction of non-separable multivariate splines that are geometrically tailored for general sampling lattices. Voronoi splines are B-spline-like elements that inherit the geometry of a sampling lattice from its Voronoi cell and generate a lattice-shift-invariant spline space for approximation in R^d. The spline spaces associated with Voronoi splines have guaranteed approximation order and degree of continuity. By exploiting the geometric properties of Voronoi polytopes and zonotopes, we establish the relationship between Voronoi splines and box splines which are used for a closed-form characterization of the former. For Cartesian lattices, Voronoi splines coincide with tensor-product B-splines and for the 2-D hexagonal lattice, the proposed approach offers a reformulation of hex-splines in terms of multi-box splines. While the construction is for general multidimensional lattices, we particularly characterize bivariate and trivariate Voronoi splines for all 2-D and 3-D lattices and specifically study them for body centered cubic and face centered cubic lattices.
['Mahsa Mirzargar', 'Alireza Entezari']
Voronoi Splines
668,525
It seems clear that online teaching will be a growing proportion of teaching overall, both in the form of completely online courses and blended courses with significantly reduced face-to-face interaction.
['Gregory W. Hislop']
The Inevitability of Teaching Online
262,090
['Carlos Morales', 'Takeshi Oishi', 'Katsushi Ikeuchi']
Turbidity-based aerial perspective rendering for mixed reality.
752,071
This paper concerns the application of feedback in LT codes. The type of feedback considered is acknowledgments, where information on which symbols have been decoded is given to the transmitter. We identify an important adaptive mechanism in standard LT codes, which is crucial to their ability to perform well under any channel conditions. We show how precipitate application of acknowledgments can interfere with this adaptive mechanism and lead to significant performance degradation. Moreover, our analysis reveals that even sensible use of acknowledgments has very little potential in standard LT codes. Motivated by this, we analyze the impact of acknowledgments on multi-layer LT codes, i.e. LT codes with unequal error protection. In this case, feedback proves advantageous. We show that by using only a single feedback message, it is possible to achieve a noticeable performance improvement compared to standard LT codes.
['Jesper Hemming Sørensen', 'Petar Popovski', 'Jan Østergaard']
On the Role of Feedback in LT Codes
48,521
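A minimal sketch of standard LT encoding, which the feedback schemes above modify; the degree distribution here is a toy stand-in for the robust soliton distribution:

```python
import random

def lt_encode_symbol(source, degree_dist, rng=random):
    """Generate one LT-coded symbol: draw a degree d from the degree
    distribution, pick d distinct source symbols uniformly, and XOR
    them. degree_dist is a list of (degree, probability) pairs."""
    r, acc, d = rng.random(), 0.0, degree_dist[-1][0]
    for deg, p in degree_dist:
        acc += p
        if r <= acc:
            d = deg
            break
    chosen = rng.sample(range(len(source)), d)
    value = 0
    for i in chosen:
        value ^= source[i]
    return chosen, value

# Toy example: 8 byte-valued source symbols, a crude degree distribution.
src = [0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88]
print(lt_encode_symbol(src, [(1, 0.1), (2, 0.5), (3, 0.4)]))
```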
As part of its many functions, the reference library is charged with developing both its collection and its user community. These two functions are sometimes pursued as separate initiatives (with separate funding) by library managers. In Australia, the State Library of Queensland (SLQ) is committed to an exciting policy of simultaneous collection development and community engagement by integrating public programs with new media technologies. SLQ’s Mobile Multimedia Laboratory is a purpose-designed portable digital creativity workshop which is made available to communities as a powerful platform to capture and disseminate local digital culture, and also to promote and train community members in information literacy. The Mobile Multimedia Laboratory facility operates in conjunction with SLQ’s Queensland Stories project, an exciting portal for the display and promotion of community co-created multimedia. Together, the Mobile Multimedia Laboratory and the Queensland Stories initiatives allow the SLQ to directly engage with existing and new communities, and also to increase its digital collection with community created content. Not only are both initiatives relatively cost-effective, they have a positive impact upon information literacy within the state.
['Jerry Watkins', 'Angelina Russo']
Developing communities and collections with new media and information literacy
839,240
HYPERchannel [1,3,5,7] is a local network configuration with random access to the global transmission medium and with ‘Carrier Sense’ techniques. Conflicts are resolved by a fixed priority scheduling of retransmissions. This paper presents a performance evaluation of HYPERchannel access protocols for different types of user systems. Analytical results and asymptotic relations are obtained for the throughput of HYPERchannel under different load characteristics. The fixed priority rule for retransmissions discriminates against low priority users; a modification of the access scheme (‘Fair HYPERchannel’) is proposed which compensates for this undesirable effect. The performance of this new protocol version is also evaluated. In a final section, the access schemes of HYPERchannel are compared with similar access protocols.
['Otto Spaniol']
Analysis and performance evaluation of HYPERchannel access protocols
391,621
End users can create mashups which combine various data-intensive services to form new services. The challenging issue of data-intensive service mashup is how to find services among a great number of candidate services while satisfying SLAs. In this paper, the Service-Level Agreement (SLA) consists of two parts, SLA-Q and SLA-T, which indicate the end-to-end QoS and transactional requirements, respectively. The SLA-aware service mashup problem is known to be NP-hard and takes a significant amount of time to solve optimally. Service correlation, comprising both functional and QoS correlation, also exists in the data-intensive service mashup problem. To efficiently solve the data-intensive service mashup problem with service correlation, we propose an approach called GTHFOA-DSMSC (Data-intensive Service Mashup with Service Correlation based on Game Theory and Hybrid Fireworks Optimization Algorithm) which evolves a set of solutions toward the Pareto optimal front. Experimental tests demonstrate the effectiveness of the algorithm.
['Wanchun Yang', 'Chenxi Zhang', 'Bin Mu']
Data-intensive Service Mashup Based on Game Theory and Hybrid Fireworks Optimization Algorithm in the Cloud
600,521
An LMS closed-loop time-delay estimator is presented. It uses the error between two samples of the incoming signal (the difference between the delayed signal and the reference signal passed through a known delay) as a performance index for the estimator. The LMS algorithm adaptively controls the delay so as to minimize the mean square of this error. The controlled delay is implemented using surface acoustic wave devices. Certain design conditions are applied, resulting in a unique minimum for the performance surface. It is shown that the proposed estimator is unbiased and has a small variance if the input signal occupies most of the system bandwidth. In fact, the variance depends on the input noise power and the generalized noise-to-signal power ratio, R''_n(0)/R''_s(0), as well as on the loop gain. The analysis also gives a bound on the loop gain required for convergence of the estimator and predicts its rate of convergence. Computer simulation results show good agreement with the theory.
['Hagit Messer', 'Y. Bar-Ness']
Closed-loop least mean square time-delay estimator
434,513
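A toy discrete-time rendition of the closed-loop idea: adapt the delay by stochastic gradient descent on the squared error, approximating the required signal derivative with a finite difference. Everything here (signals, step size, interpolation) is an illustrative assumption, not the surface-acoustic-wave implementation of the paper:

```python
import numpy as np

def lms_delay_estimate(x, ref, known_ref_delay, mu=0.05, d0=0.0, iters=2000):
    """Adapt a fractional delay d so that x(k - d) matches the
    reference passed through a known delay. The MSE gradient w.r.t.
    d involves the signal derivative, approximated by a central
    difference of the interpolated signal."""
    t = np.arange(len(x), dtype=float)
    d = d0
    for _ in range(iters):
        k = np.random.randint(10, len(x) - 10)
        e = (np.interp(k - d, t, x)
             - np.interp(k - known_ref_delay, t, ref))   # loop error
        deriv = (np.interp(k - d + 0.5, t, x)
                 - np.interp(k - d - 0.5, t, x))          # ~ x'(k - d)
        d += mu * e * deriv                               # descend the MSE
    return d

s = np.sin(0.3 * np.arange(400))
noisy = s + 0.01 * np.random.default_rng(0).standard_normal(400)
print(lms_delay_estimate(noisy, s, known_ref_delay=2.0))  # converges near 2
```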
Induction of predictive models is one of the most frequent data mining tasks. However, for several domains, the available data is unlabeled and the generation of a class label for each instance may have a high cost. An alternative to reduce this cost is the use of active learning, which selects instances according to a criterion of relevance. Diverse sampling strategies for active learning, following different paradigms, can be found in the literature. However, there is no detailed comparison between these strategies and they are usually evaluated for only one classification technique. In this paper, strategies from different paradigms are experimentally compared using different learning algorithms and datasets. Additionally, a multiclass hypothesis space search called SG-multi is proposed and empirically shown to be feasible. Experimental results show the effectiveness of active learning and which classification techniques are more suitable to which sampling strategies.
['Davi Pereira dos Santos', 'André Carlos Ponce Leon Ferreira de Carvalho']
Comparison of Active Learning Strategies and Proposal of a Multiclass Hypothesis Space Search
310,047
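As one concrete instance of the compared strategies, a least-confidence uncertainty sampler; the model and data below are placeholders:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def uncertainty_sample(model, X_pool, batch=5):
    """Least-confidence sampling: query the pool instances whose top
    predicted class probability is lowest."""
    confidence = model.predict_proba(X_pool).max(axis=1)
    return np.argsort(confidence)[:batch]

rng = np.random.default_rng(1)
X_lab, y_lab = rng.standard_normal((20, 4)), rng.integers(0, 3, 20)
X_pool = rng.standard_normal((200, 4))
clf = LogisticRegression(max_iter=500).fit(X_lab, y_lab)
print(uncertainty_sample(clf, X_pool))  # indices to send to the oracle
```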
Pedestrian navigation systems (PNS) using foot-mounted MEMS inertial sensors use zero-velocity updates (ZUPTs) to reduce drift in navigation solutions and to estimate inertial sensor errors. However, it is well known that ZUPTs cannot reduce all errors; in particular, heading error is not observable. Hence, the position estimates tend to drift even when cyclic ZUPTs are applied in the update steps of the Extended Kalman Filter (EKF). This motivates the use of other motion constraints of pedestrian gait, and of any other available information that reduces heading error. In this paper, we exploit two further motion-constraint scenarios of pedestrian gait: (1) walking along straight paths; (2) standing still for a long time. It is observed that these motion constraints (called a "virtual sensor"), though considerably reducing drift in the PNS, still need an absolute heading reference. One common absolute heading sensor is the magnetometer, which senses the Earth's magnetic field so that the true heading angle can be calculated. However, magnetometers are susceptible to magnetic distortions, especially in indoor environments. In this work, an algorithm called magnetic anomaly detection (MAD) and compensation is designed to incorporate only healthy magnetometer data in the EKF update step, to reduce drift in the zero-velocity-updated INS. Experiments are conducted in GPS-denied and magnetically distorted environments to validate the proposed algorithms.
['Muhammad Ilyas', 'Kuk Cho', 'Seung-Ho Baeg', 'Sangdeok Park']
Drift Reduction in Pedestrian Navigation System by Exploiting Motion Constraints and Magnetic Field
863,151
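A simplified illustration of stance detection for triggering ZUPTs, with arbitrarily chosen thresholds; practical detectors (and presumably the paper's) use windowed statistical tests rather than per-sample thresholds:

```python
import numpy as np

def detect_stance(acc, gyro, g=9.81, acc_tol=0.4, gyro_tol=0.3):
    """Flag samples where the accelerometer magnitude stays near
    gravity and the gyroscope magnitude is small; True samples feed
    a zero-velocity pseudo-measurement to the EKF update."""
    acc_ok = np.abs(np.linalg.norm(acc, axis=1) - g) < acc_tol
    gyro_ok = np.linalg.norm(gyro, axis=1) < gyro_tol
    return acc_ok & gyro_ok

acc = np.array([[0.0, 0.1, 9.8], [3.0, 1.0, 12.0]])    # m/s^2
gyro = np.array([[0.01, 0.02, 0.0], [1.5, 0.2, 0.3]])  # rad/s
print(detect_stance(acc, gyro))  # [ True False ]
```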
We propose a low-hardware-overhead mechanism for internal FPGA configuration check and repair. The approach is effective against soft errors in the configuration memory (i.e., errors caused by high-energy radiation, also known as Single Event Upsets). The proposed recovery mechanism occupies fewer hardware resources and has a shorter fault recovery time than the solutions reported so far.
['Uros Legat', 'Anton Biasizzo', 'Franc Novak']
FPGA Soft Error Recovery Mechanism with Small Hardware Overhead
481,294
In this work, we propose a web personalization methodology based on detecting the orientation of the user's navigation. To this end, we track the user's hits and determine whether a user specializes or generalizes his navigation through semantic analysis of the pages in his session window. An algorithm for capturing user session orientation based on a concept taxonomy is proposed. Finally, a method is presented which offers useful recommendations to the user to improve his web search. The experimental outcomes matched our expectations. We believe that our technique could become a useful tool for navigation refinement. Furthermore, this work bridges search engine query refinement and browsing reformulation techniques.
['John D. Garofalakis', 'Theodoula Giannakoudi', 'Danai Vergeti']
Capturing User Session Orientation Based on Semantic Analysis and Concept Taxonomy
483,016
['Francisco José García-Peñalvo', 'Ángel Fidalgo-Blanco', 'María Luisa Sein-Echaluce', 'Miguel Á. Conde']
Cooperative Micro Flip Teaching
845,816