Datasets:
_id (string, length 36) | text (string, 200 to 328k characters) | label (5 classes)
---|---|---|
7b7f37ca-e70e-416d-927f-2fa3db62e3d5 | Food recommendation has become an essential method to help users adopt healthy dietary habits [1]}.
The task of computationally providing food and diet recommendations is challenging, as thousands of food items/ingredients have to be collected, combined in innovative ways, and reasoned over [2]}.
Furthermore, there are many facets to the foods we consume, such as our ethnic identities, socio-demographic backgrounds, and life-long preferences, all of which can inform our perspectives about the foods we choose to consume to lead healthy lives.
Food recommendation can get even more complicated when the food options available to an individual are further constrained because of a group setting
(e.g., the seafood allergy of one family member may preclude recipes containing shrimp from being recommended to the whole group). There is `no one size fits all,' and even dietetic professionals have raised concerns that such varied dimensions need to be incorporated into their food recommendation advice [3]}.
Such varied dimensions create a need to provide food-related explanations that enhance users' trust in recommendations made by food recommender systems, both automatic and human-driven, as users are more likely to follow advice when its reasons are presented in an understandable way.
Although explanations could help users trust recommendations and encourage them to follow good eating habits, the inclusion of explanations in food recommender systems has not yet received the attention it deserves in the available literature [1]}.
Therefore, this work aims to bridge this gap in existing food recommendation systems by providing a semantic model of the explanations required in the complex and ever-expanding food and diet domain.
| i |
0f83b85b-e18d-4055-b5aa-9bf7096c42cb | We introduce and discuss the FEO, which extends the Explanation Ontology [1]} and the FoodKG (a food knowledge graph that uses a variety of food sources) [2]} to model explanations in the food domain, a connection that is lacking in the current literature [3]}.
Our ontology can be classified under the post-hoc wing of Explainable Artificial Intelligence (XAI) and aims to interpret the results of black-box AI recommender systems in a human-understandable manner [4]}, [5]}. Accordingly, using a recommender-system-agnostic model, we aim to retroactively create connections between the system and the recommendation, including modeling user details, such as allergies and likes, system details, such as location and time, and question details, such as parameters.
We then assemble explanations by querying the ontology for different templates of knowledge types, defined using formalizations of explanation types.
We add structure to the auxiliary modeling of the user and the system, which we find to be important components of comprehensive explanations, in order to represent a range of explanation types [1]} (e.g., contextual, contrastive, and counterfactual explanations) and to model food-specific explanations, which would complement personalized, knowledge-based food recommendation applications such as the `Health Coach,' a healthy food recommendation service [7]}.
| i |
61d8f60a-917b-4807-87c2-5b7e5c07f0ae | Prior work has shown that users seek answers and reasoning for nutrition and food questions they might have
[1]}, [2]}, [3]}, [4]}, [5]}, [6]}.
However, users are increasingly concerned with the evidence and reasoning that lead to those claims.
Efforts to apply logic, reasoning, and querying to food and the culinary arts have captured information such as food categorization in FoodOn [7]} and recipes and associated information in RecipeDB [8]}, and have brought together disparate sources of food information [9]}, [10]}.
In the area of food recommendations, many existing approaches recommend recipes based on the recipe content (e.g., ingredients) [5]}, [1]}, [6]}, user behavior history (e.g., eating history) [1]}, [2]}, [3]}, or dietary preferences [4]}, [3]}. However, none of these systems provide the rationale for why a food was recommended, as they rely on black-box deep learning models. Conversely, while there are systems that employ post hoc XAI methods to provide explanations for opaque AI systems [19]}, [20]}, they have not yet been applied in the food recommendation domain.
Our work differs from such previous works because we leverage a greater degree of explicit, semantic information about foods and other related semantically annotated data in generating explanations about recommending a food item or answering specific questions about a particular recommendation.
Additionally, the need to provide more user-centered explanations that help users improve trust and understanding of AI systems, and the information used for recommendations, has been gaining attention [21]} lately. There have been some conceptual frameworks [22]} and ontologies [23]} that attempt to model explanations from an end-user perspective. In our work, we aim to ground these more general-purpose efforts for the food domain.
| w |
13508314-3af5-446f-b353-7d1105b1ce8e | [1]} describe a system based on logical reasoning that monitors users' behaviors and persuades them to follow healthy lifestyles, including recommending suitable food items, with natural language explanations [1]}. Their system performs reasoning to determine whether a user exhibits unhealthy behavior given a food intake input.
The system then generates a persuasion message with explanations using natural language templates [1]}.
Our proposed ontology-based method for generating explanations is complementary to their approach because we provide support for various types of explanations, not just trace-based explanations derived from templates for explaining the reasoner result.
We believe that by supporting different types of explanations, system developers will be able to support more user-friendly interfaces for personalized, consumer-facing applications [4]}.
The primary aim of FEO is to be used in more interactive or conversational food recommendation settings, for example, in a personalized health recommendation app.
| w |
1587047b-085e-41aa-8e80-6481f716f583 | We employ a task-based evaluation [1]} for our ontology using three main competency questions, each aimed at addressing a different explanation type that we attempted to extract from our model in FEO, as detailed in the following section. We have used competency questions as our method of evaluation as they are the accepted standard to “evaluate the ontological commitments that have been made” [2]}.
| m |
f4dc6ca4-ecfd-46cc-aa29-eda892367f86 | As our model endeavors to provide users with explanations and context that are lost in black-box AI models, we chose to evaluate FEO by its ability to provide responses to a subset of important explanation types. In tbl:explanationTable, we have included a list of previously identified explanation types and, for each, a corresponding question whose food-related recommendation might require an explanation.
Post-hoc explanations provide an approximation of the rationales that users might be looking for [1]}, which is what we wanted to tackle with the competency questions.
| m |
62489ef1-231c-48cb-991b-8eb8488137e0 | We support our selection of a subset of explanation types from Table REF for our competency-question evaluation with observations from recent advances in the machine learning community, where there is a focus on methods that generate contrastive and counterfactual explanations [1]}. Moreover, contrastive, counterfactual, and contextual explanations also encompass explanations based on scientific evidence, everyday evidence, and system traces. Therefore, an evaluation using these explanation types would also allow FEO to support other explanation types. Hence, we have completed our initial modeling to allow for contrastive, counterfactual, and contextual explanations and framed the evaluation of our ontology around these explanation types.
| m |
da1f6953-f422-4460-8d0b-a6c422d9755a | We undertook this process against the recommendations generated by the Health Coach Application, which uses machine learning techniques to assess users' dietary needs and provide recommendations [1]}.
| m |
74f18154-445c-46c4-ac77-820d2a63878d | In this paper, we have presented an ontology for modeling food and diet recommendation explanations, which aims to model and then generate explanations specific to the user's context and setting.
FEO is a domain-specific ontology whose domain concepts are abstracted in a manner such that they can be comprehensively exposed to a user in the form of a diverse range of explanations. The class and property relationships that we detailed
enable using simple queries to get explanations that explore many different variables. We strove to maintain the simplicity of the queries in order to ensure that a non-technical user can access explanations just as effectively as a technical user. From the modeling perspective, we found that the food domain would benefit from semantically bound explanations because of the variety of questions that a user might ask and the corresponding variety of explanations that they might require. From the user perspective, we chose the food domain specifically because food and diet are something that a growing number of people are concerned with, and we believe that our ontology can empower users to make informed decisions about their food choices. We plan to continue this work to extend the range of explanations that we can provide and increase accessibility to the tool by incorporating it into a more user-facing recommendation environment.
| d |
b38859e2-3072-4811-b009-8f13df445a9f | Photolithography is one of the most important processes for semiconductor manufacturing. An exemplar photolithography system is shown in Fig. REF. A laser beam that goes through the integrated circuit patterns on the reticle is projected on the wafer so that the patterns are printed onto the wafer. During this process, the stage carrying the wafer (the wafer stage) needs to move steadily and precisely so that the patterns are printed accurately [1]}.
<FIGURE> | i |
42a82b45-2d55-43c8-857a-6e6ae00d0db7 | With the technological development of the semiconductor industry, manufacturers demand more precise performance from the wafer stage. To achieve this goal, researchers have developed many control strategies and applied them to the wafer stage, including iterative learning control (ILC) [1]}, [2]}, [3]}, [4]}, sliding mode control (SMC) [5]}, [6]}, [7]}, [8]}, \(H_\infty \) feedback control [9]}, multi-rate control [10]}, and so on. Among them, SMC has attracted great attention for its simple implementation and robust performance in the presence of uncertainties and external disturbances [11]}, [12]}.
Beyond the basic SMC structure [13]}, many advanced SMC strategies, such as modified reaching laws [14]}, the boundary layer technique [15]}, and the super-twisting algorithm (STA) [16]}, have also been proposed. The STA, which focuses on improving the dynamics of the sliding variables, has been considered one of the most effective approaches to the well-known chattering phenomenon [17]}. It is also robust with respect to bounded uncertainties and disturbances [18]}, [19]}, and has been implemented successfully in practice [18]}, [21]}.
| i |
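As a concrete illustration of the super-twisting algorithm mentioned in the record above, the following is a minimal numerical sketch of the standard STA control law on a scalar sliding variable. It is not the wafer-stage controller of this paper; the gains, the sinusoidal disturbance, and the simple Euler integration are illustrative assumptions.

```python
import numpy as np

def simulate_sta(k1=4.0, k2=3.0, dt=1e-3, T=5.0):
    """Simulate the standard super-twisting law on a scalar sliding variable s
    with dynamics s_dot = u + d(t), where d is a bounded matched disturbance."""
    n = int(T / dt)
    s, v = 1.0, 0.0                                   # initial sliding variable and integral state
    traj = np.zeros(n)
    for i in range(n):
        d = 0.4 * np.sin(2 * np.pi * i * dt)          # illustrative 1 Hz disturbance
        u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v    # continuous part of the STA
        v += -k2 * np.sign(s) * dt                    # discontinuity acts only inside the integral
        s += (u + d) * dt                             # Euler step of the sliding dynamics
        traj[i] = s
    return traj

print("final |s|:", abs(simulate_sta()[-1]))          # driven close to zero despite the disturbance
```

Because the sign function enters only through the integral term, the resulting control signal is continuous, which is why the STA alleviates chattering.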
a30dbe41-0845-4204-b74b-18f570de6865 | To further improve the performance of SMC, fractional-order calculus has been introduced to improve the state dynamics on the sliding surface and has been combined with the STA [1]}, [2]}. Although there are some theoretical studies and successful precedents for the application of the fractional-order super-twisting algorithm (FOSTA) [2]}, [4]}, parametric uncertainties are not always taken into consideration, or their bounds are assumed to be small. When the amplitudes of the uncertainties or disturbances are rather large, the sliding variable in the STA cannot converge to the predefined sliding surface. Instead, it only converges to an uncertainty region around the sliding surface, which inevitably introduces positioning error into the system [5]}. Previous research has tried to reduce the range of the uncertainty region to improve the precision [6]}, but the negative influence of large uncertainties remains.
| i |
a4a45dd6-43db-4077-bdbf-07be5ada5db6 | Aiming to improve the control performance (i.e., high precision and robust performance) in the presence of large model uncertainties and disturbances, a novel adaptive neural network and fractional-order super-twisting algorithm (ANN-FSA) is proposed in this paper. First, we use the radial basis function (RBF) neural network to approximate the uncertainties and disturbances in the system, and a corresponding fractional-order super-twisting controller is designed to compensate for them. The stability of the proposed control strategy is also analyzed. Moreover, to guarantee the global convergence of the closed-loop system, an adaptive law is designed. Finally, we apply the proposed controller to a wafer stage testbed. Experimental results show that the controller performs well and is robust against disturbances.
| i |
fc40adda-f282-413d-b8bc-29f0affdf1e1 | The remainder of this paper is organized as follows: Section provides the model of the wafer stage. Section presents the proposed controller, and the stability analysis of the controller. Section displays the experimental setup and the experimental results with the proposed controller. Finally, Section presents conclusions.
| i |
9ab84482-14ea-46f1-90f7-0358109893dc | The overall structure of our experimental system is depicted in Fig. REF . The control algorithm is programmed in the LabView environment on the host computer. The host computer is connected with the remote controller (PXI 7831, from National Instruments) via Ethernet, so that the control algorithm can be deployed in the LabView Real-Time system in the remote controller. The output of the controller is amplified by an amplifier (TA330, from Trust Automation) and applied to the wafer stage testbed. The position of the moving part of the wafer stage is measured by a laser ranging system (from Keysight), and the measuring results are fed back to the remote controller. The nominal parameters of the wafer stage have been identified as \(\bar{A}=-1.092~s^{-1}\) and \(\bar{B}=3.9124~m/(s^2\cdot A)\) .
| m |
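The record above reports the identified nominal parameters but not the model equation itself. Below is a minimal sketch assuming the common second-order form \(\ddot{x} = \bar{A}\dot{x} + \bar{B}u\), which is consistent with the stated units (with \(u\) the coil current in amperes); the input profile is purely illustrative.

```python
import numpy as np

A_BAR = -1.092     # s^-1, identified nominal parameter
B_BAR = 3.9124     # m/(s^2*A), identified nominal parameter

def step_nominal(x, v, u, dt=1e-3):
    """One Euler step of the assumed nominal model x_ddot = A_BAR*x_dot + B_BAR*u."""
    a = A_BAR * v + B_BAR * u
    return x + v * dt, v + a * dt

# Example: hold a constant 0.1 A input for 0.5 s starting from rest.
x, v = 0.0, 0.0
for _ in range(500):
    x, v = step_nominal(x, v, u=0.1)
print(f"position: {x:.4f} m, velocity: {v:.4f} m/s")
```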
e652aa8a-b9f6-4b10-915a-3e8fc5bcaea5 | We implement the traditional PID controller, the SMC, the advanced FOSTA, and the proposed ANN-FSA on the wafer stage testbed to investigate the effectiveness of the proposed controller. The reference trajectory is shown in Fig. REF. The scan length is set to \(0.04\) m, and the scan velocity is set to \(0.032\) m/s. The parameters of each controller are tuned so that its best performance is achieved. The sampling interval of the experiments is set to 1 ms. For the RBF neural network, the number of hidden nodes is set to 5, and the other parameters are selected as \(c_1=[-3~-1~0~1~3]\), \(c_2=[-7~-3~0~3~7]\), \(b=[50~50~50~50~50]\), \(\rho =0.2\), and the initial value of \(\mathbf {W}\) is set to \(\mathbf {0}\). In the fractional-order super-twisting algorithm, \(\eta \) and \(a\) are selected as \(\frac{1}{2}\), and the other parameters are tuned as \(h_1=500\), \(h_2=30\), \(\alpha _1=0.001\), and \(\alpha _2=175\).
<FIGURE><FIGURE><FIGURE> | r |
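A small sketch of the RBF hidden layer with the parameters listed above (5 nodes, centers \(c_1\) and \(c_2\), widths \(b\), weights initialized to zero). The Gaussian basis form and the two-dimensional input (e.g., tracking error and its derivative) are assumptions, since the exact definitions are not reproduced in the record.

```python
import numpy as np

C = np.array([[-3.0, -1.0, 0.0, 1.0, 3.0],      # centers for the first input dimension (c1)
              [-7.0, -3.0, 0.0, 3.0, 7.0]])     # centers for the second input dimension (c2)
B = np.full(5, 50.0)                            # node widths (b)
RHO = 0.2                                       # adaptation gain used in the weight update law

def rbf_hidden(x):
    """Gaussian hidden-layer outputs h(x) for a 2-dimensional input x (assumed basis form)."""
    dist2 = np.sum((x[:, None] - C) ** 2, axis=0)
    return np.exp(-dist2 / (2.0 * B ** 2))

def approx_disturbance(x, W):
    """Network output W^T h(x), approximating the lumped uncertainty and disturbance."""
    return W @ rbf_hidden(x)

W = np.zeros(5)                                 # weights start at zero, as stated
print(approx_disturbance(np.array([0.01, -0.02]), W))   # 0.0 before any adaptation
```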
4af855ba-1917-4364-b0d1-ab02667d4fd3 | Moreover, to study the robustness of these controllers, with all the parameters kept the same, an extra external sinusoidal disturbance is generated and applied to the system. The amplitude and frequency of the disturbance signal are set to \(0.03\) m (rather large compared with the reference signal) and 1 Hz, respectively. We denote the situation without the extra disturbance as Case 1 and the situation with the additional disturbance as Case 2. The tracking performance in these two cases is shown in Fig. REF and Fig. REF, respectively.
<TABLE> | r |
56d37cd8-83fe-4a3e-9a2a-2d35e7a964f4 | In Fig. REF, we note that all four controllers have large tracking errors when the scanning velocity changes. The peak error of SMC is the largest among the four controllers, at around 50 \(\mu\)m. The tracking error of the proposed ANN-FSA is the smallest, at about 35 \(\mu\)m. We also note that the errors of SMC and ANN-FSA decay faster than that of the PID controller, but that the error of the PID controller is smoother than those of the other two controllers when the tracking errors are small, i.e., closer to zero. Figure REF shows that even in the presence of disturbances, the ANN-FSA achieves the smallest tracking error. Moreover, from Fig. REF and Fig. REF, we note that the tracking error increases when disturbances exist. This means that the PID controller is not as robust as SMC and ANN-FSA. To quantitatively describe the robustness, we calculate the root mean square (RMS) error of each controller in both cases, and the results are displayed in Table REF. We can see that, compared with traditional SMC, the precision of FOSTA is significantly improved. This is due to the introduction of the fractional-order sliding surface and the super-twisting algorithm. Furthermore, the proposed ANN-FSA has the smallest RMS error in both Case 1 and Case 2. From the differences between the RMS errors in Case 1 and Case 2, we can conclude that the SMC and FOSTA strategies are more robust than the PID controller. ANN-FSA remains precise in the presence of large external disturbances. The results and comparison show that the proposed control scheme achieves the best performance in terms of precision and robustness.
| r |
445f6fda-7cc7-4613-b911-4257b1b73aa9 | In this paper, an adaptive neural-network and fractional-order super-twisting algorithm was proposed and applied to a precision motion system. In this way, not only the dynamics of the states on the sliding surface was improved via the super-twisting algorithm, but also unknown model uncertainties and disturbances of the system were well compensated. Moreover, an adaptive law was derived for the neural-network-based controller so that the closed-loop system is globally convergent. Both stability analysis and experimental verification were provided. The comparison results among a PID controller, a conventional SMC, an advanced FOSTA and the proposed ANN-FSA showed that the proposed controller could achieve higher precision and better robustness than conventional controllers.
| d |
d04d3ac9-d3db-4a35-9cc5-52568934052a | The number field sieve is the most efficient method known for solving the integer
factorization problem and the discrete logarithm problem in a finite field,
in the most general case. However, there are many different variants of the number field
sieve, depending on the context. Recently, the Tower Number Field Sieve (TNFS) was
suggested as a novel approach to computing discrete logs in a finite field of
extension degree \(> 1\) . The Extended Tower Number Field Sieve (ExTNFS) is a variant
of TNFS which applies when the extension degree \(n\) is composite, and gives the best known
runtime complexity in the medium characteristic case (see below).
| i |
b888a7a7-6fd5-4e32-89df-a763ea276ff9 | We briefly discuss the asymptotics of number field sieve-type algorithms.
We define the following function:
\(L_{p^n}(\alpha ,c) = \exp ((c+o(1))(\log {p^n})^\alpha (\log {\log {p^n}})^{1-\alpha }).\)
| i |
e3f7862f-da28-4b54-99a4-474b51e0d15c | This function describes the asymptotic complexity of a subexponential function in
\(\log {p^n}\) , which is used to assess the complexity of the number field sieve for
computing discrete logs in \(\mathbb {F}_{p^n}\) . For a given \(p^n\) , there are two important
boundaries, respectively for \(\alpha = 1/3\) and \(\alpha = 2/3\) . We then have 3 cases:
small characteristic, when \(p < L_{p^n}(1/3,\cdot )\) , medium characteristic, when
\(L_{p^n}(1/3,\cdot ) < p < L_{p^n}(2/3,\cdot )\) , and large characteristic,
when \(p > L_{p^n}(2/3,\cdot )\) . This work relates to the medium characteristic case.
| i |
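For concreteness, here is a small sketch that evaluates the L-notation defined above (dropping the o(1) term and fixing c = 1) and classifies a given (p, n) into the three cases. Because the boundaries are asymptotic and the constant c matters, the classification of a concrete field size is only indicative.

```python
import math

def L(p, n, alpha, c=1.0):
    """L_{p^n}(alpha, c) with the o(1) term dropped."""
    logq = n * math.log(p)                      # log(p^n)
    return math.exp(c * logq ** alpha * math.log(logq) ** (1 - alpha))

def characteristic_case(p, n, c=1.0):
    if p < L(p, n, 1 / 3, c):
        return "small"
    if p < L(p, n, 2 / 3, c):
        return "medium"
    return "large"

# Illustrative 128-bit prime-sized p with extension degree 4 (a 512-bit field, as later in the paper).
p = (1 << 127) + 1                              # placeholder value, not the actual prime used
print(characteristic_case(p, 4))                # prints "medium" for these sizes
```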
c1e2cd57-d423-4fbf-838a-94857b91ae4e | The structure of this paper is as follows: In Section 2, we give an overview
of computing discrete logs using ExTNFS. In Section 3, sieving in a 4d box (orthotope)
is described, and we give implementation details. In Section 4, we describe the
descent step in detail. In Section 5, we give details of the record computation
in \(\mathbb {F}_{p^4}\) of 512-bit field size. In Section 6, we conclude and outline possible
future work.
| i |
a5705097-c41c-4e30-ad7a-8b60fe2905b4 | We implemented the key components of the Extended Tower Number Field Sieve and together
with linear algebra components of CADO-NFS demonstrated a total discrete log break
in a finite field \(\mathbb {F}_{p^4}\) of size 512 bits, a new record. This provides another
data point in the evaluation of the security of systems dependent on the intractability
of the discrete log problem in extension fields. Whereas the recent articles
[1]}, [2]} show that asymptotically, sieving in a d-dimensional
sphere is optimal as \(d \rightarrow \infty \) , there seems to be room at lower
dimensions for sieving in an orthotope to remain competitive. We did not optimize our code
particularly well and there are probably further gains in sieving speed possible.
One such idea is to replace our list/sort approach with bucket sieving
[3]}, which would
at least improve the memory footprint (although for our sieving dimensions this was
not a problem). The parameters for sieving were
tuned in an ad-hoc way and a finer examination of optimal parameters would be interesting.
It would be a fairly easy change to adjust the sieving shape to best suit a given
special-\(\mathfrak {q}\) , for rectangular sieving orthotopes, improving the relation yield.
Finally, we did not exploit the common Galois automorphism of the sieving polynomials,
which would have cut the sieving time in half.
The overall timings of the key stages of our computation are shown in
Table REF.
<TABLE> | d |
2ccd93b0-7f49-401c-b8ee-6cf316cb9ade | Neural methods for generating entity embeddings have become the dominant approach to representing entities, with embeddings learned through methods such as pretraining, task-based training, and encoding knowledge graphs [1]}, [2]}, [3]}.
These embeddings can be compared extrinsically by performance on a downstream task, such as entity linking (EL).
However, performance depends on several factors, such as the architecture of the model they are used in and how the data is preprocessed, making direct comparison of the embeddings hard.
| i |
49515884-2caf-4584-b156-05c7eede690a | Another way to compare these embeddings is intrinsically using probing tasks [1]}, [2]}, which have been used to examine entity embeddings for information such as an entity's type, relation to other entities, and factual information [3]}, [4]}, [5]}, [6]}.
These prior studies have often examined only a few methods, and some propose tasks that can only be applied to certain classes of embeddings, such as those produced from a mention of an entity in context.
| i |
9f0a29f7-d76d-4a93-a126-b10c958003c1 | We address these gaps by comparing a wide range of entity embedding methods for semantic information using both probing tasks as well as downstream task performance.
We propose a set of probing tasks derived simply from Wikipedia and DBPedia, which can be applied to any method that produces a single embedding per entity.
We use these to compare eight entity embedding methods based on a diverse set of model architectures, learning objectives, and knowledge sources.
We evaluate how these differences are reflected in performance on predicting information like entity types, relationships, and context words.
We find that type information is extremely well encoded by most methods and that this can lead to inflated performance on other probing tasks.
We propose a method to counteract this and show that it allows a more reliable estimate of the encoded information.
Finally, we evaluate the embeddings on two EL tasks to directly compare their performance when used in different model architectures, identifying some that generalize well across multiple architectures and others that perform particularly well on one task.
| i |
58f8acc2-f63f-43df-919a-0f82dafd1491 | We aim to provide a clear comparison of the strengths and weaknesses of various entity embedding methods and the information they encode to guide future work.
Our probing task datasets, embeddings, and code are available online: https://github.com/AJRunge523/entitylens
| i |
b01f524d-cd9b-49c2-b55f-d6030c20b885 | Many of our embedding methods have been evaluated on EL tasks in prior work, either in a separate model or as full EL models themselves.
However, direct comparison of the impact of the embeddings on EL performance is confounded by differences in the architectures that leverage the embeddings, as well as by difficult-to-reproduce differences in candidate selection, data preprocessing, and other implementation details.
To address this, we evaluate all of our embeddings in a consistent framework, testing them on two standard datasets in three different EL model architectures to directly compare the contribution of the embeddings to performance on the downstream task and how well they perform across different model architectures.
| m |
776298b8-cc2e-4cb1-9315-d6455ccebb64 | We test the embeddings using three EL models on two standard EL datasets, the AIDA-CoNLL 2003 dataset [1]} and the TAC-KBP 2010 dataset [2]}.
Two of our EL models are the CNN and RNN EL models used to generate our task-learned embeddings.
Our third is a transformer model based on the RELIC model of [3]} that encodes a 128-word context window around the entity mention using uncased DistilBERT-base [4]} (https://huggingface.co/distilbert-base-uncased).
We compare the embedding of the CLS token in the final layer to a separate entity embedding for each candidate entity using a weighted cosine similarity.
To compare the impact of the entity embeddings, we replace the candidate document convolution in the CNN model or the randomly initialized embeddings in the RNN and transformer models with the pretrained embeddings during training.
Details about dataset preprocessing, candidate selection, and model training can be found in Appendix .
| m |
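A rough sketch of the transformer scoring step described above: the CLS embedding of a truncated context window is compared against candidate entity embeddings via a weighted cosine similarity. The scalar weight, the 128-token truncation, and the assumption that candidate embeddings are already projected to the encoder's hidden size are illustrative simplifications, not the exact RELIC-style implementation.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
encoder = AutoModel.from_pretrained("distilbert-base-uncased")
weight = torch.nn.Parameter(torch.tensor(10.0))      # learned scaling of the cosine similarity (assumed form)

def score_candidates(context_window, candidate_embs):
    """Score candidate entities for a mention: weighted cosine similarity between the
    CLS embedding of the context window and each pretrained entity embedding."""
    inputs = tokenizer(context_window, truncation=True, max_length=128, return_tensors="pt")
    cls = encoder(**inputs).last_hidden_state[:, 0]                  # (1, hidden) CLS vector
    cls = torch.nn.functional.normalize(cls, dim=-1)
    cands = torch.nn.functional.normalize(candidate_embs, dim=-1)    # (num_candidates, hidden)
    return weight * (cls @ cands.T).squeeze(0)                       # one score per candidate
```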
42f87d63-7602-4205-b341-1a2cfb5bd6d5 | In this work, we propose a new set of probing tasks for evaluating entity embeddings which can be applied to any method that creates one embedding per entity.
Using these tasks, we find that entity type information is one of the strongest signals present in all but one of the embedding models, followed by coarse information about how likely an entity is to be mentioned.
We show that the embeddings are particularly able to use entity type information to bootstrap their way to improved performance on entity relationship and factual information prediction tasks and propose methods to counteract this to more accurately estimate how well they encode relationships and facts.
| d |
94138ed2-c856-4c13-919d-aafbc8d72777 | Overall, we find that while BERT-based entity embeddings perform well on many of these tasks, their high performance can often be attributed to strong entity type information encoding.
More specialized models such as Wikipedia2Vec are better able to detect and identify relationships, while the embeddings of [1]} better capture the lexical and distributional semantics of entities.
Additionally, we provide a direct comparison of the embeddings on two downstream EL tasks, where the models that performed well on the probing tasks such as Ganea, Wiki2V, and BERT performed best on the downstream tasks.
We find that the best performing embedding model depends greatly on the surrounding architecture and encourage future practitioners to directly compare newly proposed methods with prior models in a consistent architecture, rather than only compare results.
| d |
cea9e13d-9656-4fc0-b6f0-7e137bcc04cd | Our work provides insight into the information encoded by static entity embeddings, but entities can change over time, sometimes quite significantly.
One future line of work we would like to pursue using our tests is to investigate how changes in entities over time can be reflected in the embeddings, and how those changes could be modeled as transformations in the embedding space.
Context-based embeddings in particular could then be dynamically updated with new information, instead of being retrained from scratch.
| d |
55d4f39c-c6c2-4370-b6ec-f2881f0a142e | Bipolar disorder (BD) is a mental health condition that causes extreme mood swings: emotional highs (mania, hypomania), lows (depression), and mixed episodes where depressive and manic symptoms occur together. The diagnosis of bipolar disorder requires lengthy observation of the patient. Otherwise, it can be mistaken for other mental disorders like anxiety or depression. The disease affects 2% of the population, and sub-threshold forms (recurrent hypomanic episodes without major depressive episodes) affect an additional 2% [1]}. It is ranked as one of the top ten diseases by the disability-adjusted life year (DALY) indicator among young adults, according to the World Health Organization [2]}. It takes 10 years on average to diagnose bipolar disorder after the first symptoms [3]}.
| i |
90748098-c9f9-463c-88e7-7602a0e028d0 | In bipolar disorder, the clinical appearance of the patients changes based on the moods they are in. The changes are seen in both their sound and visual appearance, as well as the energy level changes. In the manic episode, the speech of the patient becomes louder, rushed, or pressured. The patient can be very cheerful, furious, or overly confident. The movements of the patient become more active, exaggerated, and they tend to wear very colorful clothes. Feelings and the state of mind change quickly. Racing thoughts, reduced need for sleep, lack of attention, increase in targeted activity (work, school, personal life) are some situations patients can experience in the manic episode. These symptoms return to a normal state during the remission state [1]}.
| i |
be8152d4-ad99-4786-8cce-08a683992dfa | Today, the diagnosis of mental health disorders relies on questionnaires administered by psychiatrists and reports from patients and their caregivers. Psychiatrists perform some tests to collect information about the patient's cognitive, neurophysiological, and emotional situation [1]}. However, these reports are subjective, and there is a need for more systematic and objective diagnosis methods. Especially with the COVID-19 pandemic, remote treatment and diagnosis gain importance, which can be achieved using automated methods.
| i |
5f241116-9423-4597-bd70-e3959432727e | One of the tools used to rate the severity of the manic episodes of a patient is the Young Mania Rating Scale (YMRS). During the interviews, psychiatrists observe the patient's symptoms and give ratings to them. The 11 items in YMRS assess the elevated mood, increased motor activity-energy, sexual interest, sleep, irritability, speech rate and amount, language-thought disorder, content, disruptive-aggressive behavior, appearance, and insight. Most of these can be observed from speech patterns, body or facial movements, and the content of what was spoken during the interview.
| i |
25a9b2d4-7baf-4d5c-b789-10443582124c | Recent advancements in technologies like social media, smartphones, and wearable devices, and improvements in recording techniques like better cameras, neuroimaging techniques, and microphones enable us to gather good-quality data from people during their everyday lives. This creates an opportunity to build tools that monitor patients' symptoms over longer periods, screen patients before they see psychiatrists, complement clinicians in the diagnosis, and capture patients' behaviors in situations where they cannot act or hide the symptoms.
| i |
9e3e27f1-7c93-4de6-9384-b8717ec2f287 | In recent years, there are many works on diagnosing psychiatric disorders like Alzheimer's disease, anxiety, attention deficit hyperactivity disorder, autism spectrum disorder, depression, obsessive-compulsive disorder, bipolar disorder [1]} using machine learning (ML) techniques. The datasets used for the detection of the diseases contain linguistic, auditory, and visual information. Adapted from real life, using the modalities together with fusion techniques improves the results as explained in Chapter .
| i |
cae9c630-e4ac-4bc4-85a2-214978176e0c | Assessment of mental health disorders using machine learning methods has been an active research area. Many researchers are working on recognizing mental health disorders ranging from depression, Alzheimer's disease, and anxiety to bipolar disorder. The interdisciplinary research between psychiatrists and computer scientists helps to create new datasets and bring insights from the medical domain to artificial intelligence.
| w |
3f3d47a9-0011-484d-bccc-7d32c4593500 | In this chapter, we introduce the features used in the audio, textual, and visual modalities, as well as the preprocessing and feature selection methods applied to the dataset. After that, we explain the ELM algorithm used as the classification method, the cross-validation technique used to evaluate the results, and the modality fusion methods applied to improve the unimodal results.
| m |
ae647d7b-e1fd-42fe-ad88-53a0686f9d60 | During this thesis, we worked on the classification of bipolar disorder episodes (mania, hypomania, depression) using the BD dataset that contains video recordings of the bipolar disorder patients while they are interviewed by their psychiatrists. During the interviews, the patients perform seven different tasks. The tasks are designed in a way that they elicit both positive and negative emotions in the patients, and some tasks are emotionally neutral.
| d |
e799b2ad-acf7-41c0-a755-187eb7579bb1 | We showed that multimodality improves the generalizability of the classification of bipolar disorder. The information coming from the acoustic, textual, and visual modalities complements each other and improves the performance of the unimodal systems. The results suggest that using all three modalities together gives the best performance; however, a fusion model of the linguistic and acoustic modalities still performs well while requiring less information.
| d |
e12eb7c5-29f1-4029-9585-f893f486341a | As a classification algorithm, we use the fusion of weighted and unweighted ELMs. The ELM was a good fit for this problem, since it is a two-layer neural network that is less prone to overfitting on a small dataset. The data imbalance creates a need for a weighted model; however, the weighted ELM mostly favors the minority class. By using the fusion of weighted and unweighted ELMs, an optimum point between the two is found.
| d |
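To make the classifier concrete, here is a simplified sketch of a weighted ELM with a random-feature hidden layer and a closed-form ridge solution for the output weights; setting weighted=False gives the unweighted variant. The thesis uses kernel ELMs and its exact weighting scheme is not reproduced here, so the random hidden layer and the inverse-class-frequency weights are assumptions.

```python
import numpy as np

class WeightedELM:
    """Single-hidden-layer ELM: random input weights, closed-form output weights."""
    def __init__(self, n_hidden=100, C=1.0, weighted=True, seed=0):
        self.n_hidden, self.C, self.weighted = n_hidden, C, weighted
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.A + self.b)          # random-feature hidden layer

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        T = np.eye(n_classes)[y]                     # one-hot targets
        self.A = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # Per-sample weights inversely proportional to class frequency (assumed scheme).
        w = np.ones(len(y)) if not self.weighted else 1.0 / np.bincount(y)[y]
        W = np.diag(w)
        # Regularized weighted least squares: beta = (I/C + H^T W H)^-1 H^T W T
        self.beta = np.linalg.solve(np.eye(self.n_hidden) / self.C + H.T @ W @ H, H.T @ W @ T)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta           # class scores; argmax or fuse with another ELM
```

The class scores of the weighted and unweighted models can then be fused, for example by averaging scores or majority voting, to balance minority-class recall against overall accuracy.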
644700f7-5da7-4590-bad1-8ed5728efe1b | The best performing model is achieved using the eGEMAPS10, LIWC, and FAU features with the fusion of weighted and unweighted kernel ELMs, combined using majority voting as a late fusion step. We achieve 64.8% test set UAR with this configuration, which is the best result achieved on the BD dataset, as can be seen in Figure REF. The results suggest that benefiting from all three modalities is useful, since the 13 best performing models are fusion models of the three modalities. However, the 14th highest score in Table
REF uses only the linguistic and acoustic modalities. So, it is possible to use only audio recordings of the patients, such as phone recordings, and achieve promising results from the fusion of the linguistic and acoustic modalities. Besides, the MM1 scores in Table REF show that the fusion of modalities increases the maximum scores achieved on a single modality in all the configurations.
| d |
258da887-57b6-4668-a9f7-691395eb7e86 | eGEMAPS is a commonly used minimalistic acoustic feature set, so we used it for the audio classification and in the fusion experiments. In addition, we summarized the eGEMAPS LLDs with the 10 functionals presented in [1]}. We achieved better performance using the eGEMAPS10 feature set, which shows that the eGEMAPS LLDs can give better results when summarized with different functionals. The eGEMAPS and eGEMAPS10 feature sets contain 88 and 230 features, respectively, so a larger feature set may help find features that generalize better to the dataset.
<FIGURE> | d |
579163b4-b6bf-4010-b0b2-cc5eb1ae82d4 | These results are still not high enough to use in a real-world application as a decision system. One of the main difficulties was the small size of the BD corpus. There are 25, 38, and 41 clips in the dataset for the remission, hypomania, and mania classes, respectively, which is not enough to generalize with high certainty. The dataset was collected in a real-life scenario, so there is some noise, and in some cases the clinician explains the questions to the patients, so her voice can be heard as well. These issues are expected to be present if a real-life application is created, so the natural recording setup makes this database valuable. Another difficulty stems from missing information in some clips, where patients do not answer some of the questions. In one of the test clips, the patient does not answer any question at all. This could be used as a feature as well; however, in our method it caused poor performance.
| d |
0fbed13e-eef2-4764-9f77-131976893f47 | Besides the clip-level evaluation, we examine the effect of the tasks separately and by grouping tasks that elicit the same emotion during classification. Since some tasks are not performed in every clip, the number of clips per task differs. To be able to compare the results among the task groups and the entire-clip results, we assign the middle class label to the missing clips. Since the dataset size is already small, this distorted the final scores somewhat. Still, from the task-level experiments we can see that emotion-eliciting tasks are more useful in the classification of BD for all three modalities, as expected. In order to increase the dataset size, we also used the task groups as separate data points and performed classification. However, the results were not better than the entire-clip-level results, which shows that the information obtained from longer clips is necessary for learning.
| d |
5fa8e7bc-f488-4c83-bdba-5e584336fe3a | Our final best performing model contains information from three different modalities, and each modality is represented using feature vectors of various sizes, which limits the explainability of the model. It is especially important to create explainable models in the medical domain.
As a further study, the explainability of the system can be investigated, which would also give psychiatrists insights into the features used in the classification, and the best performing ones could be adopted in their decision-making processes.
| d |
e349de23-f7a4-4f0a-b4ca-1d2787a3ca63 | Deep learning has emerged as a powerful tool for many industrial and scientific applications.
However, deep learning requires training on large centralized datasets, whose collection can be intrusive.
The finalized model can be deployed either on a server or on edge devices.
Federated learning circumvents this problem by shifting compute responsibility onto clients, letting a central server aggregate the resulting artifacts.
In FedAvg, edge devices train on locally available data, while the server averages the finished models [1]}.
This is repeated for multiple rounds of communication.
The server never has possession of any potentially sensitive data.
| i |
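A minimal sketch of the FedAvg loop described above, using flat parameter vectors and an illustrative least-squares objective; real deployments average full model state dictionaries and typically sample only a fraction of clients per round.

```python
import numpy as np

def fedavg(global_w, clients, local_steps=5, lr=0.1, rounds=10):
    """Each round, every client runs local SGD from the current global weights;
    the server averages the results, weighted by local dataset size."""
    for _ in range(rounds):
        updates, sizes = [], []
        for X, y in clients:                               # data held locally by each client
            w = global_w.copy()
            for _ in range(local_steps):                   # local SGD on a least-squares loss
                w -= lr * 2 * X.T @ (X @ w - y) / len(y)
            updates.append(w)
            sizes.append(len(y))
        global_w = np.average(updates, axis=0, weights=np.asarray(sizes, dtype=float))
    return global_w
```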
ce0478f1-e907-4432-a48d-af90aa633f35 | When data is independently and identically distributed (IID), federated learning algorithms converge rapidly.
FedAvg takes as few as 18 communication rounds to reach 99% accuracy for 100-device federated MNIST [1]}.
When the client devices are statistically heterogeneous, learning a single global model becomes very difficult [2]}.
In such cases, it is more natural to learn personalized models.
Still, there are circumstances where a single global model is desired.
For example, different online businesses might want a model capable of flagging a wide spectrum of fraudulent schemes.
Since fraudsters are often repeat offenders, scams attempted on one platform may be reused on others.
| i |
eb5bb278-8016-4107-9bf3-f8f8acb4db17 | Techniques for faster federated learning on non-IID data range from the simple to the complex.
On the simple end, Momentum Federated Learning averages the momenta of different devices into a global momentum which is distributed at the start of each round [1]}.
This enables clients to use momentum gradient descent as their optimizer, provably increasing the rate of convergence.
On the complex end, SCAFFOLD uses the gradient of the global model as a control variate to address drifting among client updates [2]}.
Notably these two methods double the amount of information submitted by devices to the server.
| i |
aab22439-f975-4110-a62e-82b21f734c88 | To deal with the communication and scalability challenges introduced by the above methods, efforts have been made to reduce the number of rounds required for server-client communication [1]}.
FedPAQ made an initial effort [2]} to periodically average and quantize the client models before making the server update.
Then, periodic averaging for both server and client models followed quickly [3]}.
| i |
fa8703b9-1117-475f-ae8d-fb939bbabdb8 | In this paper, we take a different approach, using server averaging to accelerate convergence.
We justify the technique using heuristic arguments and experimentally show that it reaches a given test accuracy faster than FedAvg.
Additionally, we propose decay epochs for reducing client computation while maintaining non-IID performance.
| i |
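One plausible reading of the proposed server averaging and decay epochs, sketched on top of a FedAvg loop: the server keeps a running (Polyak-style) average of the global model across rounds, and the number of local epochs shrinks as training progresses. The averaging start round and the linear epoch schedule are assumptions; the paper's exact rules may differ.

```python
import numpy as np

def local_train(w, X, y, epochs, lr=0.1):
    """Client-side SGD on an illustrative least-squares loss."""
    w = w.copy()
    for _ in range(epochs):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

def fedavg_server_averaging(global_w, clients, rounds=20, start_avg=10,
                            init_epochs=5, min_epochs=1):
    """FedAvg with (i) a running server-side average of the global iterates and
    (ii) a decaying local-epoch schedule to cut client computation."""
    avg_w, n_avg = global_w.copy(), 0
    for r in range(rounds):
        epochs = max(min_epochs, init_epochs - r * (init_epochs - min_epochs) // rounds)
        updates = [local_train(global_w, X, y, epochs) for X, y in clients]
        global_w = np.mean(updates, axis=0)          # ordinary FedAvg aggregation
        if r >= start_avg:                           # fold later iterates into the server average
            avg_w = (avg_w * n_avg + global_w) / (n_avg + 1)
            n_avg += 1
    return avg_w if n_avg else global_w
```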
afbc9576-f872-454f-a6b2-9e602eb5fd11 | The history of stochastic gradient methods dates back to 1951, and is usually mentioned as Robbins-Monro process [1]}.
One technique that has historically been used to improve SGD convergence is iterate averaging [2]}, [3]}, [4]}, also often referred to as Polyak-Rupert averaging.
Recently, the stability of an averaging scheme that considers a non-uniform average of the iterates has been discussed [5]}.
A weighted average is applied which decays in a geometric manner.
Neu et al. show that the same regularizing effect can be achieved for SGD on the linear least-squares regression problem.
| w |
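For concreteness, the two schemes mentioned above can be written as follows: uniform Polyak-Ruppert averaging returns \(\bar{w}_T = \frac{1}{T}\sum_{t=1}^{T} w_t\) after \(T\) SGD iterates \(w_1,\dots ,w_T\) , while a geometrically decaying weighted average with decay rate \(0 < \gamma < 1\) has the generic form \(\bar{w}_T^{\gamma } = \frac{1-\gamma }{1-\gamma ^{T}}\sum _{t=1}^{T} \gamma ^{T-t} w_t\) ; the exact normalization used in the cited work may differ.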
d42322ba-5a02-48d3-a75a-c3df36c7fc52 | Federated learning techniques heavily rely on above mentioned averaging schemes.
FedAvg is the most popular aggregation method that averages parameters of local models element-wise.
There exist two major branches for improving FedAvg [1]}.
One is led by FedProx [2]}, which applies a proximal term to the local loss function of each client and thresholds the local updates.
Another approach is to propose different averaging schemes to either save communication cost or improve performance [3]}.
| w |
3af8883d-25e6-4ddf-87d4-02c16284a622 | Safa et al. explore iterate averaging in the context of block-cyclic SGD [1]}.
Most federated learning algorithms assume that clients are chosen uniformly.
In practice, devices conduct local training only when idle, with devices falling into blocks according to their timezone.
More formally, we want to minimize
\(\operatorname{\mathbb {E}}_{z \sim \mathcal {D}} f(w, z) \text{ where } \mathcal {D} = \sum _{i=1}^{m} \mathcal {D}_i\)
| w |
59c00e2d-549f-499e-ad1c-358e07e5f086 | while sampling \(n\) points from \(\mathcal {D}_1, \dots , \mathcal {D}_m\) in order for \(K\) cycles.
In this block-cyclic setting, SGD is worse by a factor of \(\sqrt{mn/K}\) .
However, learning personalized models for each block using Averaged SGD [1]} (taking the average of all SGD iterates as the final model parameters) provides the same performance guarantees as SGD with IID sampling.
| w |
71a05e05-e4d3-457c-9904-d06d7a14eb68 | Stochastic Weight Averaging (SWA) applies Averaged SGD to deep learning [1]}.
Izmailov et al. note that SGD generally converges to points near the boundary of a wide flat region and that optima width has been conjectured to correlate with generalization.
The average of the SGD iterates then lies at the center of this flat region.
Moreover, to ensure coverage of this flat region, SWA uses a cyclic or a high constant learning rate.
This algorithm has the benefits of low computational overhead (only the moving average needs to be recorded) and simplicity.
SWA does not improve the rate of convergence compared to SGD.
In fact, SWA converges to worse but better generalizing optima than SGD.
| w |
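A compact sketch of the SWA update described above: run SGD with a constant learning rate and periodically fold the current iterate into a running average. The gradient oracle and the noisy quadratic example are illustrative.

```python
import numpy as np

def swa(w0, grad, lr=0.05, steps=1000, cycle=25):
    """SGD with a constant learning rate; every `cycle` steps, the current iterate
    is folded into a running average, which is returned as the final solution."""
    w, w_avg, n_avg = w0.copy(), w0.copy(), 0
    for t in range(1, steps + 1):
        w -= lr * grad(w)                            # ordinary SGD step
        if t % cycle == 0:
            w_avg = (w_avg * n_avg + w) / (n_avg + 1)
            n_avg += 1
    return w_avg

# Noisy quadratic with optimum at 3: the averaged iterate lands near 3 despite gradient noise.
rng = np.random.default_rng(0)
print(swa(np.zeros(1), lambda w: 2 * (w - 3.0) + rng.normal(scale=0.5, size=w.shape)))
```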
3436b808-2afd-400b-b6bd-e1bd450df09e | This paper improves upon an existing federated learning algorithm by performing periodic server-side averaging.
The proposed adaptation of FedAvg has three major benefits: (1) it uses iterate averaging for accelerated convergence, (2) it learns a better generalizing optima than SGD, (3) the effectiveness of FL is increased due to recycling of previously participating clients.
We empirically show that server averaging takes fewer rounds than FedAvg to a desired accuracy level.
In addition, we propose epoch decay to lower the computation costs for each client.
Epoch decay limits the number of updates, similar to learning rate decay for SGD, and reduces the amount of computation by up to 40%.
| d |
3bc8f89c-0854-4158-813c-809894baa58a | In the future, we wish to extend server averaging to various neural network types (e.g., attention, LSTM) and layer-wise building blocks (e.g., batch normalization layers).
In addition, we wish to investigate the performance of epoch decay paired with state-of-the-art update methods such as match averaging [1]}.
| d |
3a6811f9-90ac-47c5-9b37-07f95925330c | Contour is one of the most important object descriptors, along with texture and color. The boundary of an object in an image is encoded in contour description, which is useful in various applications, such as image retrieval [1]}, [2]}, [3]}, recognition [4]}, [5]}, [6]}, and segmentation [7]}, [8]}, [9]}, [10]}, [11]}. It is desirable to represent object boundaries compactly, as well as faithfully, but it is challenging to design such contour descriptors due to the diversity and complexity of object shapes.
| i |
6782fdc5-35b5-4b87-8d30-ef3704395e2a | Early contour descriptors were developed mainly for image retrieval [1]}, [2]}, [3]}, [4]}. An object contour can be simply represented based on the area, circularity, and/or eccentricity of the object [5]}. For more precise description, there are several approaches, including shape signature [6]}, [7]}, [8]}, structural analysis [9]}, [10]}, [11]}, [12]}, [13]}, spectral analysis [2]}, [3]}, and curvature scale space (CSS) [1]}, [17]}.
| i |
29a190e8-5669-4900-951c-759d211e625e | Recently, contour descriptors have been incorporated into deep-learning-based object detection, tracking, and segmentation systems. In [1]}, bounding boxes are replaced by polygons to enclose objects more tightly. In [2]}, ellipse fitting is done to produce a rotated box of a target object to be tracked. For instance segmentation, contour-based techniques have been proposed that represent pixelwise masks by contour descriptors based on shape signature [3]} or polynomial fitting [4]}. Even though these descriptors can localize an object effectively, they may fail to reconstruct the object boundary faithfully. Also, they consider the structural information of an individual object only, without exploiting the shape correlation between different objects.
<FIGURE> | i |
258c9349-2eae-453e-aeba-bcc3e101a13d | In this paper, we propose novel contour descriptors, called eigencontours, based on low-rank approximation. First, we construct a contour matrix containing all object boundaries in a training set. Second, we decompose the contour matrix into eigencontours, based on the best rank-\(M\) approximation of singular value decomposition (SVD) [1]}. Then, each contour is represented by a linear combination of the \(M\) eigencontours, as illustrated in Figure REF . Also, we incorporate the eigencontours into an instance segmentation framework. Experimental results demonstrate that the proposed eigencontours can represent object boundaries more effectively and more efficiently than the existing contour descriptors [2]}, [3]}. Moreover, utilizing the existing framework of YOLOv3 [4]}, the proposed algorithm yields promising instance segmentation performances on various datasets — KINS [5]}, SBD [6]}, and COCO2017 [7]}.
| i |
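A minimal numpy sketch of the eigencontour construction described above, assuming each boundary has already been resampled into a fixed-length descriptor vector (e.g., radial distances at equally spaced angles); the paper's exact contour parameterization and any normalization steps are omitted.

```python
import numpy as np

def build_eigencontours(contour_matrix, M):
    """contour_matrix: (d, n) matrix whose columns are fixed-length boundary descriptors.
    Returns the first M left singular vectors (the eigencontours)."""
    U, S, Vt = np.linalg.svd(contour_matrix, full_matrices=False)
    return U[:, :M]                                  # best rank-M basis in the least-squares sense

def encode(contour, eig):
    return eig.T @ contour                           # M coefficients describing the boundary

def decode(coeffs, eig):
    return eig @ coeffs                              # reconstructed boundary descriptor

# Toy example: 500 synthetic star-convex contours sampled at 90 angles.
rng = np.random.default_rng(0)
angles = np.linspace(0, 2 * np.pi, 90, endpoint=False)
data = np.stack([1.0 + 0.3 * rng.standard_normal() * np.cos(angles)
                 + 0.2 * rng.standard_normal() * np.sin(2 * angles) for _ in range(500)], axis=1)
eig = build_eigencontours(data, M=8)
rec = decode(encode(data[:, 0], eig), eig)
print("reconstruction error:", np.linalg.norm(rec - data[:, 0]))
```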
ab1985af-9e02-4e82-8632-0ee559db99f2 |
We propose the notion of eigencontours — data-driven contour descriptors based on SVD — to represent object boundaries as faithfully as possible with a limited number of coefficients.
The proposed algorithm can represent object boundaries more effectively and more efficiently than the existing contour descriptors.
The proposed algorithm outperforms conventional contour-based techniques in instance segmentation.
| i |
2ac7b224-3e3b-4914-bf7a-07ee5c450339 | The goal of contour description is to represent the boundary of an object in an image compactly and faithfully. Simple contour descriptors are based on the area, circularity, and/or eccentricity of an object [1]}, and basic geometric shapes, such as rectangles and ellipses, can be also used. However, these simple descriptors cannot preserve the original shape of an object faithfully [2]}, [3]}.
For more sophisticated description, there are four types of approaches: shape signature [4]}, [5]}, [6]}, structural analysis [7]}, [8]}, [9]}, [10]}, spectral analysis [11]}, [12]}, and CSS [13]}, [14]}.
First, a shape signature is a one-dimensional function derived from the boundary coordinates of an object. For example, a polar coordinate system is set up with respect to the centroid of an object. Then, the object boundary is represented by the \((r, \theta )\) graph, called the centroidal profile [4]}. Also, an object shape can be represented by the angle between the tangent vector at each contour point and the \(x\) -axis [5]}. Second, structural methods divide an object boundary into segments and approximate each segment to encode the whole boundary. In [7]}, the boundary is represented by a sequence of unit vectors with a few possible directions. In [8]}, polygonal approximation is performed to globally minimize the errors from an approximated polygon to the original boundary. In [9]}, segments of an object contour are represented by cubic polynomials. Third, in spectral methods, boundary coordinates are transformed to a spectral domain. In [11]}, a wavelet transform is used for contour description.
In [12]}, the Fourier descriptors are derived from the Fourier series of centroidal profiles. Fourth, in CSS [13]}, a boundary is smoothed by a Gaussian filter with a varying standard deviation. Then, the boundary is represented by the curvature zero-crossing points of the smoothed curve at each standard deviation.
<FIGURE> | w |
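As a concrete example of the centroidal-profile shape signature described above, the sketch below converts an ordered boundary into an \((r, \theta )\) function; it assumes the shape is star-convex with respect to its centroid, which the classical descriptor also implicitly requires.

```python
import numpy as np

def centroidal_profile(boundary, n_samples=360):
    """Shape signature of a closed boundary: distance r from the centroid as a function
    of the polar angle theta, resampled at n_samples uniformly spaced angles."""
    pts = np.asarray(boundary, dtype=float)          # (N, 2) ordered boundary points
    d = pts - pts.mean(axis=0)
    r = np.hypot(d[:, 0], d[:, 1])
    theta = np.mod(np.arctan2(d[:, 1], d[:, 0]), 2 * np.pi)
    order = np.argsort(theta)                        # valid for star-convex shapes
    grid = np.linspace(0, 2 * np.pi, n_samples, endpoint=False)
    return grid, np.interp(grid, theta[order], r[order], period=2 * np.pi)

# Example: the profile of a unit circle is constant at radius 1.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
_, profile = centroidal_profile(np.c_[np.cos(t), np.sin(t)])
print(profile.min(), profile.max())                  # both close to 1.0
```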
20ac1d8a-251c-477a-9798-6480b8a96975 | Recently, attempts have been made to improve the performances of deep-learning-based vision systems. In [1]}, a bounding box for object detection is replaced by an octagon to enclose an object more tightly via polygonal approximation. In [2]}, a rotated box for a target object is determined based on ellipse fitting, in order to cope with object deformation in a visual tracking system. For instance segmentation, contour-based approaches [3]}, [4]} have been developed, which reformulate the pixelwise classification task as the boundary regression of an object. To this end, these methods encode segmentation masks into contour descriptors. In [4]}, centroidal profiles are used to describe object boundaries.
In [3]}, each segment of a boundary is represented by a few coefficients based on polynomial fitting. Although these methods are computationally efficient for localizing object instances, they often fail to reconstruct the boundaries of the object shapes faithfully.
| w |
5a8c8f29-1f54-4289-a6a3-1c9880e45646 | The proposed algorithm aims to represent an object boundary as faithfully as possible by employing as few coefficients as possible. To this end, we develop eigencontours based on the best low-rank approximation property of SVD.
| w |
6f1d02ce-f89b-4930-a418-867fa8daf441 | Dimension of eigencontour space (\(M\) ):
Table REF lists the AUC-\(\cal {F}\) performances of the proposed algorithm on the SBD validation dataset according to the dimension, \(M\) , of the eigencontour space. At \(M=10\) , the proposed algorithm yields poor scores, since object boundaries are too simplified and not sufficiently accurate. At \(M=20\) , it provides the best results. At \(M=30\) , it yields similarly good results. However, at \(M=40\) , the performances are degraded further, which indicates that a high-dimensional space does not always lead to better results. It is more challenging to regress more variables reliably. There is a tradeoff between accuracy and reliability. In this test, \(M=20\) achieves a good tradeoff.
| m |
1ebd7eb0-42ed-44ec-a646-4dba1cd36855 | Categorical eigencontour space:
The proposed eigencontours are data-driven descriptors, which depend on the distribution of object contours in a dataset. Thus, different eigencontours are obtained for different data. Let us consider two options for constructing eigencontour spaces: categorical construction and universal construction.
In the categorical construction, eigencontours are determined for each category in a dataset. In the universal construction, they are determined for all instances in all categories.
| m |
389c1547-abc7-4ef9-a5d2-781bb9c944c4 | For the two options, \(\cal {F}\) score curves are presented according to the dimension \(M\) in the supplemental document. Table REF compares the area under curve performances of the \(\cal {F}\) curves up to \(M=18\) . The categorial construction provides better performances than the universal construction, because it considers similar shapes in the same category only. In COCO2017, the gap between the two options is the smallest. This is because some object shapes are not properly represented due to occlusions and thus COCO2017 objects exhibit low intra-category correlation. In contrast, in KINS, whole contours are well represented because occluded regions are also annotated. Hence, the gap between the two options is the largest.
| m |
7cd6bac9-81a1-482a-9c69-605195030d7f | Limitations:
The proposed eigencontours represent typical contour patterns in a dataset. Thus, if object contour patterns differ among datasets, the eigencontours for a dataset may be effective for that particular dataset only. To assess the dependency of eigencontours on a dataset, we conduct cross-validation tests between datasets in the supplemental document.
<TABLE><TABLE> | m |
d5854744-09f5-4bf0-bb33-ee4ed71bd3b7 | We proposed novel contour descriptors, called eigencontours, based on low-rank approximation. First, we constructed a contour matrix containing all contours in a training set. Second, we approximated the contour matrix by performing the best rank-\(M\) approximation. Third, we represented an object boundary by a linear combination of the \(M\) eigencontours. Experimental results demonstrated that the proposed eigencontours can represent object boundaries more effectively and more faithfully than the existing methods. Moreover, the proposed algorithm yields meaningful instance segmentation performances.
| d |
ff7c1911-a348-434c-ae84-e2924bb12d28 | Scientists increasingly use machine learning (ML) in their daily work. This development is not limited to natural sciences like ecology or neuroscience, but also extends to social sciences such as psychology and archaeology.
| i |
7151e7c7-ce93-46b8-97cc-2571a79df978 | In particular, when building predictive models for problems with complex data structures, ML outcompetes classical statistical models in both performance and convenience. Impressive recent examples of successful prediction models in science include the automated particle tracking at CERN, or DeepMind's AlphaFold, which has essentially solved the protein structure prediction challenge CASP. In such examples, some see a paradigm shift towards theory-free science that “lets the data speak”. Indeed, prediction is one of the core aims of science, but so are, as philosophers of science and statisticians emphasize, explanation and knowledge generation. Focusing exclusively on prediction may therefore represent a historical step back.
| i |
004cc382-3ba7-4af2-b4ff-26ba95978428 | What hinders scientists from using ML models to gain real-world insights is the model complexity and the unclear connection between the model and the described phenomenon — the so-called opacity problem. Interpretable machine learning (IML, also called XAI for eXplainable artificial intelligence) aims to solve the opacity problem by analyzing model elements or inspecting model properties. Various expectations are placed on IML by different stakeholders with diverse goals, including scientists, ML engineers, regulatory bodies, and laypeople. Due to this diversity of goals, stakeholders, and requirements, IML has been criticized for lacking a well-defined goal.
| i |
f1f5210a-6d48-4dfa-8b17-32b06daf335f | Nevertheless, scientists increasingly use IML techniques in their research, e.g., for predicting personality traits from smartphone usage, forecasting crop yield, or analyzing seasonal precipitation forecasts. Although researchers are aware that their IML analysis remains just a model description, it is often implied that the explanations, associations, or effects found also extend to the corresponding real-world properties. Unfortunately, drawing inferences with IML can currently be epistemically problematic because the interpretation techniques are not defined for that purpose. In particular, the difference between model-only versus phenomenon explanations is often unclear, and a theory to quantify the uncertainty of interpretations is lacking.
| i |
7d25e83e-cbfb-47e4-8d54-426ebdd8b9b1 | In recent years, digital libraries have moved towards open science and open access with several large scholarly datasets being constructed. Most popular datasets include millions of papers, authors, venues, and other information. Their large size and heterogeneous contents make it very challenging to effectively manage, explore, and utilize these datasets. The knowledge graph has emerged as a universal data format for representing knowledge about entities and their relationships in such complicated data. The main part of a knowledge graph is a collection of triples, with each triple \( (h, t, r) \) denoting the fact that relation \( r \) exists between head entity \( h \) and tail entity \( t \) . This can also be formalized as a labeled directed multigraph where each triple \( (h, t, r) \) represents a directed edge from node \( h \) to node \( t \) with label \( r \) . Therefore, it is straightforward to build knowledge graphs for scholarly data by representing natural connections between scholarly entities with triples such as (AuthorA, Paper1, write) and (Paper1, Paper2, cite).
| i |
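To make the triple representation above concrete, the sketch below stores a toy scholarly knowledge graph as (head, tail, relation) tuples and views it as a labeled directed multigraph via an adjacency map; the extra 'published_in' triple is a hypothetical example, everything else follows the paragraph.

from collections import defaultdict

# Toy scholarly knowledge graph as (head, tail, relation) triples.
triples = [
    ("AuthorA", "Paper1", "write"),
    ("Paper1", "Paper2", "cite"),
    ("Paper1", "VenueX", "published_in"),  # hypothetical extra triple
]

# Labeled directed multigraph view: each triple is an edge head -> tail with label r.
adjacency = defaultdict(list)
for h, t, r in triples:
    adjacency[h].append((t, r))

print(adjacency["Paper1"])  # outgoing edges of Paper1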
be8647fa-47ab-4abc-9692-04fdb62a7f74 | Notably, instead of using knowledge graphs directly in some tasks, we can model them by knowledge graph embedding methods, which represent entities and relations as embedding vectors in semantic space, then model the interactions between them to solve the knowledge graph completion task. There are many approaches [1]} to modeling the interactions between embedding vectors resulting in many knowledge graph embedding methods such as ComplEx [2]} and CP\( _h \) [3]}. In the case of word embedding methods such as word2vec, embedding vectors are known to contain rich semantic information that enables them to be used in many semantic applications [4]}. However, the semantic structures in the knowledge graph embedding space are not well-studied, thus knowledge graph embeddings are only used for knowledge graph completion but remain absent in the toolbox for data analysis of heterogeneous data in general and scholarly data in particular, although they have the potential to be highly effective and efficient. In this paper, we address these issues by providing a theoretical understanding of their semantic structures and designing a general semantic query framework to support data exploration.
| i |
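As an illustration of how such models score a triple, the sketch below uses a generic CP-style trilinear product over placeholder embedding vectors; the dimension and the random vectors are assumptions, and this is a schematic rather than the exact CP\( _h \) or ComplEx formulation.

import numpy as np

d = 50                                     # assumed embedding dimension
rng = np.random.default_rng(0)
h, r, t = rng.normal(size=(3, d))          # placeholder embeddings for head, relation, tail

# Generic trilinear (CP-style) score: larger values mean the triple (h, t, r)
# is considered more plausible by the model.
score = float(np.sum(h * r * t))
print(score)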
38cf6de4-fd73-40f8-b931-114da0e747e1 | For theoretical analysis, we first analyze the state-of-the-art knowledge graph embedding model CP\( _h \) [1]} in comparison to the popular word embedding model word2vec skipgram [2]} to explain its components and provide an understanding of its semantic structures. We then define the semantic queries on the knowledge graph embedding space, which are algebraic operations between the embedding vectors that solve queries such as similarity and analogy between the entities in the original datasets.
| i |
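A minimal sketch of what such algebraic semantic queries look like is given below, using cosine similarity and a word2vec-style offset for analogy on a made-up entity embedding table; the entity names and vectors are placeholders.

import numpy as np

rng = np.random.default_rng(1)
entities = ["Paper1", "Paper2", "AuthorA", "AuthorB"]
E = {e: rng.normal(size=50) for e in entities}      # placeholder embedding table

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Similarity query: which entity is most similar to Paper1?
print(max((e for e in entities if e != "Paper1"),
          key=lambda e: cosine(E[e], E["Paper1"])))

# Analogy query: AuthorA relates to Paper1 as ? relates to Paper2.
query = E["Paper2"] + (E["AuthorA"] - E["Paper1"])
candidates = [e for e in entities if e not in ("Paper1", "Paper2", "AuthorA")]
print(max(candidates, key=lambda e: cosine(E[e], query)))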
4bc58645-1d7f-431c-a254-22ba0bdf2eaf | Based on our theoretical results, we design a general framework for data exploration on scholarly data by semantic queries on knowledge graph embedding space. The main component in this framework is the conversion between the data exploration tasks and the semantic queries. We first outline the semantic query solutions to some traditional data exploration tasks, such as similar paper prediction and similar author prediction. We then propose a group of new interesting tasks, such as analogy query and analogy browsing, and discuss how they can be used in modern digital libraries.
| i |
388310ad-34a1-4321-b88d-de5d4f83751a | In this paper, we studied the application of knowledge graph embedding in exploratory data analysis. We analyzed the CP\( _h \) model and provided an understanding of its semantic structures. We then defined semantic queries on the knowledge graph embedding space to efficiently approximate some operations on heterogeneous data such as scholarly data. We designed a general framework to systematically apply semantic queries to solve scholarly data exploration tasks. Finally, we outlined and discussed solutions to some traditional and pioneering exploration tasks that emerge from the semantic structures of the knowledge graph embedding space.
| d |
764a6cda-e150-4341-971b-9e971c906cee | This paper is dedicated to the theoretical foundation of a new approach and discussions of emerging tasks, whereas experiments and evaluations are left for future work. There are several other promising directions for future research. One direction is to explore new tasks or new solutions to traditional tasks using the proposed method. Another direction is to implement the proposed exploration tasks on real-life digital libraries for online evaluation.
| d |
82e5ee19-669b-425c-87bd-1e6f8f7e7b70 |
In recent years, machine learning algorithms have been increasingly used to inform decisions with far-reaching consequences (e.g. whether to release someone from prison or grant them a loan), raising concerns about their compliance with laws, regulations, societal norms, and ethical values. Specifically, machine learning algorithms have been found to discriminate against certain “sensitive” demographic groups (e.g. racial minorities), prompting a profusion of algorithmic fairness research [1]}, [2]}, [3]}, [4]}, [5]}, [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, [13]}, [14]}, [15]}, [16]}. Algorithmic fairness literature aims to develop fair machine learning algorithms that output non-discriminatory predictions.
| i |
274c01a8-01f4-4118-9db4-166c0b5c7d41 |
Fair learning algorithms typically need access to the sensitive data in order to ensure that the trained model is non-discriminatory.
However, consumer privacy laws (such as the E.U. General Data Protection Regulation) restrict the use of sensitive demographic data in algorithmic decision-making. These two requirements–fair algorithms trained with private data–present a quandary: how can we train a model to be fair to a certain demographic if we don't even know which of our training examples belong to that group?
| i |
5d0af241-6a11-4aa4-a538-97564180275c |
The works of [1]}, [2]} proposed a solution to this quandary using secure multi-party computation (MPC), which allows the learner to train a fair model without directly accessing the sensitive attributes.
Unfortunately, as [3]} observed, MPC does not prevent the trained model from leaking sensitive data. For example, with MPC, the output of the trained model could be used to infer the race of an individual in the training data set [4]}, [5]}, [6]}, [7]}.
To prevent such leaks, [3]} argued for the use of differential privacy [9]} in fair learning. Differential privacy (DP) provides a strong guarantee that no company (or adversary) can learn much more about any individual than they could have learned had that individual's data never been used.
| i |
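For completeness, the guarantee referred to above is commonly formalized as \((\epsilon , \delta )\) -differential privacy: a randomized algorithm \(\mathcal {A}\) is \((\epsilon , \delta )\) -DP if, for all datasets \(D\) and \(D^{\prime }\) differing in one individual's record and all events \(S\) ,
\[ \Pr [\mathcal {A}(D) \in S] \le e^{\epsilon } \Pr [\mathcal {A}(D^{\prime }) \in S] + \delta . \]
This is the standard textbook definition, stated here only to make the guarantee precise; it is not quoted from the paper above.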
b7f3d840-9d11-446b-883d-9c0863e156a3 |
Since [1]}, several follow-up works have
proposed alternate approaches to DP fair learning [2]}, [3]}, [4]}, [5]}, [6]}, [7]}. As shown in fig: related work table,
each of these approaches suffers from at least two critical shortcomings.
In particular, none of these methods have convergence guarantees when mini-batches of data are used in training. In training large-scale models, memory and efficiency constraints require the use of small minibatches in each iteration of training (i.e. stochastic optimization). Thus, existing DP fair learning methods cannot be used in such settings since they require computations on the full training data set in every iteration. See app: related work for a more comprehensive discussion of related work.
| i |
9a11b4e1-1524-4c43-9243-1799245cd66a | Our Contributions: In this work, we propose a novel algorithmic framework for DP fair learning. Our approach builds on the non-private fair learning method of [1]}. We consider a regularized empirical risk minimization (ERM) problem where the regularizer penalizes fairness violations, as measured by the Exponential Rényi Mutual Information.
Using a result from [1]}, we reformulate this fair ERM problem as a min-max optimization problem. Then, we use an efficient differentially private variation of stochastic gradient descent-ascent (DP-SGDA) to solve this fair ERM min-max objective.
The main features of our algorithm are:
| i |
7a3e3a77-3253-43e7-8c7e-f3bf397601a2 |
Guaranteed convergence for any privacy and fairness level, even when mini-batches of data are used in each iteration of training (i.e. stochastic optimization setting). As discussed, stochastic optimization is essential in large-scale machine learning scenarios. Our algorithm is the first stochastic DP fair learning method with provable convergence.
Flexibility to handle non-binary classification with multiple (non-binary) sensitive attributes (e.g. race and gender) under different fairness notions such as demographic parity or equalized odds. In each of these cases, our algorithm is guaranteed to converge.
| i |
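To make the private training loop described above concrete, here is a schematic NumPy sketch of one differentially private stochastic gradient descent-ascent (DP-SGDA) step: per-example gradients of the descent player are clipped and noised, and the ascent player takes a plain stochastic step. The function names, clipping bound, and noise scale are illustrative assumptions; this is a generic sketch, not the authors' exact DP-FERMI algorithm.

import numpy as np

def dp_sgda_step(theta, W, grad_theta_fn, grad_W_fn, batch,
                 lr_theta=0.01, lr_W=0.01, clip=1.0, noise_std=1.0, rng=None):
    """One schematic DP-SGDA step on a mini-batch (illustration only)."""
    rng = rng or np.random.default_rng()
    # Clip per-example gradients w.r.t. the descent variable theta.
    grads = [grad_theta_fn(theta, W, x) for x in batch]
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12)) for g in grads]
    g_theta = np.mean(clipped, axis=0)
    # Gaussian noise calibrated to the clipping bound makes the theta-update private.
    g_theta = g_theta + rng.normal(0.0, noise_std * clip / len(batch), size=g_theta.shape)
    theta = theta - lr_theta * g_theta
    # Ascent step for the max player W (in a full treatment this update may also
    # need clipping and noise, depending on what data it touches).
    g_W = np.mean([grad_W_fn(theta, W, x) for x in batch], axis=0)
    W = W + lr_W * g_W
    return theta, W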
9e737725-9354-43db-b300-bcc4d256d71c | Empirically, we show that our method outperforms the previous state-of-the-art methods in terms of fairness vs. accuracy trade-off across all privacy levels. Moreover, our algorithm is capable of training with mini-batch updates and can handle non-binary target and non-binary sensitive attributes. By contrast, existing DP fairness algorithms could not converge in our stochastic/non-binary experiment.
| i |
ec259b81-6d2f-4e81-af8e-e1a77f3cafd2 |
A byproduct of our algorithmic developments and analyses is the first DP convergent algorithm for nonconvex min-max optimization: namely, we provide an upper bound on the stationarity gap of DP-SGDA for solving problems of the form \(\min _{\theta } \max _{W} F(\theta , W)\) , where \(F(\cdot , W)\) is non-convex. We expect this result to be of independent interest to the DP optimization community. Prior works that provide convergence results for DP min-max problems have assumed that \(F(\cdot , W)\) is either (strongly) convex [1]}, [2]} or satisfies a generalization of strong convexity known as the Polyak-Łojasiewicz (PL) condition [3]}.
<FIGURE> | i |
8264eb8d-21ec-406d-a29c-0bc4a813b2f9 |
In this section, we evaluate the performance of our proposed approach (DP-FERMI) in terms of the fairness violation vs. test error for different privacy levels. We present our results in two parts: In Section REF , we assess the performance of our method in training logistic regression models on several benchmark tabular datasets. Since this is a standard setup that existing DP fairness algorithms can handle, we are able to compare our method against the state-of-the-art baselines. We find that DP-FERMI consistently outperforms all state-of-the-art baselines across all data sets and all privacy levels. These observations hold for both demographic parity and equalized odds fairness notions.
In Section REF , we showcase the scalability of DP-FERMI by using it to train a deep convolutional neural network for classification on a large image dataset. In app: experiments, we give detailed descriptions of the data sets, experimental setups and training procedure, along with additional results.
| m |
95322e5f-18a1-4653-93a5-c0963a4d6e4e |
Motivated by pressing legal, ethical, and social considerations, we studied the challenging problem of learning fair models with differentially private demographic data. We observed that existing works suffer from a few crucial limitations that render their approaches impractical for large-scale problems. Specifically, existing approaches require full batches of data in each iteration (and/or exponential runtime) in order to provide convergence/accuracy guarantees. We addressed these limitations by deriving a DP stochastic optimization algorithm for fair learning, and rigorously proved the convergence of the proposed method. Our convergence guarantee holds even for non-binary classification (with any hypothesis class, even infinite VC dimension, c.f. [1]}) with multiple sensitive attributes and access to random minibatches of data in each iteration. Finally, we evaluated our method in extensive numerical experiments and found that it significantly outperforms the previous state-of-the-art models, in terms of fairness-accuracy tradeoff. Further, our method provided stable results in a larger scale experiment with small batch size and non-binary targets/sensitive attributes. The potential societal impacts and limitations of our work are discussed in app: societal impacts.
| d |
298d9497-25cd-47fc-b2f7-785d95521f43 | The study of differentially private fair learning algorithms was initiated by [1]}. [1]} considered equalized odds and proposed two DP algorithms: 1) an \(\epsilon \) -DP post-processing approach derived from [3]}; and 2) an \((\epsilon , \delta )\) -DP in-processing approach based on [4]}. The major drawback of their post-processing approach is the unrealistic requirement that the algorithm have access to the sensitive attributes at test time, which [1]} admits “isn't feasible (or legal) in certain applications.” Additionally, post-processing approaches are known to suffer from inferior fairness-accuracy tradeoffs compared with in-processing methods. While the in-processing method of [1]} does not require access to sensitive attributes at test time, it comes with a different set of disadvantages: 1) it is limited to binary classification; 2) its theoretical performance guarantees require the use of the computationally inefficient (i.e. exponential-time) exponential mechanism [7]}; 3) its theoretical performance guarantees require computations on the full training set and do not permit mini-batch implementations; 4) it requires the hypothesis class \(\mathcal {H}\) to have finite VC dimension.
In this work, we propose the first algorithm that overcomes all of these pitfalls: our algorithm is amenable to multi-way classification with multiple sensitive attributes, computationally efficient, and comes with convergence guarantees that hold even when mini-batches of \(m < n\) samples are used in each iteration of training, and even when \(\text{VC}(\mathcal {H}) = \infty \) . Furthermore, our framework is flexible enough to accommodate many notions of group fairness besides equalized odds (e.g. demographic parity, accuracy parity).
| w |
bf2aabcd-99b3-47a4-bbd9-21252209eca5 | Following [1]}, several works have proposed other DP fair learning algorithms. None of these works have managed to simultaneously address all the shortcomings of the method of [1]}. The work of [3]} proposed DP and fair binary logistic regression, but did not provide any theoretical convergence/performance guarantees. The work of [4]} combined aspects of both [5]} and [6]} in a two-step locally differentially private fairness algorithm. Their approach is limited to binary classification.
Moreover, their algorithm requires \(n/2\) samples in each iteration (of their in-processing step), making it impractical for large-scale problems. More recently, [7]} devised another DP in-processing method based on lagrange duality, which covers non-binary classification problems. In a subsequent work, [8]} studied the effect of DP on accuracy parity in ERM, and proposed using a regularizer to promote fairness. Finally, [9]} provided a semi-supervised fair “Private Aggregation of Teacher Ensembles” framework. A shortcoming of each of these three most recent works is their lack of theoretical convergence or accuracy guarantees. In another vein, some works have observed the disparate impact of privacy constraints on demographic subgroups [10]}, [11]}.
| w |
580035ca-35ab-4e20-90a2-4c76e1a94bdf | Concurrent with steady progress towards improving the accuracy and efficiency of 3D object detector algorithms [1]}, [2]}, [3]}, [4]}, [5]}, [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, LiDAR sensor hardware has been improving in maximum range and fidelity, in order to meet the needs of safe, high speed driving. Some of the latest commercial LiDARs can sense up to 250m [12]} and 300m [13]} in all directions around the vehicle. This large volume coverage places strong demands for efficient and accurate 3D detection methods.
<FIGURE> | i |
b8e291bf-5052-45b9-9ebc-b15fd15d9e8e | Grid-based methods [1]}, [2]}, [3]}, [4]}, [5]} divide the 3D space into voxels or pillars, each of which is optionally encoded using PointNet [6]}. Dense convolutions are applied on the grid to extract features. This approach is inefficient for the large grids needed for long-range sensing or small object detection. Sparse convolutions [7]} scale better to large detection ranges but are usually slow due to the inefficiency of applying them to all points.
Range images are native, dense representations, suitable for processing point clouds captured by a single LiDAR. Range-image-based methods [8]}, [9]} perform convolutions directly over the range image in order to extract point cloud features.
Such models scale well with distance but tend to perform less well in occlusion handling, accurate object localization, and size estimation. A second stage, refining a set of initial candidate detections, can help mitigate some of these quality issues, at the expense of significant computational cost.
| i |
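To illustrate the grid-based family mentioned above, the snippet below bins a point cloud into pillars (2D voxels) by integer-dividing x/y coordinates by a cell size; the point cloud, extents, and cell size are made-up placeholders, not values from the cited methods.

import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
points = rng.uniform(-75.0, 75.0, size=(100_000, 3))  # placeholder LiDAR points (x, y, z)

cell = 0.32                                            # assumed pillar size in meters
origin = np.array([-75.0, -75.0])

# Assign each point to a pillar index (ix, iy); a feature extractor such as a
# small PointNet would then encode the points falling into each pillar.
idx = np.floor((points[:, :2] - origin) / cell).astype(int)

pillars = defaultdict(list)
for p, (ix, iy) in zip(points, idx):
    pillars[(ix, iy)].append(p)

print(len(pillars), "non-empty pillars")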
Dataset Card for unarXive IMRaD classification
Dataset Summary
The unarXive IMRaD classification dataset contains 530k paragraphs from computer science papers and the IMRaD section they originate from. The paragraphs are derived from unarXive.
The dataset can be used as follows.
from datasets import load_dataset
imrad_data = load_dataset('saier/unarXive_imrad_clf')
imrad_data = imrad_data.class_encode_column('label') # assign target label column
imrad_data = imrad_data.remove_columns('_id') # remove sample ID column
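Building on the snippet above, the label names and the class distribution of a split can be inspected as follows (a small sketch using the standard datasets API; the exact output depends on the data):
from collections import Counter

print(imrad_data['train'].features['label'].names)  # e.g. ['d', 'i', 'm', 'r', 'w']
print(Counter(imrad_data['test']['label']))         # label distribution on the test split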
Dataset Structure
Data Instances
Each data instance contains the paragraph’s text as well as one of the labels ('i', 'm', 'r', 'd', 'w' — for Introduction, Methods, Results, Discussion and Related Work). An example is shown below.
{'_id': '789f68e7-a1cc-4072-b07d-ecffc3e7ca38',
'label': 'm',
'text': 'To link the mentions encoded by BERT to the KGE entities, we define '
'an entity linking loss as cross-entropy between self-supervised '
'entity labels and similarities obtained from the linker in KGE '
'space:\n'
'\\(\\mathcal {L}_{EL}=\\sum -\\log \\dfrac{\\exp (h_m^{proj}\\cdot '
'\\textbf {e})}{\\sum _{\\textbf {e}_j\\in \\mathcal {E}} \\exp '
'(h_m^{proj}\\cdot \\textbf {e}_j)}\\) \n'}
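For readability, the single-letter codes can be expanded to full section names with a small helper (the mapping follows the label description above; the helper itself is not part of the dataset):
SECTION_NAMES = {'i': 'Introduction', 'm': 'Methods', 'r': 'Results',
                 'd': 'Discussion', 'w': 'Related Work'}

print(SECTION_NAMES['m'])  # Methods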
Data Splits
The data is split into training, development, and testing data as follows.
- Training: 520,053 instances
- Development: 5,000 instances
- Testing: 5,001 instances
Dataset Creation
Source Data
The paragraph texts are extracted from the data set unarXive.
Who are the source language producers?
The paragraphs were written by the authors of the arXiv papers. Author and text licensing information for all samples can be found in the file license_info.jsonl. An example is shown below.
{'authors': 'Yusuke Sekikawa, Teppei Suzuki',
'license': 'http://creativecommons.org/licenses/by/4.0/',
'paper_arxiv_id': '2011.09852',
'sample_ids': ['cc375518-347c-43d0-bfb2-f88564d66df8',
'18dc073e-a48e-488e-b34c-e5fc3cb8a4ca',
'0c2e89b3-d863-4bc2-9e11-8f6c48d867cb',
'd85e46cf-b11d-49b6-801b-089aa2dd037d',
'92915cea-17ab-4a98-aad2-417f6cdd53d2',
'e88cb422-47b7-4f69-9b0b-fbddf8140d98',
'4f5094a4-0e6e-46ae-a34d-e15ce0b9803c',
'59003494-096f-4a7c-ad65-342b74eed561',
'6a99b3f5-217e-4d3d-a770-693483ef8670']}
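A minimal sketch for looking up licensing information by sample ID, assuming license_info.jsonl has been downloaded locally (the file name and record fields follow the example above; the local path is an assumption):
import json

# Build a sample_id -> (authors, license) lookup from license_info.jsonl.
license_lookup = {}
with open('license_info.jsonl', encoding='utf-8') as f:
    for line in f:
        record = json.loads(line)
        for sample_id in record['sample_ids']:
            license_lookup[sample_id] = (record['authors'], record['license'])

print(license_lookup.get('cc375518-347c-43d0-bfb2-f88564d66df8'))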
Annotations
Class labels were automatically determined (see implementation).
Considerations for Using the Data
Discussion of Biases
Because only paragraphs unambiguously assignable to one of the IMRaD classes were used, a certain selection bias is to be expected in the data.
Other Known Limitations
Depending on authors’ writing styles as well as LaTeX processing quirks, paragraphs can vary significantly in length.
Additional Information
Licensing Information
The dataset is released under the Creative Commons Attribution-ShareAlike 4.0 license.
Citation Information
@inproceedings{Saier2023unarXive,
author = {Saier, Tarek and Krause, Johan and F\"{a}rber, Michael},
title = {{unarXive 2022: All arXiv Publications Pre-Processed for NLP, Including Structured Full-Text and Citation Network}},
booktitle = {Proceedings of the 23rd ACM/IEEE Joint Conference on Digital Libraries},
year = {2023},
series = {JCDL '23}
}