Logic in Computer Science
- [1] arXiv:2405.19968 [pdf, ps, html, other]
  Title: A Dynamic Logic for Information Evaluation in Intelligence
  Subjects: Logic in Computer Science (cs.LO)
In the field of human intelligence, officers use an alphanumeric scale, known as the Admiralty System, to rate the credibility of messages and the reliability of their sources (NATO AJP-2.1, 2016). During this evaluation, they are expected to estimate the credibility and reliability dimensions independently of each other (NATO STANAG, 2003). However, empirical results show that officers perceive these dimensions as strongly correlated (Baker et al., 1968). More precisely, they consider credibility as playing the leading role, with reliability being only of secondary importance (Samet, 1975). In this paper, we present a formal evaluative procedure, called L(intel), in line with these findings. We adapt dynamic belief revision to make credibility the main dimension of evaluation and introduce dynamic operators to update credibility ratings with the source's reliability. We show that, in addition to being empirically sound, L(intel) provides an effective procedure for classifying intelligence messages according to the descriptive taxonomy presented in Icard (2023).
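The two Admiralty dimensions can be pictured with a toy sketch. The update rule below, and the names `RELIABILITY` and `update_credibility`, are purely illustrative assumptions of this digest, not the L(intel) semantics; the sketch only shows the general shape of revising a credibility rating by a source's reliability grade.

```python
# Toy sketch only: the scales follow the Admiralty System (reliability
# A-F, credibility 1-6), but the update rule is invented here; this is
# NOT the L(intel) procedure, just an illustration of revising
# credibility with reliability.

RELIABILITY = {"A": 0, "B": 1, "C": 2, "D": 3, "E": 4, "F": 5}  # A = most reliable

def update_credibility(credibility: int, reliability: str) -> int:
    """Downgrade a credibility rating (1 = confirmed ... 6 = cannot be
    judged) in proportion to how unreliable the source is (toy rule)."""
    penalty = RELIABILITY[reliability] // 2  # A,B -> 0; C,D -> 1; E,F -> 2
    return min(credibility + penalty, 6)

print(update_credibility(2, "A"))  # reliable source, rating kept -> 2
print(update_credibility(2, "E"))  # unreliable source, downgraded -> 4
```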
- [2] arXiv:2405.20057 [pdf, ps, other]
  Title: A General Automata Model for First-Order Temporal Logics (Extended Version)
  Subjects: Logic in Computer Science (cs.LO)
First-order linear temporal logic (FOLTL) is a flexible and expressive formalism capable of naturally describing complex behaviors and properties. Although the logic is in general highly undecidable, the idea of using it as a specification language for the verification of complex infinite-state systems is appealing. However, one piece is missing: an automaton model capable of capturing the logic, a tool that has proved invaluable for other temporal logics. In this paper we address this issue by defining and studying such a model, which we call the first-order automaton. We define this very general class of automata and the corresponding notion of regular first-order language, showing their closure under the most common language-theoretic operations. We show how they can capture any FOLTL formula over any signature and theory, and we provide sufficient conditions for the semi-decidability of their non-emptiness problem. Then, to demonstrate the usefulness of the formalism, we give a simpler, direct proof of the decidability of monodic FOLTL, a classic result in the literature.
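As background (the definition is standard in the literature, not taken from this paper): a FOLTL formula is *monodic* when every subformula beginning with a temporal operator has at most one free variable. A typical monodic specification:

```latex
% Every request is eventually granted. The temporal subformula
% \Diamond\,\mathit{grant}(x) has the single free variable x,
% so the formula is monodic.
\Box\, \forall x\, \bigl( \mathit{req}(x) \rightarrow \Diamond\, \mathit{grant}(x) \bigr)
```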
- [3] arXiv:2405.20083 [pdf, ps, other]
  Title: Tachis: Higher-Order Separation Logic with Credits for Expected Costs
  Authors: Philipp G. Haselwarter, Kwing Hei Li, Markus de Medeiros, Simon Oddershede Gregersen, Alejandro Aguirre, Joseph Tassarotti, Lars Birkedal
  Subjects: Logic in Computer Science (cs.LO); Programming Languages (cs.PL)
We present Tachis, a higher-order separation logic to reason about the expected cost of probabilistic programs. Inspired by the uses of time credits for reasoning about the running time of deterministic programs, we introduce a novel notion of probabilistic cost credit. Probabilistic cost credits are a separation logic resource that can be used to pay for the cost of operations in programs, and that can be distributed across all possible branches of sampling instructions according to their weight, thus enabling us to reason about expected cost. The representation of cost credits as separation logic resources gives Tachis a great deal of flexibility and expressivity. In particular, it permits reasoning about amortized expected cost by storing excess credits as potential into data structures to pay for future operations. Tachis further supports a range of cost models, including running time and entropy usage. We showcase the versatility of this approach by applying our techniques to prove upper bounds on the expected cost of a variety of probabilistic algorithms and data structures, including randomized quicksort, hash tables, and meldable heaps.
All of our results have been mechanized using Coq, Iris, and the Coquelicot real analysis library.
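The credit accounting can be illustrated informally. Tachis itself is a separation logic mechanized in Coq/Iris, not a library; the Python sketch below (names and rule my own) only mirrors the underlying arithmetic: the credits needed before a sampling instruction are distributed over its branches according to their probability weights.

```python
import math

# Illustrative accounting only, not the Tachis logic: the credits
# required up front for one sampling step equal the weighted sum of the
# credits each branch will consume.

def expected_cost(branches):
    """branches: list of (probability, cost) pairs for one sampling
    step; returns the credits needed up front, i.e. the expected cost."""
    assert math.isclose(sum(p for p, _ in branches), 1.0)
    return sum(p * c for p, c in branches)

# A fair coin whose heads branch costs 1 operation and tails branch 3:
print(expected_cost([(0.5, 1), (0.5, 3)]))  # -> 2.0
```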
New submissions for Friday, 31 May 2024 (showing 3 of 3 entries)
- [4] arXiv:2405.19354 (cross-list from math.GM) [pdf, ps, other]
  Title: Rotations of Gödel algebras with modal operators
  Subjects: General Mathematics (math.GM); Artificial Intelligence (cs.AI); Logic in Computer Science (cs.LO)
This paper studies the effect of connected and disconnected rotations of Gödel algebras with operators grounded on directly indecomposable structures. The structures resulting from the construction we present are nilpotent minimum algebras (with or without negation fixpoint, depending on whether the rotation is connected or disconnected) with special modal operators defined on a directly indecomposable algebra. We give a (quasi-)equational definition of these latter structures. Our main results show that directly indecomposable nilpotent minimum algebras (with or without negation fixpoint) with modal operators are fully characterized as connected and disconnected rotations of directly indecomposable Gödel algebras endowed with modal operators.
- [5] arXiv:2405.19787 (cross-list from cs.CL) [pdf, ps, html, other]
  Title: From Symbolic Tasks to Code Generation: Diversification Yields Better Task Performers
  Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Logic in Computer Science (cs.LO); Programming Languages (cs.PL)
Instruction tuning -- tuning large language models on instruction-output pairs -- is a promising technique for making models better adapted to the real world. Yet, the key factors driving the model's ability to understand and follow instructions not seen during training remain under-explored. Our investigation begins with a series of synthetic experiments within the theoretical framework of Markov algorithms, a Turing-complete model of computation, which allows fine-grained control over the instruction-tuning data. Generalization and robustness with respect to the training distribution emerge once a sufficiently diverse set of tasks is provided, even when only a few examples are given per task. We extend these initial results to the real-world application scenario of code generation and find that a more diverse instruction set, extending beyond code-related tasks, improves the performance of code generation. Our observations suggest that a more diverse semantic space for instruction-tuning sets greatly improves the model's ability to follow instructions and perform tasks.
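For readers unfamiliar with the formalism: a Markov algorithm is an ordered list of rewrite rules over strings; each step applies the first rule whose pattern occurs, rewriting the leftmost occurrence, and halts on a terminal rule or when no rule applies. A minimal interpreter (my own sketch; it does not reproduce the paper's instruction-tuning setup) looks like:

```python
# Minimal Markov algorithm interpreter. Rules are tried in order; the
# first applicable one rewrites the leftmost occurrence of its pattern.

def run_markov(rules, s, max_steps=10_000):
    """rules: ordered list of (pattern, replacement, is_terminal)."""
    for _ in range(max_steps):
        for pat, rep, terminal in rules:
            if pat in s:
                s = s.replace(pat, rep, 1)  # leftmost occurrence only
                if terminal:
                    return s
                break
        else:
            return s  # no rule applies: the algorithm halts
    raise RuntimeError("step budget exhausted")

# One-rule "task": turn every 'a' into 'b'.
print(run_markov([("a", "b", False)], "aaa"))  # -> bbb
```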
- [6] arXiv:2405.20263 (cross-list from math.CO) [pdf, ps, html, other]
  Title: An algebraic proof of the graph orientation problem dichotomy for forbidden tournaments
  Comments: 17 pages
  Subjects: Combinatorics (math.CO); Computational Complexity (cs.CC); Logic in Computer Science (cs.LO); Rings and Algebras (math.RA)
Using the theory of smooth approximations, we give a new and algebraic proof that for any set F of finite tournaments, the problem of orienting a finite graph whilst avoiding all members of F is either in P or NP-complete. We characterize both cases by algebraic conditions.
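To make the problem statement concrete (this brute-force search is my own illustration; the paper's contribution is the algebraic P/NP-complete dichotomy, not an algorithm like this), take F to contain just the directed 3-cycle:

```python
from itertools import permutations, product

# Exponential brute force over all orientations of the input graph,
# checking whether some orientation avoids every directed 3-cycle.

def orientable_without_directed_triangle(vertices, edges):
    """Return True iff some orientation of the undirected `edges`
    contains no directed 3-cycle."""
    for flips in product([False, True], repeat=len(edges)):
        arcs = {(v, u) if f else (u, v) for (u, v), f in zip(edges, flips)}
        if not any((a, b) in arcs and (b, c) in arcs and (c, a) in arcs
                   for a, b, c in permutations(vertices, 3)):
            return True
    return False

# A triangle can always be oriented acyclically:
print(orientable_without_directed_triangle([0, 1, 2],
                                           [(0, 1), (1, 2), (0, 2)]))  # -> True
```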
Cross submissions for Friday, 31 May 2024 (showing 3 of 3 entries)
- [7] arXiv:2210.05491 (replaced) [pdf, ps, html, other]
  Title: Strong negation in the theory of computable functionals TCF
  Subjects: Logic (math.LO); Logic in Computer Science (cs.LO)
We incorporate strong negation into the theory of computable functionals TCF, a common extension of Plotkin's PCF and Gödel's system $\mathbf{T}$, by simultaneously defining the strong negation $A^{\mathbf{N}}$ of a formula $A$ and the strong negation $P^{\mathbf{N}}$ of a predicate $P$ in TCF. As special cases of the latter, we obtain the strong negation of inductive and coinductive predicates of TCF. We prove appropriate versions of Ex falso quodlibet and of double negation elimination for strong negation in TCF. We introduce the so-called tight formulas of TCF, i.e., formulas implied by the weak negation of their strong negation, as well as the relative tight formulas. We present various case studies and examples that reveal the naturality of our definition of strong negation in TCF and justify the use of TCF as a formal system for a large part of Bishop-style constructive mathematics.
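For orientation, the standard clauses for strong negation on compound formulas (in the Nelson/Rasiowa tradition; the simultaneous TCF definition for predicates is more involved and not reproduced here) are:

```latex
\begin{align*}
(A \land B)^{\mathbf{N}} &= A^{\mathbf{N}} \lor B^{\mathbf{N}} &
(A \lor B)^{\mathbf{N}} &= A^{\mathbf{N}} \land B^{\mathbf{N}} \\
(A \to B)^{\mathbf{N}} &= A \land B^{\mathbf{N}} &
(\forall x\, A)^{\mathbf{N}} &= \exists x\, A^{\mathbf{N}} \\
& & (\exists x\, A)^{\mathbf{N}} &= \forall x\, A^{\mathbf{N}}
\end{align*}
```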
- [8] arXiv:2306.03523 (replaced) [pdf, ps, html, other]
  Title: Inconsistency Handling in Prioritized Databases with Universal Constraints: Complexity Analysis and Links with Active Integrity Constraints
  Comments: This is an extended version of a paper appearing at the 20th International Conference on Principles of Knowledge Representation and Reasoning (KR 2023). This version fixes an error in Table 1 (case of subset repairs w.r.t. denial constraints). 28 pages
  Subjects: Databases (cs.DB); Artificial Intelligence (cs.AI); Logic in Computer Science (cs.LO)
This paper revisits the problem of repairing and querying inconsistent databases equipped with universal constraints. We adopt symmetric difference repairs, in which both deletions and additions of facts can be used to restore consistency, and suppose that preferred repair actions are specified via a binary priority relation over (negated) facts. Our first contribution is to show how existing notions of optimal repairs, defined for simpler denial constraints and repairs solely based on fact deletion, can be suitably extended to our richer setting. We next study the computational properties of the resulting repair notions, in particular, the data complexity of repair checking and inconsistency-tolerant query answering. Finally, we clarify the relationship between optimal repairs of prioritized databases and repair notions introduced in the framework of active integrity constraints. In particular, we show that Pareto-optimal repairs in our setting correspond to founded, grounded and justified repairs w.r.t. the active integrity constraints obtained by translating the prioritized database. Our study also yields useful insights into the behavior of active integrity constraints.
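Symmetric-difference repairs can be illustrated on a toy instance (my own sketch; the priority relation and the paper's optimality notions are not modelled). Here the single universal constraint says every fact ("r", x) must be accompanied by ("s", x), so consistency can be restored either by deleting ("r", x) or by adding ("s", x):

```python
from itertools import combinations

# Toy symmetric-difference repairs: candidate repairs may both delete
# and add facts; we keep those whose symmetric difference with the
# original database is subset-minimal.

def consistent(db):
    """Constraint: each ("r", x) fact requires the fact ("s", x)."""
    return all(("s", x) in db for (p, x) in db if p == "r")

def minimal_repairs(db):
    universe = db | {("s", x) for (p, x) in db if p == "r"}
    candidates = [set(c) for r in range(len(universe) + 1)
                  for c in combinations(sorted(universe), r)]
    repairs = [c for c in candidates if consistent(c)]
    return [c for c in repairs
            if not any(c2 != c and (c2 ^ db) < (c ^ db) for c2 in repairs)]

# {("r", 1)} is inconsistent; its two minimal repairs delete ("r", 1)
# or add ("s", 1).
for rep in minimal_repairs({("r", 1)}):
    print(sorted(rep))
```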
- [9] arXiv:2402.05680 (replaced) [pdf, ps, html, other]
  Title: Interpretable classifiers for tabular data via discretization and feature selection
  Comments: Changes in relation to version 1: more thorough and detailed experiments, general corrections and refinements
  Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Logic in Computer Science (cs.LO)
We introduce a method for computing immediately human-interpretable yet accurate classifiers from tabular data. The classifiers obtained are short Boolean formulas, computed by first discretizing the original data and then applying feature selection coupled with a very fast algorithm that produces the best possible Boolean classifier for the setting. We demonstrate the approach via 13 experiments, obtaining accuracies comparable to those of random forests, XGBoost, and existing results for the same datasets in the literature. In most cases, the accuracy of our method is in fact similar to that of the reference methods, even though the main objective of our study is the immediate interpretability of our classifiers. We also prove a new result on the probability that the classifier we obtain from real-life data corresponds to the ideally best classifier with respect to the background distribution the data comes from.
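The pipeline shape (discretize, then find a short, accurate Boolean formula) can be sketched as follows; this is my own illustration with an exhaustive search over small conjunctions, not the paper's algorithm or its feature-selection step:

```python
from itertools import combinations, product
from statistics import median

# Illustrative pipeline: binarize numeric columns by median thresholds,
# then exhaustively search short conjunctions of (possibly negated)
# binary features for the most accurate one.

def discretize(rows):
    """Binarize each numeric column by thresholding at its median."""
    thresholds = [median(col) for col in zip(*rows)]
    return [tuple(x > t for x, t in zip(row, thresholds)) for row in rows]

def best_conjunction(X, y, max_literals=2):
    """Return (literals, accuracy) for the best conjunction of at most
    max_literals literals; a literal is a (feature, required_sign) pair."""
    best_lits, best_acc = [], 0.0
    for k in range(1, max_literals + 1):
        for feats in combinations(range(len(X[0])), k):
            for signs in product([True, False], repeat=k):
                pred = [all(x[f] == s for f, s in zip(feats, signs))
                        for x in X]
                acc = sum(p == t for p, t in zip(pred, y)) / len(y)
                if acc > best_acc:
                    best_lits, best_acc = list(zip(feats, signs)), acc
    return best_lits, best_acc

# Toy data: the label is 1 exactly when the first feature is large.
X = discretize([(1.0, 5.0), (2.0, 1.0), (8.0, 4.0), (9.0, 0.0)])
lits, acc = best_conjunction(X, [0, 0, 1, 1])
print(lits, acc)  # -> [(0, True)] 1.0
```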