Abstracts

Invited Speakers

Complexity of quantifier processing
Jakub Szymanik

I will survey some results of our recent empirical studies on quantifier processing. The starting point of the research was the computational complexity analysis of sentence comprehension. The theory predicts different computational requirements for various quantifiers. I will show how these theoretical predictions are reflected in reaction-time and working-memory experiments with healthy and schizophrenic subjects. Up to now, most of our experimental material has involved precise quantification, but I will discuss how the obtained results could be related to vague quantification. Moreover, I will describe work in progress in which we try to bridge the gap between precise and approximate quantification.
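
As a hedged illustration of the kind of complexity distinction at issue (the automata-theoretic characterization of quantifiers goes back to van Benthem; the code below is my own illustrative sketch in Python, not the experimental material), "every" and "some" can be verified by a memoryless scan over the objects, whereas proportional "most" requires keeping a running count:

    # Illustrative sketch: verifying quantified sentences over a stream of
    # objects, each labelled True (has the property) or False.
    # "every"/"some" need only constant memory (finite automata);
    # proportional "most" needs an unbounded counter (push-down power).

    def verify_every(stream):
        # Finite-state: reject on the first counterexample.
        return all(stream)

    def verify_some(stream):
        # Finite-state: accept on the first witness.
        return any(stream)

    def verify_most(stream):
        # Requires counting: "most" = more than half.
        balance = 0
        for has_property in stream:
            balance += 1 if has_property else -1
        return balance > 0

    print(verify_most([True, True, False]))  # True: 2 of 3 objects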

On the theory of intermediate quantifiers
Vilém Novák

Intermediate quantifiers are represented in natural language by expressions such as most, a large part of, a few, almost all, etc. A detailed semantic analysis of them has been provided by P. L. Peterson in his book [7]. It seems natural to study intermediate quantifiers within the frame of fuzzy (many-valued) logics, because the truth degrees of sentences involving them intuitively change continuously from falsity to truth depending on the cardinalities of the sets of objects considered in their interpretation. Therefore, in this contribution we will focus on their formal theory introduced by V. Novák in [5]. Its main goal was to provide a computational model of their meaning. The formal frame considered is a higher-order fuzzy logic, namely the fuzzy type theory (FTT) introduced in [4].

The main idea behind the formal interpretation of intermediate quantifiers is the observation that intermediate quantifiers are just the classical quantifiers ∀ or ∃ whose universe of quantification is modified using an evaluative linguistic expression (the latter are expressions such as "very small", "roughly big", "more or less medium", etc.). Consequently, intermediate quantifiers are in this theory defined as special formulas of FTT (not as new logical symbols). Hence, all proofs involving intermediate quantifiers are carried out in the basic formal system of FTT. Let us also mention that this theory is closely related to the theory of generalized quantifiers [6] and its formal generalization [1, 2]. Besides a detailed analysis of the meaning of intermediate quantifiers, Peterson's book [7] also presents 105 valid generalized syllogisms. An example of such a syllogism is:
Almost all Y are M
All M are X
Some X are Y

In [3], the validity of all 105 generalized syllogisms was formally proved also in our formal theory, which is a strong argument for its substantiation. We will discuss generalized syllogisms in the second part of this paper.
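
To give a hedged impression of how such quantifiers look as FTT formulas (reconstructed from [5]; the notation and details in the paper may differ), "Almost all B are A" has roughly the shape

    (Q^{\forall}_{Ev}\,x)(B,A) :=
        (\exists z)\bigl((\Delta(z \subseteq B) \wedge (\forall x)(z\,x \Rightarrow A\,x))
        \wedge Ev((\mu B)\,z)\bigr)

read: there is a (crisply delimited) part z of B all of whose elements are A, and the relative size of z within B, measured by \mu, satisfies the evaluative expression Ev, e.g. "extremely big" for "almost all".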

30-Minute Talks

Some cases of vague quantity
Stephanie Solt (ZAS)

In this talk, I will survey some linguistically interesting cases of vagueness in the expression of quantity, including the context-dependent quantifiers many and few, the vague quantifier most, the approximate use of round numbers, and the pragmatically enriched interpretations of modified numerals.  Emphasis will be on examining approaches and mechanisms that may be applied to their formal semantic analysis, and on highlighting open questions where insights from fields beyond linguistics might be fruitfully brought to bear.

Coherent probabilistic quantification, existential import and Aristotelian syllogistics
Niki Pfeifer, Giuseppe Sanfilippo & Angelo Gilio

Aristotelian syllogisms are two-premise arguments which are formalizable in monadic first-order logic. The building blocks are composed of three affirmed or negated predicates, which are universally or existentially quantified. Traditionally, universally quantified statements (e.g., All X are Y) are assumed to be non-empty (i.e., not vacuously true). The resulting requirement (that there is at least one X) is called "existential import". We present intuitions towards a coherence-based probability semantics for Aristotelian syllogisms. We argue why we use the coherence approach to probability, which goes back to de Finetti. We propose new probabilistic notions of existential import and different probabilistic interpretations of quantifiers like "Most" and "Almost all". Finally, we relate the resulting probabilistic syllogisms to Peterson's (2000) frequency semantics of Aristotelian syllogisms. The main goal of our work is to provide a philosophically well-founded and psychologically plausible coherence-based probability semantics for Aristotelian syllogisms that allows for exceptions and takes into account the uncertainty that is almost always present in everyday reasoning.
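
As one hedged illustration of how a probability semantics might read the quantified building blocks (our gloss for orientation only; the precise coherence-based definitions proposed in the talk may differ), with conditional probability as the basic notion:

    \text{All } X \text{ are } Y:\quad p(Y \mid X) = 1,
    \qquad \text{with existential import: } p(X) > 0;
    \\[2pt]
    \text{Most } X \text{ are } Y:\quad p(Y \mid X) > \tfrac{1}{2};
    \qquad
    \text{Almost all } X \text{ are } Y:\quad p(Y \mid X) \geq 1 - \varepsilon .

In the coherence setting, such assessments are required to be coherent in de Finetti's sense rather than derived from one fixed probability space.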

Reasoning with vague quantifiers
Maria Spychalska

In our presentation we discuss reasoning with quantifiers in natural language. We take into account direct inferences, including scalar implicatures, as well as syllogistic reasoning. We investigate quantifiers such as "most" and "some", which are sometimes considered vague. We hypothesize that a typical, pragmatic reading of those quantifiers indeed involves given proportions or quantities, but that this does not exclude the wider, logical meaning (e.g., "most" will typically be used for proportions close to 80%, but it may still be understood as "more than half"). We speculate about how the two meanings are established in language and what roles they play, proposing that the logical meaning is more important in passive language comprehension, while the pragmatic one is more important in active language production. We also briefly present the progress of our work, which aims at an experimental investigation of our hypotheses.

On Hájek's fuzzy quantifiers "probably" and "many"
Petra Cintula

In this talk we review Hájek's study of generalized fuzzy quantifiers, with stress on two prominent examples: "Probably" and "Many". We will consider their motivation, study their syntax and (several kinds of) semantics, and state several completeness theorems. We will also explore the relation of generalized fuzzy quantifiers in Hájek's sense to fuzzy logics with modalities and to the general area of study of so-called "measures of uncertainty".
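
To fix ideas, a hedged sketch of the semantics of "Probably" in this setting (simplified from Hájek's treatment): the truth degree of P\varphi is the probability of the underlying two-valued event, so that graded truth encodes uncertainty,

    \|P\varphi\| = \mu(\{w : w \models \varphi\}),

with the connectives of Łukasiewicz logic then acting on these degrees, e.g. \|P\varphi \to P\psi\| = \min(1,\, 1 - \mu(\varphi) + \mu(\psi)).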

Is there a role for fuzzy logic in linguistics?
Chris Fermüller

Fuzzy logicians usually acknowledge that vagueness is a linguistic phenomenon. Indeed, the literature in fuzzy logic abounds with terms like "linguistic fuzzy modeling", "linguistic variable", and "linguistic hedge". However, none of this literature seems to be well connected to formal semantics of natural language as a sub-discipline of contemporary linguistics. In fact, semanticists often take it for granted that it was shown decades ago, e.g. in a classic paper by Hans Kamp, that truth conditions for sentences involving vague expressions cannot be modeled adequately using fuzzy logic. In this talk we will attempt to sort out some relevant misunderstandings and formulate challenges to both those who reject the use of fuzzy logic in linguistics and those who claim that fuzzy logic is an appropriate tool for modeling vague language.

Common ground and granularity of referring expressions
Raquel Fernández

Speakers can refer to the same entities in many different ways. For instance, a speaker may choose to refer to a meeting time with the expression 'in the morning' or with the more precise, numeric expression 'at 10:30am'. The relevant difference here is one of level of precision, or granularity. The common assumption is that the pragmatic appropriateness of choosing a level of granularity depends on the contextual situation and the purpose at hand. What this assumption exactly entails, however, is often left unexplained in theoretical accounts and has certainly not been sufficiently investigated experimentally. In this talk, I will describe the key objectives and plans of a new joint project with Dale Barr (Glasgow) and Kees van Deemter (Aberdeen), supported by the ESF Euro-XPRAG scheme, that aims at investigating to what extent cooperative speakers choose the level of granularity of their utterances as a function of what they consider to be their common ground with their addressees.

Numbers and vague quantification in Alor Pantar languages: some initial observations
Marian Klamer & Antoinette Schapper

EuroBABEL project: Alor Pantar languages: origins and theoretical impact

One of the topics we are currently investigating in the Alor Pantar (AP) languages (Papuan, eastern Indonesia) is the structure of simple and complex number words and numerical expressions in these languages. Many of the languages show traces of a quinary (base-5) system in the lower cardinals (e.g. Teiwa yes haraq 'five two' = 'seven'). In our talk we first present the structure of simple and complex number words in the AP languages and give some generalisations about them. We address the question of which numbers, if any, may be used to express approximate quantity. Then we present some descriptive data on the use of the quantifiers 'much/many' and 'a little bit/few'. Apart from encoding a mass/count noun distinction, quantifiers may also encode a positive or negative evaluation of quantity, e.g. Teiwa iga' 'many/much (more than expected)'.
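
Purely as an illustrative sketch of the additive quinary pattern mentioned above (the lexicon and composition rule here are hypothetical simplifications; actual AP numeral morphology is richer), in Python:

    # Hypothetical sketch: additive base-5 composition, as in
    # Teiwa 'five two' = 'seven'. Lexicon and rule are illustrative only.
    BASE = {'one': 1, 'two': 2, 'three': 3, 'four': 4, 'five': 5}

    def quinary_value(words):
        # 'five two' -> 5 + 2 = 7
        return sum(BASE[w] for w in words)

    print(quinary_value(['five', 'two']))  # 7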

Color terms and quantities: an experimental account
Marijan Palmović & Gordana Hržica

Pursuing the issue of vagueness and categorization, we expanded our initial approach of dissociating the experimental effects related to vagueness from those related to matching/mismatching categories.

In the first pair of experiments we tried to define the electrophysiological trace of the categorization of color terms in terms of "processing costs". An early frontal negativity was obtained in the mismatch condition (a color term and a color rectangle presented on the computer screen). The same, but smaller, effect was obtained for the matching conditions in which the colors only vaguely matched the color term. More data are necessary to confirm the effects of mismatch and vagueness in the later latencies (400 ms onwards).

In the second pair of experiments we tried to dissociate the Approximate Number System (ANS) from the symbolic number system. In the first experiment the participants had to judge whether two groups of circles shown on the screen differed by more than 3 circles. The numbers of circles covered all ratios from 1:1 to 9:8. The results indicate a left frontal and temporal negativity (F7, T7) in the 300-600 ms range and a parietal positivity in approximately the same interval, but only for small numbers of circles (1 to 3). No difference was found between large numbers of circles, irrespective of the size of the difference (i.e., <3 or >3).

The second experiment was the same as the first one, except for the presentation of the quantities: instead of circles, the participants were shown numerals. The preliminary results show differences in the 400-700 ms interval on the frontal electrodes. A comparison between the two experiments shows stronger activation in the frontal regions in the first experiment, indicating processing costs related to the non-numerical presentation of the stimuli in an experiment that requires subtraction. However, the results were obtained from only 15 participants, so the data need further refinement.

Quantifier use in English and German: an online study
Rasmus Bååth, Uli Sauerland, & Sverker Sikström

We investigate cross-linguistic differences in the use of quantifiers. At this point, we have data from English and German. Quantificational expressions in English and German are similar, but differences exist, especially among indefinite expressions. We present preliminary results from an online survey that tests to what extent quantifier use can be predicted by semantic meaning.

Model Theory for Fuzzy Predicate Languages
Pilar Dellunde

This talk is an introduction to the syntax and semantics of fuzzy predicate logics and to their model theory. Model theory is the branch of mathematical logic that studies the construction and classification of structures. Construction means building families of structures that have some feature that interests us. Classifying a class of structures means grouping the structures into subclasses in a useful way, and then proving that every structure in the collection belongs to exactly one of the subclasses. I will pay special attention to the class of reduced structures and to the role of equality in fuzzy predicate languages, discussing the possibility of expressing quantities by means of sentences with the equality symbol.
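
As a hedged illustration of expressing quantities with equality (the classical first-order pattern; in the fuzzy setting the truth degree of such a sentence additionally depends on how the equality symbol is interpreted), "there are at least n elements" can be written as

    \varphi_{\geq n} := \exists x_1 \cdots \exists x_n
        \bigwedge_{1 \leq i < j \leq n} \neg (x_i = x_j).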

Modelling the pragmatic effects of approximation
Chris Cummins

Approximate and vague numerical expressions of quantity convey a limited amount of semantic information, but recent experiments suggest that this can also be pragmatically enriched.  In this presentation, I first discuss data concerning the scalar implicatures arising from comparative quantifiers ("more/fewer than"). I consider the potential role of a multiple constraint satisfaction model in accounting for these enrichments, and the relation of this model to general considerations of communicative relevance.  I then consider the pragmatic effects associated with the use of vague or approximative numerical quantifiers, with particular reference to how the hearer can be expected to infer the knowledge state of the speaker.  Finally, I suggest how these effects can be modelled by the constraint-based approach, and consider the predictions made by this model about the interpretation of such expressions.
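
A minimal sketch of the kind of multiple-constraint model at issue (the constraint names, weights, and scoring below are my hypothetical illustration, not the model's actual parameters): each candidate expression is scored by a weighted sum of constraint violations, and the speaker prefers the lowest-cost candidate.

    # Hypothetical sketch of constraint-based choice among numerical
    # quantifiers; constraints and weights are invented for illustration.
    WEIGHTS = {'informativeness': 2.0, 'granularity': 1.0, 'salience': 0.5}

    def cost(candidate):
        # Weighted sum of (graded) constraint violations.
        return sum(WEIGHTS[c] * v for c, v in candidate['violations'].items())

    candidates = [
        {'expr': 'more than 90',
         'violations': {'informativeness': 0.2, 'granularity': 0.0, 'salience': 0.0}},
        {'expr': 'more than 93',
         'violations': {'informativeness': 0.0, 'granularity': 1.0, 'salience': 1.0}},
    ]
    print(min(candidates, key=cost)['expr'])  # 'more than 90' under these weights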

Vagueness at all orders
Denis Bonnay

In the talk, I will examine several accounts of vagueness understood as a form of epistemic uncertainty, that is, essentially, as deriving from our limited powers of discrimination, and I will discuss how this first-order uncertainty may, or may not, propagate to higher orders. In particular, different ways of making sense of the idea of limited higher-order vagueness will be presented in connection with the notion of clarity. A side goal will be to provide more realistic formal representations of vagueness by connecting margin-for-error models and the signal detection model used in psychophysics.
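
For concreteness, a hedged sketch of the margin-for-error picture (Williamson-style; the exact models discussed in the talk may differ): with margin \varepsilon, "clearly \varphi" holds at a point x only if \varphi holds everywhere within \varepsilon of x, and iterating "clearly" shrinks the region further, which is one way first-order uncertainty propagates upward:

    x \models C\varphi \iff \forall y\,(|x - y| \leq \varepsilon \Rightarrow y \models \varphi),
    \qquad
    x \models CC\varphi \iff \forall y\,(|x - y| \leq 2\varepsilon \Rightarrow y \models \varphi).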

Precision, vagueness, scales and the Back-Down Phenomenon
Alan Bale

In principle, there is a difference between an underlyingly vague semantic representation and a precise representation that is in practice treated imprecisely (whether through the implementation of verification procedures like estimation, or through deliberate imprecision in order to achieve a conversational effect). However, this difference is often difficult to detect empirically, which can lead to confusion, especially with scales and notions of granularity. This issue is of utmost importance for semanticists: the details of imprecise usage lie beyond core semantic theory, whereas the details of vague semantic representations are central to it. In this talk, I will review one potential empirical means of detecting this difference (namely the Back-Down Phenomenon) and apply it to sentences that are thought to involve measurement and scales. I will attempt to demonstrate that some linguistic data that have been treated as examples of vague representations are best classified as examples of imprecise usage.

Contextual models of vagueness and vague quantifiers
Christoph Roschger

We discuss the expressiveness of degree- and delineation-based models of vagueness, highlighting the approaches of Barker and Kyburg/Morreau. We show in which respects such approaches are equivalent and outline how they can be combined. Moreover, we investigate possibilities for extending these approaches to vague quantifiers and point out possible implications of these extensions.
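
As a hedged illustration of one well-known bridge between the two kinds of model (a simplification of the sort of correspondence at issue, not necessarily the construction used in the talk): the degree to which x counts as "tall" can be recovered as the proportion of admissible delineations (cutoff points) under which x is tall.

    # Illustrative sketch: recovering degrees from delineations.
    # The set of admissible cutoffs for 'tall' (in cm) is invented.
    cutoffs = range(170, 191)  # delineations: thresholds 170..190 cm

    def degree_tall(height_cm):
        # Fraction of delineations under which the individual counts as tall.
        return sum(height_cm >= t for t in cutoffs) / len(cutoffs)

    print(degree_tall(185))  # high degree
    print(degree_tall(172))  # low degree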

Approximate number and the meaning of "most"
Justin Halberda

I present evidence for an interface between vision, numerical cognition, and the semantics of quantifier terms. The goal is to highlight a case where non-visual cognition (lexical meanings) interfaces with vision, and where visual limits (on tracking multiple sets) constrain later cognition. Along the way I discuss the Approximate Number System, vagueness, and the (non-)relation between meaning and verification.

Other