Colloquium Logicum 2014

4-6 September 2014
Munich, Germany

The Colloquium Logicum is organized every two years by the DVMLG, the German association of logicians working in mathematics, philosophy and theoretical computer science.

In 2014 the conference will take place at the Universität der Bundeswehr München in Neubiberg, in the vicinity of Munich.


The conference will cover the whole range of mathematical logic and the foundations of the exact sciences, in particular, logic in philosophy, computer science and artificial intelligence.

Besides the regular scientific programme, it is planned to include a PhD Colloquium with invited presentations by excellent recent PhD graduates.

The programme committee invites the submission of abstracts of papers from all fields of research covered by the DVMLG. Abstracts should be between 100 and 500 words and are to be submitted via the EasyChair submission page (see information on the right-hand side). Authors of papers will be notified about acceptance soon after the submission deadline.

The Colloquium Logicum 2014 is generously funded by the Deutsche Forschungsgemeinschaft (LO 834/11-1).

Conference Venue, Accommodation, Directions

Regular participants need to organize their own accommodation!
Invited Speakers

PhD Colloquium Speakers

Programme Committee

Submissions

Local Organizing Team

Vasco Brattka (Chair), Birgit Elbl, Guido Gherardi,
Peter Hertling, Gisela Krommes, Tahina Rakotoniaina, Christoph Spandl.

Contact


Institute for Theoretical Computer Science, Mathematics and OR
Faculty of Computer Science
Universität der Bundeswehr München
Werner-Heisenberg-Weg 39
85577 Neubiberg, Germany

Schedule



Abstracts of Accepted Papers

Jesse Alama. How much do I need you? Let me count the ways
Abstract: If f is a theorem of an axiom system S, there exists at least one subset X of S such that f is derivable from X but not from any proper subset of X. Every member φ of such an X can thus be said to be needed for proving f. But given that φ is needed to prove f, one might reasonably wonder how much, intuitively, one needs φ. The idea that a statement is used a certain number of times in a proof receives some intuitive content from everyday mathematical experience. It also seems to be an attractive target for automated theorem provers. We aim to provide a formal analysis that helps to measure how much an axiom is needed (assuming that the axiom is indeed needed for deriving a certain theorem relative to a certain set of premises). Some formal analyses of the informal notion of "need" are shown, despite their initial plausibility, to be inadequate, while other analyses remain plausible and can, further, be (to some extent) automated. Concrete examples of the techniques are provided, illustrating them for a variety of theorems in specific axiomatic theories.
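In symbols, the setup sketched in the first sentences can be rendered roughly as follows (a minimal formalisation; the notation is not fixed by the abstract):
\[
X \subseteq S \text{ is minimal for } f \;:\Longleftrightarrow\; X \vdash f \ \text{ and } \ Y \nvdash f \text{ for every } Y \subsetneq X,
\]
and an axiom $\varphi$ is needed for $f$ relative to $S$ just in case $\varphi$ belongs to some such minimal $X$.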
Geghard Bedrosian and Frederik Herzberg. Ultraproduct construction of representative utility functions with infinite-dimensional domain
Abstract: This paper applies model theory to macroeconomic theory. In mathematical models of macroeconomic theory, the hypothesis of a ``representative agent'' is ubiquitous, but the search for a rigorous justification has so far been unsuccessful and was ultimately abandoned until very recently. Herzberg (2010) constructed a representative utility function for finite-dimensional social decision problems, based on a bounded ultrapower construction over the reals, with respect to the ultrafilter induced by the underlying social choice function (via the Kirman--Sondermann (1972) correspondence). However, since the decision problems of macroeconomic theory are typically infinite-dimensional, Herzberg's original result is insufficient for many applications. We therefore generalise his result by allowing the social alternatives to belong to a general reflexive Banach space $W$; in addition to known results from convex analysis, our proof uses a nonstandard enlargement of the superstructure over $W\cup\mathbb{R}$, obtained by a bounded ultrapower construction with respect to the Kirman--Sondermann ultrafilter.
Javier Legris. Existential Graphs as a Framework for Introducing Logical Constants and Logical Systems
Abstract: Charles S. Peirce conceived his diagrammatic logic of the Existential Graphs (EGs) as the best formulation of logical systems, according to his iconic conception of logic and his idea of “exact logic”. Consequently, they constituted a general methodology for the mathematical treatment of deduction based on notions from topology. Thus, the EGs opened an entirely new direction in symbolic logic. Peirce formulated different diagrammatic systems for the EGs that have been intensively studied and expanded in recent years.
In this paper a new interpretation of Peirce’s EGs is introduced by applying them as a framework for defining logical constants and logical systems. This interpretation is coherent with the idea of the ‘productive ambiguity’ of diagrammatic representation implied in Peirce’s own conception. Diagrams are open to different interpretations because of their purely structural nature. Furthermore, this interpretation attempts to build a bridge between Peirce’s EGs and the current perspective in logic of Structural Reasoning. According to this, EGs can be properly understood as a kind of Structural Reasoning, in the sense used by Peter Schroeder-Heister and originating in Gerhard Gentzen’s ideas. Instead of using sequent style systems, the EGs can be used in order to formulate general properties of deduction and to define logical concepts. Instead of combinatorial analysis and recursion (as in the case of sequent systems), EGs can be studied by topology. In this sense, the EGs can be interpreted as an account of logical structures. More specifically, the ‘scroll’ formulated by Peirce in early formulations of the EGs will be used to express an implication structure.
The main claims of the paper can be summarized as follows: (1) diagrams can be used in two different senses: (a) as an intuitive way to express the nature of deduction in terms of diagrams, (b) as a mathematical tool to characterize logical constants of different logic systems. (2) The theory for formulating logic systems evolves from a coherent philosophical perspective, so that there is a clear correspondence between logical theory (including a topological framework) and philosophical theory (the iconic nature of deduction). (3) The philosophical theory does not make deduction and logical concepts dependent on ordinary language.
Ralf Schindler, Sandra Uhlenbrock and W. Hugh Woodin. Producing Iterable Models with Woodin Cardinals from Optimal Determinacy Hypotheses
Abstract: Projective determinacy is the statement that for certain infinite games, where the winning condition is projective, there is always a winning strategy for one of the two players. It has many nice consequences which are not decided by ZFC alone, e.g. that every projective set of reals is Lebesgue measurable. An old, so far unpublished, result by W. Hugh Woodin is that one can derive specific countable iterable models with Woodin cardinals from this assumption. Work by Itay Neeman shows the converse direction, i.e. projective determinacy is in fact equivalent to the existence of such models. These results connect the areas of inner model theory and descriptive set theory. We will give an overview of the relevant topics in both fields and, if time allows, sketch a proof of Woodin's result.
Merlin Carl. Algorithmic Randomness for Infinitary Machines
Abstract: During the last decade, a number of machine models of transfinite computability have been proposed and studied, such as Infinite Time Turing Machines, Infinite Time Register Machines and Ordinal Turing Machines.
We consider algorithmic randomness for machine models of infinitary computations. We show that a theorem of Sacks, according to which a real $x$ is computable from a randomly chosen real with positive probability iff it is recursive, holds for many of these models, but is independent of $ZFC$ for ordinal Turing machines. Furthermore, we define an analogue of Martin-Löf-randomness for Infinite Time Register Machines and show that some classical results like van Lambalgen's theorem continue to hold.
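For orientation, the theorem of Sacks mentioned above is usually stated as follows (standard formulation, with $\mu$ the uniform measure on Cantor space):
\[
\mu(\{\, y \in 2^{\omega} : x \leq_T y \,\}) > 0 \iff x \text{ is computable}.
\]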
Roman Murawski. Philosophy of logic and mathematics in Warsaw School (the case of Tarski and Mostowski)
Abstract: The aim of the talk is to analyze the main trends and tendencies, the main standpoints and views in the philosophical reflection on mathematics and logic in the Warsaw School between the wars.

This was the time of intensive development of mathematics (Polish Mathematical School) and of logic (Warsaw Logical School, Lvov-Warsaw Philosophical School) in Poland. Hence a natural question arises whether this development of mathematics and logic was accompanied by philosophical reflection on those disciplines, and whether the research was founded on and stimulated by certain fixed philosophical presuppositions. On the other hand, philosophy of mathematics and logic is based on and uses certain results of metamathematics, of the foundations of mathematics and of logic. Did logical achievements influence the philosophical reflection?

The philosophical views of the main logicians and mathematicians (among them Tarski and Mostowski) will be presented and analyzed.

We come to the conclusion that Polish logicians and mathematicians, being convinced of the importance of philosophical problems and knowing the current philosophical trends quite well, treated logic and mathematics as autonomous disciplines, independent of philosophical reflection on them and of any philosophical presuppositions. Therefore they sharply separated mathematical and logical research practice from philosophical discussions concerning logic and mathematics. Philosophical views and opinions were treated as a “private” matter that should not influence the mathematical and metamathematical investigations. On the contrary, in the latter all correct methods could and should be used. This “methodological Platonism” enabled Polish logicians and mathematicians to work in various areas without being preoccupied by philosophical dogmas. In controversial cases, as for example in the case of the axiom of choice in set theory, their attitude can be characterized as neutral: without making any philosophical declarations they simply considered and studied various mathematical consequences of both accepting and rejecting the controversial principles and investigated their role in mathematics.
Edoardo Rivello. Revision without Choice
Abstract: Revision sequences were introduced in 1982 by Herzberger and Gupta (independently) as a mathematical tool in developing what has since become known as "The revision theory of Truth". Later, the same tool was applied to other areas of logic and philosophy, like the theory of definitions, the analysis of circular concepts, the theory of rational belief, and others. In the original and informal presentation of the theory, the revision sequences are ordinal-length sequences, namely operations defined on the proper class of all ordinal numbers. However, it is a common feature of most revision theories proposed in the literature that the same theory can be obtained by quantifying over the class of all limit-length revision sequences, thus providing a formalisation of the theory in ZFC. Further, for a given theory, McGee's theorem, by an application of the Löwenheim-Skolem theorem, establishes an upper bound for the length of the revision sequences we need to quantify over.
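For readers unfamiliar with the framework, one common (Herzberger-style) way of spelling out a revision sequence for a revision operator $\delta$ is the following sketch; Gupta's and Belnap's theories differ mainly in the limit clause:
\[
s_{\alpha+1} = \delta(s_{\alpha}), \qquad x \in s_{\lambda} \iff \exists \beta < \lambda\ \forall \gamma\, (\beta \leq \gamma < \lambda \rightarrow x \in s_{\gamma}) \quad \text{for limit } \lambda.
\]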

McGee's theorem and analogous or related results are usually proved by invoking some amount of the Axiom of Choice, in the form of the Löwenheim-Skolem theorem or another. In this talk I will present a general method for reproving such results without using any form of the Axiom of Choice. The method applies to a wide range of revision theories encompassing Herzberger's, Gupta's and Belnap's original ones.

The talk is based on the following recent works (preprints available on academia.edu):
E. Rivello, Stability-Dependent Revision, submitted, 2013.
E. Rivello, Revision without the Axiom of Choice, submitted, 2014.
Thilo Weinert. On Partitioning Linearly Ordered Quadruples in Canonical Non-Choice-Contexts
Abstract: See pdf!
Franziska Jahnke. Definable Henselian Valuations
Abstract: In a henselian valued field, many arithmetic or algebraic questions can be reduced, via the henselian valuation, to simpler questions about the value group and the residue field. By the celebrated Ax-Kochen/Ershov Principle, in fact, if the residue characteristic is 0, `everything' can be so reduced:
the 1st-order theory of a henselian valued field (as a valued field) is fully determined by the 1st-order theory of its value group (as an ordered abelian group) and of its residue field (as a pure field).
Conversely, in many cases, a henselian valuation is so intrinsic to a field that it is itself encoded in the arithmetic of the field, i.e. its valuation ring is a 1st-order parameter-free definable subset of the field (in the language of rings). In this talk, we investigate the question of when a henselian valued field admits such a non-trivial 0-definable henselian valuation.
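In one standard formulation, the Ax-Kochen/Ershov principle alluded to above says that for henselian valued fields $(K,v)$ and $(L,w)$ of residue characteristic 0,
\[
(K,v) \equiv (L,w) \iff Kv \equiv Lw \ \text{(as fields)} \ \text{ and } \ vK \equiv wL \ \text{(as ordered abelian groups)}.
\]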

Clearly, separably closed or real closed fields admit no non-trivial definable valuations, and, by the work of Prestel and Ziegler, there are further examples of henselian valued fields which do not admit a 0-definable non-trivial henselian valuation. We give conditions on the residue field which ensure the existence of a parameter-free definition. Using recent work of Prestel, we deduce that in every case discussed in the talk the formula can be chosen to be either existential-universal or universal-existential.

The material presented includes joint work with both Jochen Koenigsmann and Arno Fehm.
Elena Tatievskaya. Symbol and Rule: Cassirer’s treatment of Kant’s Schematism
Abstract: Cassirer (The Philosophy of Symbolic Forms (1923-1929)) regards cognition as symbolizing that follows rules. Such rules are embodied in systems of meaning invariants used to order objects and govern scientific as well as pre-scientific cognition, language and art. Cassirer compares symbols with Kant’s transcendental schemata that mediate between categories and sensible appearances and uses the terms “schema” and “rule” as interchangeable. Hence it may be asked what the rule-like function of the schema consists in and whether this notion therefore could help to clarify Cassirer’s concept of a symbol and of how rules of symbolizing work.
I argue that transcendental schemata can be regarded as belonging to the rules of understanding that allow of judging. Using Krausser’s definition of the topological structure of the intuition of time („Kant’s Schematism of the Categories and the Problem of Pattern Recognition“, Synthese 33, 1976, 175-192) which presupposes the relations of simultaneity and following between different elements of a sensibly intuited manifold I propose to represent principles of pure understanding and so-called “schema-judgments” or “criteria” that connect schemata with categories in such a way that the principles define possible antecedents of the propositions that fix such criteria. The consequents of such propositions involve the expression of the form of the judgment that becomes possible due to the application of the schema. Thus a transcendental schema works as a rule according to which the category represented by the schema defines some logical property of the judgment that concerns the apprehended object. Such a property could be the quantity, quality, form or modality of the judgment.
If we take for example the categories of quantity and consider the number as “Anzahl” of the elements of a sensibly given manifold of which some property is predicated in a judgment the general principle of the axioms of intuition that requires that all intuitions be extensive quantities can be formulated in the form “Either an appearance differs from other appearances simultaneous with it insofar as they in their turn are not simultaneous with each other or some appearances of the given manifold differ with respect to their simultaneity from the other ones or the given manifold contains no appearances that are not simultaneous with at least one of its elements”. Then the proposition that represents the schema of e.g. the category of plurality is “If a given manifold contains an appearance that is simultaneous with only some of the other appearances and differs with respect to simultaneity from the remaining ones a certain property can be predicated of some of the appearances of the manifold”.
Cassirer’s symbol functions analogously to a schema representing a category since it orders objects due to its ability to represent a system of meaning invariants which in its turn determines the rule-like role of the symbol. The cognition is not restricted to the formation of judgments but involves the production of symbols of different kinds.
Sam Sanders. Computing with infinitesimals
Abstract: See attached PDF document.
Johannes Stern. A New Norm for Truth and the Modal Logics of Axiomatic Theories of Truth
Abstract: THIS IS THE TEX CODE. SEE PDF FOR THE EDITED VERSION OF THE ABSTRACT.

Due to a seminal result by Solovay it is well known that the modal logic GL is the modal logic of the natural provability predicate of PA in PA. In other words GL expresses exactly the modal properties of the natural provability predicate of PA. At least in principle one could try determining the modal logic of other sentential predicates of an arithmetic theory or an extension thereof. In particular it seems interesting to look for the modal logic of the truth predicate of various axiomatic theories of truth. While this question is interesting in its own right we shall focus on its philosophical relevance and argue that such an investigation leads to a new norm for theories of reflexive truth.

We argue for this claim by discussing the recent arithmetic completeness result by Czarnecki and Zdanowski \cite{czz??}. Czarnecki and Zdanowski show that the normal modal logic $KDD_C$, which extends the basic normal modal logic K by the modal principles

\begin{flalign*}
&(D)&&\Box\phi\rightarrow\Diamond\phi&&\\
&(D_C)&&\Diamond\phi\rightarrow\Box\phi&&
\end{flalign*}

is the modal logic of the truth predicate of symmetric theories of truth (although Czarnecki and Zdanowski do not mention this fact). A theory of truth is called symmetric if and only if the truth predicate commutes with all boolean connectives and is closed under the rules of T-introduction and T-elimination. The ingenious strategy employed by Czarnecki and Zdanowski in showing the arithmetic completeness of $KDD_C$ with respect to symmetric theories of truth is to show that $KDD_C$ is the maximal normal modal logic that can be consistently extended by the relevant modal instances of the diagonal lemma. Since $KDD_C$ is sound with respect to symmetric theories of truth it then must be the modal logic of the truth predicate of symmetric theories of truth.
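Spelled out, the symmetry conditions amount to principles and rules roughly of the following form (a sketch; the exact formulation is that of Czarnecki and Zdanowski):
\[
T(\ulcorner \neg\phi\urcorner) \leftrightarrow \neg T(\ulcorner\phi\urcorner), \qquad
T(\ulcorner \phi \wedge \psi\urcorner) \leftrightarrow T(\ulcorner\phi\urcorner) \wedge T(\ulcorner\psi\urcorner), \qquad
\frac{\phi}{T(\ulcorner\phi\urcorner)}\ (\text{T-Intro}), \qquad
\frac{T(\ulcorner\phi\urcorner)}{\phi}\ (\text{T-Elim}).
\]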

The observation that $KDD_C$ is a maximal modal logic leads us to propose the following new norm for reflexive theories of truth:

\begin{quote}The modal logic of the truth predicate of reflexive theories of truth ought to be maximal (in the sense of Czarnecki and Zdanowski).\end{quote}

We argue that this norm should replace the naive desideratum that all T-biconditionals ought to be derivable, which, however, cannot be satisfied in classical and consistent theories of truth. We then illustrate and discuss the consequences of adopting such a norm and also point to certain limitations.
Yurii Khomskii. Full-splitting Miller trees and infinitely often equal reals
Abstract: We investigate two closely related partial orders of trees on the Baire space: the "full-splitting Miller trees" and the "infinitely often equal trees", as well as their corresponding sigma-ideals. The former notion was considered by Newelski and Roslanowski, while the latter involves a correction of a result of Spinas. We consider some Marczewski-style regularity properties based on these trees, which turn out to be closely related to the property of Baire, and look at the dichotomies of Newelski-Roslanowski and Spinas for higher projective pointclasses. We also provide some insight concerning a question of Fremlin whether one can add an infinitely often equal real without adding a Cohen real.

This is joint work with Giorgio Laguzzi (Hamburg).
Carsten Rösnick. On Type-2 Complexity of Some Operators in Geometry, Topology and Analysis
Abstract: This talk will be about my yet-to-be-finished PhD thesis. In my thesis I analyze the computational complexity of some operators in geometry and analysis, based on earlier work of Ko and Friedman (1981/84, 1991), Chou and Ko (1995, 2005), Brattka et al. (1999, 2003), Labhalla et al. (2001), Zhao and Müller (2008), Kawamura and Cook (2011), and others. Examples of operators include binary set union, intersection and projection, as well as integration, differentiation, reciprocals $f \mapsto 1/f$ and inverses $f \mapsto f^{-1}$. These operators, and more, are realized (in the TTE-sense/"Weihrauch model") on oracle Turing machines, where their realizers are type-2 functionals, i.e. functionals $\phi$ of the form $\mathbb{N}^{\mathbb{N}} \times \mathbb{N} \to \mathbb{N}$.

In order to speak about computations, we first have to introduce representations for spaces like the class of all closed subsets of $\mathbb{R}^n$, or the class of continuous, smooth or analytic functions. Usually, there is more than one "natural" choice of representation, but as it will turn out, most of the ones we will see are polytime equivalent---thus, from the perspective of complexity, it does not matter which one we choose when determining the complexity of operators. In a sense this even shows the complexity bounds for operators to be 'robust'. However, obviously not all representations turn out to be polytime equivalent: for example, there are three quite natural representations in the realm of closed sets which are not polytime equivalent; the first two because of the difficulty of getting global information out of local information, the second two due to the need to add additional data.

Having settled the question about some representations and their relations, we turn to operators: the projection for sets, as well as integration and function inversion for continuous functions. Projection proves to be polytime if restricted to convex compact sets, while the lack of convexity links it to P vs NP in the discrete world. Integration is known to map polytime functions to polytime ones when restricted to analytic functions, but over smooth functions it is not polytime---just like for continuous functions. So, in a sense, smoothness does not help from a complexity point of view. This gap between polytime over analytic and exptime over smooth functions can be quantified in terms of the Gevrey level (loosening the bounds on the growth of analytic functions depending on said level)---which we will do. It turns out that the complexity of integration depends exponentially on this level. We will also take a look at a third operator, function inversion $f \mapsto f^{-1}$, sufficient preconditions for this operator to be at least computable, and the barriers to surpass due to its linkage to the existence of one-way functions and one-way permutations.
Mirjana Ilic and Branislav Boricic. Is t really needed in formulating sequent calculi for relevant logics?
Abstract: The first Gentzen calculi for distributive relevant logics were presented in the late 1960's and early 1970's, when Dunn published a Gentzen system for positive relevant logic and Minc formulated a Gentzen system for positive relevant logic with necessity. In those calculi, an empty left-hand side in the left premise of the cut rule can lead to irrelevance (the modal fallacy would then be derivable).

There are two ways to prevent the inference of the modal fallacy in those calculi. However, in both of them the constant t, which is not part of the standard language for relevant logics, is needed.

Giambrone and Brady also followed those ideas in formulating the sequent calculi for RW+ and RW, confirming the need for t in those logics as well. However, it has been shown recently that it is possible to obtain a cut-free sequent calculus for RW without the help of t.

We present the sequent calculus GRW of RW. The calculus GRW uses right-handed sequents based on unsigned formulae, unlike Brady's calculus L'RW of RW, which is based on single-conclusion sequents with signed formulae. The calculus GRW has a much neater presentation, although the proof that it is cut-free is somewhat more complex.
Bernhard Fisseni. What makes stories similar? – Some Empirical and Formal Results
Abstract: We present a survey of the results and findings of the research project *What makes stories similar?* funded by the John Templeton Foundation from October 2011 to May 2015.

The project, coordinated by Benedikt Löwe as principal investigator, aimed at providing a discussion of similarity for narratives. Guided by its eponymous question and based on Löwe's previous research on comparability and (in)compatibility of formal frameworks, the project aimed to find out whether there are structural (rather than presentational) properties that contribute to or even define story similarity, whether they can be expressed in formal representation systems, and how such representations can be empirically tested.
Giorgio Laguzzi. Roslanowski and Spinas dichotomies
Abstract: (the talk is based on joint work with Yurii Khomskii and Wolfgang Wohofsky)

We investigate two tree forcings for adding infinitely often equal reals: the full splitting Miller forcing $\mathbb{FM}$, introduced by Ros{\l}anowski in \cite{ref-01}, and the infinitely often equal trees forcing $\mathbb{IE}$, implicitly introduced by Spinas in \cite{ref-02}. We prove results about Marczewski-type regularity properties associated with these forcings as well as dichotomy properties on the $\mathbf{\Delta}^1_2$ and $\mathbf{\Sigma}^1_2$ levels, with a particular emphasis on a parallel with the Baire property. Furthermore, we prove that our dichotomies hold for all projective sets in Solovay's model, and that the use of an inaccessible is necessary for both. Finally, we also investigate the Marczewski-type ideals related to $\mathbb{FM}$ and $\mathbb{IE}$, proving some results of orthogonality with other tree forcing ideals.
\begin{thebibliography}{22}
\bibitem{ref-01} A. Ros{\l}anowski, \emph{On game ideals}, Colloq. Math., 59(2):159-168, 1990.
\bibitem{ref-02} O. Spinas, \emph{Perfect set theorems}, Fund. Math., 201(2):179-195, 2008.
\end{thebibliography}
Philipp Lücke. Infinite fields with free automorphism groups
Abstract: Shelah proved that a free group of uncountable rank is not isomorphic to the automorphism group of a countable first-order structure. In contrast, Just, Shelah and Thomas showed that it is consistent with the axioms of set theory that there is a field of cardinality $\aleph_1$ whose automorphism group is a free group of rank $2^{\aleph_1}$. Motivated by this result, they ask whether there always is a field of cardinality $\aleph_1$ whose automorphism group is a free group of rank greater than $\aleph_1$.
I will present joint work with Saharon Shelah that shows that the free group of rank $2^\kappa$ is isomorphic to the automorphism group of a field of cardinality $\kappa$ whenever $\kappa$ is a cardinal satisfying $\kappa=\kappa^{\aleph_0}$. The techniques developed in the proof of this statement also show that the existence of a cardinal $\kappa$ of uncountable cofinality with the property that there is no field of cardinality $\kappa$ whose automorphism group is a free group of rank greater than $\kappa$ implies the existence of large cardinals in certain inner models.
Gregory Wheeler. Beware of B
Abstract: No epistemic logic should include the Brouwersche schema, (B), as (B) and (K) yield a pair of known schemas (Chellas 1980) which I dub the BUM schemas. BUMs are a disaster for epistemic logics in general, although BUMs are not problematic for provability logics, which suggests that (B) points out a sharp distinction between (regular) epistemic logics and logics of provability. To mark the distinction, and to preserve Brouwer’s good name, I propose epistemic logicians hereafter refer to (B) as the Blog Schema. Finally, with the downfall of (B) joining the ranks of (5) (Hintikka 1962) and (4) (Williamson 2000), hints of a new case against regular systems of epistemic logic come into view: (K) does not play well with other modal schemas.
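For reference, the Brouwersche schema and its epistemic reading are (standard rendering, with K as the knowledge operator):
\[
(B)\quad \phi \rightarrow \Box\Diamond\phi, \qquad \text{epistemically:}\quad \phi \rightarrow K\neg K\neg\phi.
\]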
Sara L. Uckelman. Reasoning About Obligations in Obligationes: A Formal Approach
Abstract: Despite the appearance of `obligation' in their name, medieval obligational disputations between an Opponent and a Respondent seem to many to be unrelated to deontic logic. However, given that some of the example disputations found in medieval texts involve the Respondent reasoning about his obligations within the context of the disputation, it is clear that some sort of deontic reasoning is involved. In this paper, we explain how this reasoning differs from that in ordinary basic deontic logic, and define a dynamic epistemic semantics within which the medieval obligations can be expressed and the examples evaluated. Obligations in this framework are history-based and closely connected to action, thus allowing for comparisons with, e.g., the knowledge-based obligations of Pacuit, Parikh, and Cogan, and stit-theory. The contributions of this paper are twofold: the introduction of a new type of obligation into the deontic logic family, and an explanation of the precise deontic concepts involved in obligationes.
Hugo Nobrega. Obtaining Weihrauch-complete functions and relations from sets of real numbers
Abstract: We report on research that connects the theory of Weihrauch degrees to the descriptive set theory of classes of functions.
Among other previously known results, for each natural number $n$ a function $f_n$, and also a relation $R$, had already been found which characterize the Baire class $n$ functions and the class of functions preserving $\mathbf{\Sigma}^0_2$ under preimages, respectively, in the sense that a given function is in one of the mentioned classes if and only if it is Weihrauch-reducible to the corresponding function or relation.

In this talk, we show that functions characterizing the Baire class $\alpha$ functions can be obtained by a general construction applied to any Wadge-complete set for $\mathbf{\Sigma}^0_\alpha$, for all $\alpha < \omega_1$, and show that the $f_n$ functions are special cases of this construction.
Furthermore, we show that relations characterizing the classes of functions with a certain type of partition property related to the famous Jayne-Rogers theorem --- namely, for each $\alpha < \omega_1$, the class of functions whose domain can be partitioned into countably many $\mathbf{\Pi}^0_\alpha$ sets such that the function restricted to each set in the partition is continuous --- can also be obtained by another general construction applied to any Wadge-complete set for $\mathbf{\Sigma}^0_\alpha$.
In particular, using the Jayne-Rogers theorem we show that $R$ can also be seen as a special case of this construction.
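For readers unfamiliar with the reducibility used here, Weihrauch reducibility between (multi-valued) functions can be sketched as follows (standard formulation via realizers on Baire space):
\[
f \leq_W g \;:\Longleftrightarrow\; \exists \text{ computable } H, K\ \forall G \text{ realizing } g:\ p \mapsto H(\langle p, G(K(p))\rangle) \text{ realizes } f.
\]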
Peter Schuster and Davide Rinaldi. Eliminating Krull's Lemma for Horn Clauses
Abstract: In the language of rings we consider implications from a finite conjunction to a finite disjunction of atomic formulas. It is well-known that one can characterise algebraically when such an implication can be proved for (commutative) rings and for integral domains, respectively. By interpolation, we extend this characterisation to the provability of such implications for reduced rings, i.e. rings in which only 0 is nilpotent. Scott entailment relations are particularly suited for proving those characterisations.
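Concretely, the implications in question have the following shape, with $s_i, t_i, u_j, v_j$ terms in the language of rings (a sketch of the first sentence above):
\[
s_1 = t_1 \wedge \dots \wedge s_m = t_m \;\rightarrow\; u_1 = v_1 \vee \dots \vee u_n = v_n.
\]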

From the mono-conclusion case one can read off a syntactical proof that the theory of integral domains is conservative for Horn clauses over the theory of reduced rings. This allows one to eliminate Krull's Lemma, the appropriate incarnation of Zorn's Lemma, from many a ``short and elegant'' proof in commutative algebra that works by reduction modulo a generic prime ideal.

With only little effort, the approach above can be carried over from the case of integral domains over reduced rings to the case of local rings over rings: just replace ideals by filters, and ``is zero'' by ``is invertible''. With hardly more effort one can also deal with occurrences of mixed predicates in the antecedent.

All proofs on the meta-level are constructive and elementary. Within the given theories, however, it is irrelevant whether one thinks provability in terms of classical or intuitionistic logic: all formulas under consideration are geometric implications.

In competition with related results from dynamical algebra, we also eliminate the condition that every integral domain or local ring be non-trivial, and thus avoid speaking of absurdity. This opens up the possibility of stepping down from classical logic not only to intuitionistic but also to minimal logic.
Takayuki Kihara and Arno Pauly. Point degree spectra of represented spaces
Abstract: We introduce the point spectrum of a represented space as a substructure of the Medvedev degrees. The point spectrum is closely linked to the isomorphism type of a space w.r.t. countably continuous maps, and, via this, also to the dimension. Through this new connection between descriptive set theory and degree theory (as part of computability theory) we can answer several open questions.

As a result along the way, we prove that any admissible represented space with an effectively fiber-compact representation is already computably metrizable.
Gunnar Wilken and Andreas Weiermann. Goodstein sequences for prominent ordinals up to the ordinal of Pi^1_1-CA_0
Abstract: We present a smooth approach to generating Goodstein principles which are true but unprovable in theories of iterated inductive definitions, ID_n, and eventually in the theory Pi^1_1-CA_0.
This work was published in the Annals of Pure and Applied Logic in 2013.
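As a reminder of the classical case that such principles generalize, the ordinary Goodstein process (write the number in hereditary base notation, bump the base, subtract 1) starting from 4 begins as follows (illustrative computation):
\[
4 = 2^{2} \ \rightsquigarrow\ 3^{3} - 1 = 26, \qquad 26 = 2\cdot 3^{2} + 2\cdot 3 + 2 \ \rightsquigarrow\ 2\cdot 4^{2} + 2\cdot 4 + 2 - 1 = 41, \ \ldots
\]
Goodstein's theorem, unprovable in PA, states that every such sequence eventually reaches 0.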
Alexander Carstensen Block. The Modal Logic of Symmetric Extensions
Abstract: Symmetric forcing was first used by Paul Cohen to show the independence of the Axiom of Choice from ZF. It is a modification of the method of forcing, a flexible set theoretic model construction assigning to a given (countable transitive) model of ZF outer models, called generic extensions, each of which is "generated" from a single new object. The forcing method itself is unsuitable for establishing the independence of the Axiom of Choice from ZF, since every generic extension of a $\mathsf{ZFC}$-model also satisfies the Axiom of Choice.
So instead Cohen used symmetric forcing, considering symmetric submodels of generic extensions instead of actual generic extensions. Symmetric submodels are obtained by deleting objects whose descriptions (in terms of the so-called forcing language) are not sufficiently stable under a certain group action. This process may remove well-orderings of sets still present, thereby breaking the axiom of choice.
We refer to symmetric submodels of generic extensions in short as symmetric extensions. A central property of symmetric extensions (and of generic extensions as well) is that truth in them can be expressed in the ground model one starts with. More precisely, for every first-order sentence f in the language of set theory there is a first-order sentence f* in the language of set theory such that for any (countable transitive) ZF-model M we have that M \models f* iff for every symmetric extension N of M, N satisfies f.

Using this fact we can give a correspondence between formulas in the basic modal language and first-order sentences in the language of set theory as follows. A translation is a map assigning to each proposition letter a sentence in the language of set theory. We can extend any translation T to an interpretation τ assigning to each formula in the basic modal language a first-order sentence in the language of set theory by recursively setting:
(1) τ(p) := T(p) for any proposition letter p;
(2) τ(⊥) := (0=1);
(3) τ(A + B) := τ(A) + τ(B) for any logical connective +;
(4) τ(□A) := [τ(A)]*.
With this we can define the ZF-provable modal logic MLS of symmetric forcing by setting MLS := {A | ZF proves τ(A) for any translation T}.

This definition is analogous to the definitions of the corresponding modal logic of forcing as introduced and investigated by Joel Hamkins and Benedikt Löwe in [1]. Using an adaptation of their methods we show that MLS = S4.2 and therefore that MLS coincides with the modal logic of forcing.
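For reference, S4.2 extends the basic normal modal logic K by the following schemata (standard axiomatization):
\[
(T)\ \Box\phi \rightarrow \phi, \qquad (4)\ \Box\phi \rightarrow \Box\Box\phi, \qquad (.2)\ \Diamond\Box\phi \rightarrow \Box\Diamond\phi.
\]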

References:
1. JOEL D. HAMKINS, BENEDIKT LÖWE, The modal logic of forcing. Transactions of the American Mathematical Society, vol. 360 (2008), no. 4, pp. 1793-1817
Andreas Kapsner. An Expressibility Problem for Constructive Logics
Abstract: This paper discusses a problem in M. Dummett’s argument for intuitionistic logic raised by T. Williamson, who subsequently advocated a change in the account of negation in intuitionistic logic to solve it. I will show that it is actually questionable how well this change deals with the problem.

To sketch the problem, take the example Williamson discussed in [Williamson(1994)], Goldbach’s conjecture. Dummett wants to say of Goldbach’s conjecture that it may never be decided, else his argument for intuitionistic logic will not get off the ground. Williamson argues that this claim is actually intuitionistically inconsistent.

It is important that the problem Williamson points out is not with the claim that it might be proved that neither Goldbach’s conjecture nor its negation can be proved. He has a claim in his sights that would seem to be much less controversially acceptable to all parties involved: that it might happen that when the last mathematicians have breathed their last breaths, Goldbach’s conjecture stands unresolved. In this scenario, no one will have managed to prove either Goldbach’s conjecture or its negation. However, no proof of its undecidability in principle will have been discovered either, whether because such a proof is impossible or because, again, simply no one managed to find the proof.

In addition to giving a detailed analysis of the problem he raises, Williamson suggests a solution to the problem. This solution involves changing the intuitionistic account of negation. Instead of saying that not-A is verified by showing that a verification of A is impossible, we should ask for something stronger: to verify not-A, we need to be able to falsify A. This idea can be funneled into a logical system quite nicely, and one ends up with what is known as Nelson logic (cf. [Nelson(1949)]). Williamson claims that this move will solve the problem he raised, that is, assuming Nelson logic, we can say that there are undecidable statements.

However, I will show that this is not true in the sense he means it. Employing the formal analysis of the problem he chose and the usual Kripke semantics for Nelson logic, one can come to see that it is still impossible to assert the following: we might, just due to bad luck and a lack of inspired ideas, never decide Goldbach’s conjecture. Intriguingly, what does become possible is to assert the seemingly stronger claim that we might find out the impossibility of deciding Goldbach’s conjecture. That is, we can say that there might be statements for which we will learn that we will never verify or falsify them. But the case Williamson had in mind, where we would like to say that it is possible that we may simply never find a proof without knowing that we won’t, cannot be accounted for. The paper ends by outlining some morals we might draw from this problematic situation.

References
[Nelson(1949)] D. Nelson. Constructible falsity. Journal of Symbolic Logic, 14(1):16–26, 1949.
[Williamson(1994)] T. Williamson. Never say never. Topoi, 13(2):135–145, 1994.
Walter Hoering. A Formalism allowing Non-deterministic Strategies
Abstract:
Turing and Feferman have shown how to interpret and use Gödel's incompleteness theorem in a positive way, i.e. as a tool to construct progressions of recursively enumerable theories. These theories work with constructive ordinals and are either effectively realizable and recursively enumerable, and therefore Gödel-extendible, using as they do only effective rules and only a recursively enumerable part of the constructive ordinals, or they reach beyond the effective, using some version of the omega-rule or a non-trivial section of the constructive ordinals.

If one deals with these progressions, one can work within a conventional system or abandon it for another one, which may be stronger. This unconditioned and permanent capacity for choice is beyond the capacities of a conventional formal system. At this point we cannot but ask for the difference between the capacities of a human being and the possibilities of a conventional formal system. The answer is quite simple: the difference lies in the freedom of man, in his capacity for random choice. So we come to look for a new kind of formal system featuring this capacity. We construct a new kind of formal system, the non-deterministic systems, which have non-determinism built in in a presumably novel way. It seems to go beyond the recursively enumerable.

We call a system non-deterministic if it can take a random input or alternately also a deterministic input at the same place. Some of these systems are capable of strengthening themselves indefinitely by dint of their own strength, just as Baron Münchhausen pulled himself and the horse upon which he was riding out of a pernicious swamp just by pulling up his own tuft of hair. Therefore we propose the name of Münchhausen-Systems for this kind of system. Another adventure of the Baron was that, riding on a cannon ball, he suddenly decided to change direction by jumping onto another cannon ball approaching from the opposite direction. This story, too, will have a counterpart here. Only that the rather pedestrian analogue of the Baron's instantaneous resolution, a random generator, will be used to create an infinite sequence of natural numbers and thereby of systems.

Turing, in his conception of a device later to be called the Turing machine, dealt with linear computation, proceeding without branching: "There is only one way to go on, straight ahead."

Housewives and artisans know: if I do one thing, I will have to skip something else. It is this kind of inescapable choice between many alternatives which we model in our systems.

Our subject is computability and deducibility if free, non-determined decisions are allowed to intervene an infinite number of times: "There are always many directions to go on." In our Münchhausen Systems (M-systems for short) we always face the alternative: to stay where we are and continue working in the present surroundings, or to change into a new environment.

If one is interested in evaluating sets like O, Kleene's set of constructive ordinals, or B, the set of recursive well-founded trees, one deals with positive inductive definitions, i.e. with productive sets, to which M-systems can be applied.
Vasco Brattka, Guido Gherardi and Rupert Hölzl. Probabilistic computability and choice
Abstract: The aim of this talk is to give an overview of the systematic investigation we have carried out into the field of randomized algorithms for real numbers.
In particular, the notions of Las Vegas and probabilistic computability for infinite sequences are presented.
As case studies we analyze the problems of computing Nash equilibria and solutions to the intermediate value theorem.
Yong Cheng, Sy-David Friedman and Joel David Hamkins. Large cardinals need not be large in HOD
Abstract: We prove that large cardinals need not generally exhibit their large cardinal nature in HOD. For example, a supercompact cardinal \kappa need not be weakly compact in HOD, and there can be a proper class of supercompact cardinals in V, none of them weakly compact in HOD, with no supercompact cardinals in HOD. Similar results hold for many other types of large cardinals, such as measurable and strong cardinals.