Mathematics, mathematical logic
For many years, I have published across a wide range of journals and disciplines, including mathematics, physics, biology, neuroscience, medicine and philosophy. Now, having no further need to expand my scientific output or advance my academic standing, I have chosen to shift my approach. Instead of writing full-length articles for peer review, I now focus on recording and sharing original ideas, i.e., conceptual insights and hypotheses that I hope might inspire experimental work by researchers more capable than myself. I refer to these short pieces as nugae, a Latin word meaning “trifles”, “jests” or “playful thoughts”. I invite you to use these ideas as you wish, in any way you find helpful. I ask only that you kindly cite my writings, which are accompanied by a DOI for proper referencing.
BLURREDNESS AS REPRESENTATIONAL FAILURE: RAMSEY-INSPIRED METRICS FOR INTERNAL CLARITY IN AI JUDGMENT
Modern artificial intelligence (AI) systems are increasingly deployed in sensitive domains such as healthcare, finance and autonomous technologies, where reliable decision-making is crucial. Today, most approaches to AI reliability focus on how confident the system is in its predictions. Methods such as Bayesian neural networks, dropout-based approximations and ensemble learning attempt to quantify uncertainty, mainly by analyzing the output probabilities. However, these techniques often overlook a key issue: whether the internal representation of the input itself is meaningful or clear. A model might report high confidence in its output while internally holding a confused or distorted view of what it was asked to process.
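To make the contrast concrete, the following sketch (illustrative NumPy code; the function name and array shapes are my own assumptions, not tied to any specific system) shows the kind of output-level uncertainty these methods compute, namely predictive entropy over the averaged class probabilities of an ensemble. Such a score can be low, signalling confidence, even when the internal representation of the input is confused.

```python
import numpy as np

def predictive_entropy(ensemble_probs: np.ndarray) -> np.ndarray:
    """Output-level uncertainty: entropy of the ensemble-averaged class probabilities.

    ensemble_probs has shape (n_members, n_samples, n_classes).
    """
    mean_probs = ensemble_probs.mean(axis=0)  # average over ensemble members
    return -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=-1)  # one value per sample

# Toy example: 5 ensemble members, 2 inputs, 3 classes
rng = np.random.default_rng(0)
probs = rng.dirichlet(alpha=[1.0, 1.0, 1.0], size=(5, 2))
print(predictive_entropy(probs))  # low values read as "confident", whatever the latent state looks like
```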
To address this gap, we propose a new approach inspired by the work of philosopher Frank P. Ramsey. In an unpublished manuscript preserved in the Frank P. Ramsey Papers (University of Pittsburgh Archives of Scientific Philosophy, Box 2, Folder 24), he introduced the idea of blurredness, i.e., the notion that a belief may be unclear not because the object is uncertain, but because the representation of that object is internally vague or unfocused. He was among the first to shift attention away from what is known toward how clearly it is represented. This insight can be translated into the realm of machine learning: even if an AI model is technically confident, it may still be acting on “blurry” internal perceptions. Building on this idea, a method can be devised to quantify representational clarity inside a model, focusing on how input data are structured in the model’s latent (hidden) space. Two main metrics can be proposed (a brief illustrative sketch follows the list below):
- Prototype Alignment – This measures how closely a model’s internal representation of an input matches the central example of its predicted class.
- Latent Distributional Width – This captures how tightly or loosely clustered similar inputs are, indicating the precision or “blur” of internal categories.
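A minimal numerical sketch of how these two metrics might be computed in practice is given below. The specific choices, i.e., class prototypes as latent centroids, alignment as cosine similarity and width as the mean distance of a class’s latent vectors from its prototype, are my own illustrative assumptions rather than fixed definitions.

```python
import numpy as np

def class_prototypes(latents: np.ndarray, labels: np.ndarray) -> dict:
    """Prototype of each class = centroid of its latent vectors (one possible choice)."""
    return {int(c): latents[labels == c].mean(axis=0) for c in np.unique(labels)}

def prototype_alignment(z: np.ndarray, prototype: np.ndarray) -> float:
    """Cosine similarity between one latent vector and the prototype of its predicted class."""
    return float(z @ prototype / (np.linalg.norm(z) * np.linalg.norm(prototype) + 1e-12))

def latent_distributional_width(class_latents: np.ndarray, prototype: np.ndarray) -> float:
    """Mean Euclidean distance from the prototype: small = tight ('sharp') category, large = 'blurry'."""
    return float(np.linalg.norm(class_latents - prototype, axis=1).mean())

# Toy usage with random latent vectors standing in for hidden-layer activations
rng = np.random.default_rng(0)
latents = rng.normal(size=(100, 16))
labels = rng.integers(0, 2, size=100)
protos = class_prototypes(latents, labels)
print(prototype_alignment(latents[0], protos[int(labels[0])]))
print(latent_distributional_width(latents[labels == 0], protos[0]))
```

In a trained classifier, high alignment together with small width would suggest a structurally “sharp” internal category, whereas low alignment or large width would flag the kind of blurredness described above.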
Unlike traditional uncertainty methods, this approach works inside the network, offering a diagnostic tool that helps identify whether the model’s “understanding” is structurally sound. This internal perspective has several potential advantages. It may enable earlier detection of model failures, reveal why models perform poorly under certain conditions and guide training toward more robust internal reasoning. It also opens new experimental directions: for instance, testing whether models with high clarity scores resist adversarial examples better, or whether clarity metrics can predict generalization on unseen data.
By incorporating Ramsey’s philosophical insight into AI, we gain a richer understanding of what it means for a machine to know clearly—not just predict correctly.
QUOTE AS: Tozzi, Arturo. 2025. Blurredness as Representational Failure: Ramsey-Inspired Metrics for Internal Clarity in AI Judgment. July. https://doi.org/10.13140/RG.2.2.28329.51042.
A PRETOPOLOGICAL APPROACH TO MODAL REASONING: A NEW LOGIC FOR CONTEXTS WHERE ONLY PARTIAL OR LOCALLY AVAILABLE KNOWLEDGE IS RELEVANT
In many scientific and technological settings—from distributed computing and sensor networks to biological systems and artificial intelligence—agents must make decisions based on partial, local, or uncertain information. Traditional modal logic has long been used to reason about such knowledge and possibility, often relying on structures like Kripke frames or generalized neighborhood models. However, these existing methods either assume too much global structure (as in Kripkean accessibility relations) or allow too much arbitrariness (as in unconstrained neighborhood semantics), which can make them unsuitable for systems grounded in limited, local knowledge.
To bridge this gap, we propose a new method: Pretopological Neighborhood Modal Logic (PNML). This approach grounds modal reasoning in pretopological spaces, a mathematical framework that maintains minimal but essential structure. In pretopology, each point (or world) is assigned a collection of neighborhoods satisfying two simple rules: every neighborhood must contain the point itself, and the collection must be closed upward (i.e., any superset of a neighborhood is also a neighborhood). Unlike traditional topology, this framework does not require closure under intersection, which makes it flexible enough to model fragmented, disjoint, or observer-relative perspectives. In PNML, modal statements are interpreted locally: a statement is “necessarily true” at a point only if it holds throughout at least one of that point’s neighborhoods. This reflects how real agents (such as cells, nodes in a network, or decision-makers) operate with local snapshots of their environments. Unlike many standard modal systems, PNML avoids the rule of necessitation and does not enforce strong global inference rules. This allows it to represent weak, non-normal, and dynamic forms of reasoning under uncertainty.
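To make the local evaluation rule concrete, here is a small sketch of a model checker over a finite space of worlds. The encoding (worlds as integers, neighborhood collections closed upward by construction, formulas as nested tuples) is an illustrative assumption of mine, not the formalism of the cited preprint; it only reproduces the local reading described above, on which “necessarily φ” holds at a point whenever φ holds throughout at least one of that point’s neighborhoods.

```python
from itertools import chain, combinations

WORLDS = {0, 1, 2}

# Base neighborhood assignment: each listed set contains its point.
BASE_NEIGHBORHOODS = {
    0: [{0, 1}],
    1: [{1}],
    2: [{2}, {1, 2}],
}

def upward_close(neigh, worlds):
    """Add every superset (within `worlds`) of each listed neighborhood."""
    closed = set()
    for n in neigh:
        rest = worlds - n
        for extra in chain.from_iterable(combinations(rest, k) for k in range(len(rest) + 1)):
            closed.add(frozenset(n | set(extra)))
    return closed

NEIGHBORHOODS = {w: upward_close(ns, WORLDS) for w, ns in BASE_NEIGHBORHOODS.items()}

# Valuation: which worlds make the atom "p" true
VAL = {"p": {1, 2}}

def holds(world, formula):
    """Formulas: atom string, ('not', f), ('and', f, g), ('box', f)."""
    if isinstance(formula, str):
        return world in VAL[formula]
    op = formula[0]
    if op == "not":
        return not holds(world, formula[1])
    if op == "and":
        return holds(world, formula[1]) and holds(world, formula[2])
    if op == "box":
        # Local reading: necessarily-f at w iff f holds throughout
        # at least one neighborhood of w.
        return any(all(holds(v, formula[1]) for v in n) for n in NEIGHBORHOODS[world])
    raise ValueError(f"unknown operator {op}")

for w in sorted(WORLDS):
    print(w, holds(w, ("box", "p")))
```

Running the script prints, for each world, whether “necessarily p” holds there under the toy valuation; changing BASE_NEIGHBORHOODS shows how the same formula can shift truth value purely through the locally available neighborhoods.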
The novelty of this logic lies in its balance between minimal structure and formal rigor. It offers a logically complete, semantically sound framework that respects informational constraints without sacrificing analytic clarity. Compared to topological logics, it is less rigid; compared to standard neighborhood models, it is more disciplined. PNML opens new research directions and applications. In biology, it could model how cells make decisions based on local signaling. In computing, it can capture agent-based protocols with restricted information access. Future research might include modal updates over time, multi-agent extensions, and formal proof systems aligned with PNML’s structure. Experiments could compare PNML’s modeling accuracy with classical logic in systems like belief revision or distributed diagnostics, testing its advantage in scenarios of local, partial observability.
In short, PNML offers a fresh way to model reasoning in real-world systems where knowledge is local, partial, and structured—yet never fully global.
QUOTE AS: Tozzi, Arturo. 2025. Pretopological Modal Logic for Local Reasoning and Uncertainty. Preprints. https://doi.org/10.20944/preprints202506.1902.v1