Mathematics, mathematical logic

Arturo Tozzi

Former Center for Nonlinear Science, Department of Physics, University of North Texas, Denton, Texas, USA

Former Computationally Intelligent Systems and Signals, University of Manitoba, Winnipeg, Canada

ASL Napoli 1 Centro, Distretto 27, Naples, Italy

tozziarturo@libero.it

For years, I have published across diverse academic journals and disciplines, including mathematics, physics, biology, neuroscience, medicine, philosophy and literature. Now, having no further need to expand my scientific output or advance my academic standing, I have chosen to shift my approach. Instead of writing full-length articles for peer review, I now focus on recording and sharing original ideas, i.e., conceptual insights and hypotheses that I hope might inspire experimental work by researchers more capable than myself. I refer to these short pieces as nugae, a Latin word meaning “trifles” or “playful thoughts”. I invite you to use these ideas as you wish, in any way you find helpful. I ask only that you kindly cite my writings, which are accompanied by a DOI for proper referencing.

 

 

NUGAE – PROTEIN FRUSTRATION EXPLAINED BY THE GEOMETRIC APPARATUS OF DVORETZKY’S THEOREM?

Protein frustration consists of competition among local interactions that cannot be simultaneously satisfied, producing rugged funnels with detours, kinetic traps and off-pathway intermediates. Current tools like local frustration indices, decoy comparisons, residue–contact statistics and brute-force molecular dynamics can map hot spots, but do not explain why efficient folding persists in high-dimensional spaces, nor do they provide principled bounds on where frustration must concentrate.

We reframe the problem geometrically using Dvoretzky’s theorem, which guarantees that any sufficiently high-dimensional normed space contains low-dimensional subspaces whose geometry is almost Euclidean. We model a protein’s conformational space as a normed vector space in which distances track structural and energetic displacements, and we posit that productive folding proceeds primarily within near-Euclidean “Dvoretzky corridors”, where distances scale isotropically, gradients are well conditioned and diffusive search is efficient. Frustration then emerges at corridor boundaries, i.e., interfaces where the local norm departs from Euclidean behavior and competing interactions warp geometry, creating kinetic traps or functionally poised states. We operationalize this with a Dvoretzky Distortion/Frustration Index (DFI): the minimal ε for which a k-dimensional neighborhood of conformational space is (1±ε)-Euclidean, estimated from three ingredients: ε-distortion under randomized projections; local metric conditioning from the energy Hessian; and diffusion anisotropy along principal collective variables. Evolution, under this view, tunes sequences so that the native route aligns with low-DFI corridors, while allosteric and catalytic elements reside near high-DFI boundaries that facilitate switchability.
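For reference, the quantitative (Milman) form of Dvoretzky’s theorem invoked here can be stated as follows; the logarithmic dependence on the ambient dimension is what makes low-dimensional near-Euclidean slices unavoidable in high dimension:

```latex
% Dvoretzky's theorem (quantitative form, after Milman):
% for every \varepsilon \in (0,1) there is a constant c(\varepsilon) > 0 such that
% every n-dimensional normed space X contains a k-dimensional subspace E,
%   k \ge c(\varepsilon)\,\log n ,
% on which the norm is, after rescaling, (1\pm\varepsilon)-Euclidean:
(1-\varepsilon)\,\lVert x \rVert_2 \;\le\; \lVert x \rVert_X \;\le\; (1+\varepsilon)\,\lVert x \rVert_2
\qquad \text{for all } x \in E .
```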

Compared with contact-based or decoy approaches, our mathematical framework is intrinsic and coordinate-free, yields logarithmic scaling with dimensionality, provides constructive algorithms for locating corridors without exhaustive sampling and rationalizes why a few collective variables often predict kinetics: they approximate the low-dimensional Euclidean subspaces that Dvoretzky guarantees.

To test our theory, several approaches can be pursued: combining enhanced-sampling MD (metadynamics/REMD) with transition-path analysis; computing DFI maps via random projections and Hessian conditioning; and validating with smFRET, NMR relaxation dispersion, HDX-MS, optical-tweezers kinetics and deep mutational scanning.
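As an illustrative sketch of the ε-distortion ingredient of a DFI map, a Monte-Carlo estimate over a random low-dimensional subspace can be computed as below. All names are hypothetical, and the ℓ1 norm stands in as a toy non-Euclidean ambient norm; a real application would replace it with a norm derived from structural and energetic displacements.

```python
import numpy as np

def subspace_distortion(norm, n, k, n_dirs=2000, seed=0):
    """Monte-Carlo estimate of how far a random k-dim subspace of
    (R^n, norm) is from Euclidean: returns eps such that, after a
    single rescaling, the ambient norm is roughly (1 +/- eps)-equivalent
    to the Euclidean norm on the sampled directions."""
    rng = np.random.default_rng(seed)
    # Orthonormal basis of a random k-dim subspace (QR of a Gaussian matrix).
    basis, _ = np.linalg.qr(rng.standard_normal((n, k)))
    # Random Euclidean-unit directions within the subspace.
    coeffs = rng.standard_normal((n_dirs, k))
    coeffs /= np.linalg.norm(coeffs, axis=1, keepdims=True)
    vecs = coeffs @ basis.T                      # unit vectors in R^n
    ratios = np.array([norm(v) for v in vecs])   # ambient norm on the Euclidean sphere
    ratios /= np.median(ratios)                  # best single rescaling (roughly)
    return float(max(ratios.max() - 1.0, 1.0 - ratios.min()))

# Toy ambient norm: l1, which is strongly non-Euclidean in high dimension.
l1 = lambda v: np.abs(v).sum()
eps_lowdim = subspace_distortion(l1, n=200, k=2)
eps_highdim = subspace_distortion(l1, n=200, k=50)
print(eps_lowdim, eps_highdim)
```

The estimate is a lower bound on the true distortion of the slice (sampling misses extremal directions); in the proposed framework the same quantity would be evaluated on local neighborhoods of conformational space rather than on synthetic norms.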

Hypotheses include that native transition paths preferentially concentrate within low-DFI corridors, where the local geometry is nearly Euclidean and gradients are smooth, ensuring efficient search and rapid convergence to the native state. Mutations that raise local DFI are expected to perturb this alignment, forcing trajectories into distorted boundary regions; as a result, folding slows, thermodynamic stability decreases, and aggregation propensity increases. Residues already recognized as allosteric or frustrated should cluster at the corridor boundaries, because here geometric distortion enhances sensitivity to perturbations and facilitates switching between conformational states. Finally, molecular chaperones are predicted to lower the effective DFI along productive folding routes by stabilizing near-Euclidean subspaces and shielding boundary regions, thereby smoothing the folding funnel and reducing kinetic traps.

Our geometry-first view may have practical implications: enable sequence redesign to smooth target corridors; prioritize boundary regions for ligand stabilization or allosteric control; guide chaperone engineering; interpret disease mutations by their DFI shift.  Future avenues could include finite-n bounds for biomolecular sizes, co-translational folding on the ribosome, crowding-dependent DFI in condensates and membranes and integrating DFI regularization into generative protein models for kinetics-aware design.

Overall, Dvoretzky’s theorem may provide a geometric rationale for why frustration arises in proteins, suggesting that it emerges naturally at the interfaces between Euclidean-like folding corridors and the distorted regions of conformational space. 

 

QUOTE AS: Tozzi A. 2025. Nugae – protein frustration explained by the geometric apparatus of Dvoretzky's theorem? DOI: 10.13140/RG.2.2.36709.26085

 

Picture of folding frustration seen through Dvoretzky’s theorem. Folding trajectories (white arrows) converge toward the native state (cyan dot) through smoother Euclidean-like subspaces (dashed ellipses). In high-dimensional conformational space, Dvoretzky’s theorem guarantees the presence of such slices, where geometry is nearly isotropic and folding is efficient. Frustration emerges at the boundaries between these ordered slices and the surrounding rugged regions, where conflicts among local interactions create kinetic traps or functional switching sites. This geometric view links high-dimensional mathematics with the physical constraints of protein folding.

 

 

 

 

 

BLURREDNESS AS REPRESENTATIONAL FAILURE: RAMSEY-INSPIRED METRICS FOR INTERNAL CLARITY IN AI JUDGMENT

Modern artificial intelligence (AI) systems are increasingly deployed in sensitive domains like healthcare, finance and autonomous technologies, where reliable decision-making is crucial. Today, most approaches to AI reliability focus on how confident the system is in its predictions. Methods such as Bayesian networks, dropout-based approximations and ensemble learning attempt to quantify uncertainty, mainly by analyzing the output probabilities. However, these techniques often overlook a key issue: whether the internal representation of the input itself is meaningful or clear. A model might feel “confident” in its output, while internally holding a confused or distorted view of what it was asked to process.

To address this gap, we propose a new approach inspired by the work of philosopher Frank P. Ramsey. In an unpublished manuscript preserved in the Frank P. Ramsey Papers (University of Pittsburgh Archives of Scientific Philosophy, Box 2, Folder 24), he introduced the idea of blurredness, i.e., the notion that a belief may be unclear not because the object is uncertain, but because the representation of that object is internally vague or unfocused. He was among the first to shift attention away from what is known toward how clearly it is represented. This insight can be translated into the realm of machine learning: even if an AI model is technically confident, it may still be acting on “blurry” internal perceptions. Building on this idea, a method can be developed to quantify representational clarity inside a model, focusing on how input data are structured in the model’s latent (hidden) space. Two main metrics can be proposed:

  1. Prototype Alignment – This measures how closely a model’s internal representation of an input matches the central example of its predicted class.
  2. Latent Distributional Width – This captures how tightly or loosely clustered similar inputs are, indicating the precision or “blur” of internal categories.
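A minimal sketch of the two metrics on a toy two-dimensional latent space follows; the function names and the synthetic clusters are illustrative, not part of any existing library.

```python
import numpy as np

def prototype_alignment(z, prototypes, pred):
    """Cosine similarity between a latent vector z and the prototype
    (class-mean embedding) of the predicted class: 1.0 = perfectly aligned."""
    p = prototypes[pred]
    return float(z @ p / (np.linalg.norm(z) * np.linalg.norm(p)))

def latent_width(latents):
    """Mean distance of a class's latent vectors to their centroid;
    larger values indicate a 'blurrier' internal category."""
    centroid = latents.mean(axis=0)
    return float(np.linalg.norm(latents - centroid, axis=1).mean())

# Toy latent space: one tightly clustered class, one diffuse ('blurry') class.
rng = np.random.default_rng(0)
tight = rng.normal([3.0, 0.0], 0.2, size=(100, 2))
fuzzy = rng.normal([-3.0, 0.0], 2.0, size=(100, 2))
prototypes = {0: tight.mean(axis=0), 1: fuzzy.mean(axis=0)}

print(latent_width(tight), latent_width(fuzzy))   # fuzzy class is much wider
print(prototype_alignment(tight[0], prototypes, 0))
```

In a real network the latent vectors would come from a hidden layer and the prototypes from class-mean embeddings over a validation set; the point here is only that both metrics operate on internal representations rather than on output probabilities.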

Unlike traditional uncertainty methods, this approach works inside the network, offering a diagnostic tool that helps identify whether the model’s “understanding” is structurally sound. This internal perspective has several potential advantages. It may enable earlier detection of model failures, reveal why models perform poorly under certain conditions and guide training toward more robust internal reasoning. It also opens new experimental directions: for instance, testing whether models with high clarity scores resist adversarial examples better, or whether clarity metrics can predict generalization to unseen data.

By incorporating Ramsey’s philosophical insight into AI, we gain a richer understanding of what it means for a machine to know clearly—not just predict correctly.

 

QUOTE AS:  Tozzi, Arturo. 2025. Blurredness as Representational Failure: Ramsey-Inspired Metrics for Internal Clarity in AI Judgment. July. https://doi.org/10.13140/RG.2.2.28329.51042.

 

 

 

A PRETOPOLOGICAL APPROACH TO MODAL REASONING: A NEW LOGIC FOR CONTEXTS WHERE ONLY PARTIAL OR LOCALLY AVAILABLE KNOWLEDGE IS RELEVANT

In many scientific and technological settings—from distributed computing and sensor networks to biological systems and artificial intelligence—agents must make decisions based on partial, local, or uncertain information. Traditional modal logic has long been used to reason about such knowledge and possibility, often relying on structures like Kripke frames or generalized neighborhood models. However, these existing methods either assume too much global structure (as in Kripkean accessibility relations) or allow too much arbitrariness (as in unconstrained neighborhood semantics), which can make them unsuitable for systems grounded in limited, local knowledge.

To bridge this gap, we propose a new method: Pretopological Neighborhood Modal Logic (PNML). This approach grounds modal reasoning in pretopological spaces, a mathematical framework that maintains minimal but essential structure. In pretopology, each point (or world) is assigned a family of neighborhoods satisfying two simple rules: each neighborhood must contain the point itself, and the family must be upward closed (i.e., if a set belongs to a point’s neighborhoods, so does every larger set containing it). Unlike traditional topology, this framework does not require closure under intersection, making it flexible enough to model fragmented, disjoint or observer-relative perspectives. In PNML, modal statements are interpreted locally: something is “necessarily true” at a point only if it is true throughout at least one of its valid neighborhoods. This reflects how real agents (such as cells, nodes in a network, or decision-makers) operate with local snapshots of their environments. Unlike many standard modal systems, PNML avoids the rule of necessitation and does not enforce strong global inference rules. This allows it to represent weak, non-normal and dynamic forms of reasoning under uncertainty.
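The local reading of necessity described above can be sketched as a tiny model checker for neighborhood semantics. Everything here is illustrative (the worlds, neighborhoods and propositions are invented); the point is the existential reading of the box: a formula is “necessary” at a point exactly when it holds throughout at least one of that point’s neighborhoods.

```python
from itertools import combinations

def upward_close(worlds, nbhds):
    """Close each point's neighborhood family under supersets,
    as required by the pretopological axioms."""
    def supersets(s):
        rest = worlds - s
        return [s | set(c) for r in range(len(rest) + 1)
                for c in combinations(rest, r)]
    return {w: {frozenset(t) for s in nbhds[w] for t in supersets(set(s))}
            for w in worlds}

def box(phi, w, nbhds):
    """phi is 'necessary' at w iff some neighborhood of w lies entirely in phi."""
    return any(n <= phi for n in nbhds[w])

worlds = {1, 2, 3, 4}
# Minimal neighborhoods; each one contains its own point, per the first axiom.
raw = {1: [{1, 2}], 2: [{2}], 3: [{3, 4}], 4: [{3, 4}]}
nbhds = upward_close(worlds, raw)

phi = {2, 3, 4}                 # the extension of some proposition
print(box(phi, 2, nbhds))       # True: the neighborhood {2} lies inside phi
print(box(phi, 1, nbhds))       # False: every neighborhood of 1 contains 1, which is not in phi
```

Because the box is checked against one neighborhood at a time, validity is not preserved under the rule of necessitation, matching PNML's non-normal character.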

The novelty of this logic lies in its balance between minimal structure and formal rigor. It offers a logically complete, semantically sound framework that respects informational constraints without sacrificing analytic clarity. Compared to topological logics, it is less rigid; compared to standard neighborhood models, it is more disciplined.  PNML opens new research directions and applications. In biology, it could model how cells make decisions based on local signaling. In computing, it can capture agent-based protocols with restricted information access. Future research might include modal updates over time, multi-agent extensions, and formal proof systems aligned with PNML’s structure. Experiments could compare PNML’s modeling accuracy with classical logic in systems like belief revision or distributed diagnostics, testing its advantage in scenarios of local, partial observability.

In short, PNML offers a fresh way to model reasoning in real-world systems where knowledge is local, partial, and structured—yet never fully global.

QUOTE AS: Tozzi A. 2025. Pretopological Modal Logic for Local Reasoning and Uncertainty. Preprints. https://doi.org/10.20944/preprints202506.1902.v1