Article

The Epistemic Uncertainty Gradient in Spaces of Random Projections

Jeffrey F. Queißer, Jun Tani and Jochen J. Steil
1 Cognitive Neurorobotics Research Unit, Okinawa Institute of Science and Technology Graduate University, Onna 904-0495, Japan
2 Institut für Robotik und Prozessinformatik, Technische Universität Braunschweig, 38106 Braunschweig, Germany
3 Theoretical Sciences Visiting Program (TSVP), Okinawa Institute of Science and Technology Graduate University, Onna 904-0495, Japan
* Author to whom correspondence should be addressed.
Entropy 2025, 27(2), 144; https://doi.org/10.3390/e27020144
Submission received: 25 November 2024 / Revised: 24 January 2025 / Accepted: 25 January 2025 / Published: 1 February 2025

Abstract

This work presents a novel approach to handling epistemic uncertainty estimates, motivated by Bayesian linear regression. We propose treating the model-dependent variance of the predictive distribution—commonly associated with epistemic uncertainty—as a model of the underlying data distribution. Combined with high-dimensional random feature transformations, this approach yields a computationally efficient, parameter-free representation of arbitrary data distributions. It makes it possible to assess whether a query point lies within the distribution, which also provides insights into outlier detection and generalization tasks. Furthermore, starting from an initial input, minimizing the uncertainty by gradient descent retrieves data points that are close to that input and consistent with the training distribution, much like auto-completion in associative networks. We extend the proposed method to applications such as local Gaussian approximations, input–output regression, and even a mechanism for unlearning data. This reinterpretation of uncertainty, together with the geometric insights it provides, offers a novel framework for addressing classical machine learning challenges.
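The core idea of the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it uses random Fourier features as one possible choice of random projection, the predictive variance of Bayesian linear regression in that feature space as the epistemic-uncertainty measure, and a simple finite-difference gradient descent on that variance to pull a query point toward the training distribution. All hyperparameters (feature count, prior precision, noise level, step size) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 2, 300                      # input dimension, number of random features

# Random Fourier features: one simple choice of random projection
W = rng.normal(size=(D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def phi(x):
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

# Toy training data: points on the unit circle
theta = rng.uniform(0.0, 2.0 * np.pi, size=200)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)

# Posterior precision of Bayesian linear regression in feature space
alpha, noise = 1e-2, 1e-1          # prior precision, observation noise (illustrative)
Phi = np.array([phi(x) for x in X])
A_inv = np.linalg.inv(Phi.T @ Phi / noise**2 + alpha * np.eye(D))

def epistemic_var(x):
    """Model-dependent predictive variance at a query point x."""
    f = phi(x)
    return f @ A_inv @ f

def grad(x, eps=1e-5):
    """Finite-difference gradient of the epistemic variance."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (epistemic_var(x + e) - epistemic_var(x - e)) / (2 * eps)
    return g

# Descend the uncertainty surface from a query point outside the data
x0 = np.array([1.6, 0.0])
x = x0.copy()
for _ in range(150):
    g = grad(x)
    x -= 0.02 * g / (np.linalg.norm(g) + 1e-12)   # normalized gradient step

# The iterate drifts toward the training distribution (the unit circle),
# where the epistemic variance is low.
print(epistemic_var(x0), "->", epistemic_var(x))
```

The variance is low near training points and grows away from them, so its negative gradient acts as the auto-completion dynamics described above: the query is attracted to the nearest region that resembles the training data.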
Keywords: associative memory; probabilistic; epistemic uncertainty; unlearning; one shot; iterative; regression

Share and Cite

MDPI and ACS Style

Queißer, J.F.; Tani, J.; Steil, J.J. The Epistemic Uncertainty Gradient in Spaces of Random Projections. Entropy 2025, 27, 144. https://doi.org/10.3390/e27020144

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
