Published 2009 | public
Book Section - Chapter

Probability Logic, Model Uncertainty and Robust Predictive System Analysis

Abstract

Probability can be viewed as a multi-valued logic that extends binary Boolean propositional logic to the case of incomplete information. The key idea is that the probability P[b|c] of a proposition (statement) b, given the information in proposition c, is a measure of how plausible b is based on c. Boolean logic deals with the special case of complete information, where the truth or falsity of b is known from c. R.T. Cox was the first to derive the probability logic axioms from those of Boolean logic, but this interpretation of probability has a long history and has received increasing interest recently, especially due to E.T. Jaynes, who emphasized its connection with the information-theoretic ideas stemming from C.E. Shannon's work. It is consistent with the Bayesian point of view that probability represents a degree of belief in a proposition. A.N. Kolmogorov's statement of the probability axioms, which are neutral with respect to the interpretation of probability, can be viewed as a special case where the statements refer to uncertain membership of an object in a set. Probability logic provides a rigorous unifying framework for treating modeling uncertainty, along with excitation uncertainty, when using models to predict the response of a real system. An overview is given of the foundations of probability logic and its application to quantifying uncertainty for robust predictive analysis of systems. A key concept is a stochastic system model class, which consists of a set of probabilistic input-output predictive models for a system together with a chosen probability distribution, the prior, over this set that quantifies the initial relative plausibility of each predictive model in the set. These probability models are viewed as representing a state of knowledge about the real system, conditional on the available information, and not as inherent properties of the system. Any set of dynamic models for a system can be used to construct a model class through stochastic embedding, in which Jaynes' Principle of Maximum Information Entropy plays an important role in establishing the fundamental probability models. A model class can be used to create both prior (initial) and posterior (updated using system data) robust predictive models based purely on the probability axioms. Since there is always uncertainty in choosing a model class to represent a system, one can choose a set of candidate model classes. The probability axioms then lead naturally to prior and posterior hyper-robust predictions that combine the predictions of all model classes in the set. The posterior probabilities of the candidate model classes also give a measure of their relative plausibility based on the data and so can be used for model class selection. In applications, integrals over high-dimensional parameter spaces are usually involved that cannot be evaluated in a straightforward way. Useful computational tools are Laplace's method of asymptotic approximation and Markov Chain Monte Carlo (MCMC) methods such as the Metropolis-Hastings, Gibbs sampling, and Hybrid Monte Carlo algorithms. Parameter estimation, which selects just one predictive model in the model class, is justified via Laplace's method if the class is globally identifiable based on the data; if not, a posterior robust predictive analysis should be performed using an MCMC method that combines the probabilistic system predictions from each model in the model class.
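For readers who want the prediction machinery of the abstract in symbols, the following display is a minimal sketch in standard Bayesian notation; the symbols θ (model parameters), D (system data), x (predicted response), and M_j (candidate model class) are illustrative conventions chosen here, not quoted from the chapter.

```latex
% Bayes' theorem updates the prior over the parameters \theta of a
% model class M using system data D; p(D \mid M) is the evidence.
p(\theta \mid D, M) = \frac{p(D \mid \theta, M)\, p(\theta \mid M)}{p(D \mid M)}

% Posterior robust prediction of a response quantity x: a weighted
% combination of the predictions of all models in the class.
p(x \mid D, M) = \int_{\Theta} p(x \mid \theta, M)\, p(\theta \mid D, M)\, \mathrm{d}\theta

% Posterior hyper-robust prediction over candidate classes
% M_1, \dots, M_K; the weights P(M_j \mid D) \propto p(D \mid M_j)\, P(M_j)
% are the posterior class probabilities used for model class selection.
p(x \mid D) = \sum_{j=1}^{K} p(x \mid D, M_j)\, P(M_j \mid D)
```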
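The MCMC route mentioned at the end of the abstract can be illustrated with a minimal random-walk Metropolis-Hastings sampler. This is a generic sketch, not code from the chapter: the names log_posterior (an unnormalized log-density for p(θ|D,M)) and predict (a response model) are hypothetical user-supplied functions.

```python
import numpy as np

def metropolis_hastings(log_posterior, theta0, n_samples=10_000, step=0.1, seed=0):
    """Random-walk Metropolis-Hastings: draws samples from an
    unnormalized posterior given its log-density log_posterior."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    logp = log_posterior(theta)
    samples = np.empty((n_samples, theta.size))
    for i in range(n_samples):
        # Gaussian proposal centered at the current state
        proposal = theta + step * rng.standard_normal(theta.size)
        logp_prop = log_posterior(proposal)
        # Accept with probability min(1, p(proposal) / p(current))
        if np.log(rng.uniform()) < logp_prop - logp:
            theta, logp = proposal, logp_prop
        samples[i] = theta
    return samples

# Posterior robust prediction: average the predictive model over the
# posterior samples (a Monte Carlo estimate of the integral above), e.g.
#   samples = metropolis_hastings(log_posterior, theta0=np.zeros(2))
#   x_robust = np.mean([predict(t) for t in samples], axis=0)
```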

Additional Information

©2010 Taylor & Francis Group, London.

Additional details

Created: August 20, 2023
Modified: October 18, 2023