Probability distribution

In probability theory and statistics, a probability distribution is a function that gives the probabilities of occurrence of possible events for an experiment.[1][2] It is a mathematical description of a random phenomenon in terms of its sample space and the probabilities of events (subsets of the sample space).[3]

For instance, if X is used to denote the outcome of a coin toss ("the experiment"), then the probability distribution of X would take the value 0.5 (1 in 2 or 1/2) for X = heads, and 0.5 for X = tails (assuming that the coin is fair). More commonly, probability distributions are used to compare the relative occurrence of many different random values.

Probability distributions can be defined in different ways and for discrete or for continuous variables. Distributions with special properties or for especially important applications are given specific names.

Introduction

A probability distribution is a mathematical description of the probabilities of events, subsets of the sample space. The sample space, often represented in notation by Ω, is the set of all possible outcomes of a random phenomenon being observed. The sample space may be any set: a set of real numbers, a set of descriptive labels, a set of vectors, a set of arbitrary non-numerical values, etc. For example, the sample space of a coin flip could be Ω = {"heads", "tails"}.

To define probability distributions for the specific case of random variables (so the sample space can be seen as a numeric set), it is common to distinguish between discrete and continuous random variables. In the discrete case, it is sufficient to specify a probability mass function assigning a probability to each possible outcome (e.g. when throwing a fair die, each of the six digits “1” to “6”, corresponding to the number of dots on the die, has probability 1/6). The probability of an event is then defined to be the sum of the probabilities of all outcomes that satisfy the event; for example, the probability of the event "the die rolls an even value" is 1/6 + 1/6 + 1/6 = 1/2. In contrast, when a random variable takes values from a continuum then by convention, any individual outcome is assigned probability zero. For such continuous random variables, only events that include infinitely many outcomes such as intervals have probability greater than 0.
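
As a minimal sketch of the discrete case just described, the fair-die distribution can be written down as an explicit probability mass function, and the probability of an event is the sum over the outcomes in it (exact fractions are used here only for clarity):

```python
# A discrete distribution as an explicit probability mass function (pmf):
# a fair six-sided die assigns probability 1/6 to each face.
from fractions import Fraction

pmf = {face: Fraction(1, 6) for face in range(1, 7)}

# The probability of an event is the sum of the probabilities of the
# outcomes satisfying it, e.g. "the die rolls an even value".
p_even = sum(p for outcome, p in pmf.items() if outcome % 2 == 0)

print(p_even)             # 1/2
print(sum(pmf.values()))  # 1 -- the probabilities sum to one
```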

For example, consider measuring the weight of a piece of ham in the supermarket, and assume the scale can provide arbitrarily many digits of precision. Then, the probability that it weighs exactly 500 g must be zero: no matter how high the level of precision chosen, the digits omitted beyond that precision cannot be assumed to all be zero.

However, for the same use case, it is possible to meet quality control requirements such as that a package of "500 g" of ham must weigh between 490 g and 510 g with at least 98% probability. This is possible because this measurement does not require as much precision from the underlying equipment.

Figure 1: The left graph shows a probability density function. The right graph shows the cumulative distribution function. The value at a in the cumulative distribution equals the area under the probability density curve up to the point a.

Continuous probability distributions can be described by means of the cumulative distribution function, which describes the probability that the random variable is no larger than a given value (i.e., P(X ≤ x) for some x). The cumulative distribution function is the area under the probability density function from −∞ to x, as shown in figure 1.[4]

Most continuous probability distributions encountered in practice are not only continuous but also absolutely continuous. Such distributions can be described by their probability density function. Informally, the probability density f(x) of a random variable X describes the infinitesimal probability that X takes a value near x; that is, P(x ≤ X ≤ x + dx) ≈ f(x) dx as dx becomes arbitrarily small. The probability that X lies in a given interval can be computed rigorously by integrating the probability density function over that interval.[5]
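
The relationship between the density and the cumulative distribution function can be checked numerically; the following sketch (assuming SciPy is available) integrates the standard normal density up to a point and compares the result with the closed-form cdf:

```python
# Numerical check that the cdf is the integral of the pdf (cf. Figure 1),
# using the standard normal distribution as an example.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

x = 1.0
area, _ = quad(norm.pdf, -np.inf, x)  # integrate the density from -inf to x
print(area)         # ~0.8413
print(norm.cdf(x))  # matches the closed-form cumulative distribution function
```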

General probability definition

Let (Ω, ℱ, P) be a probability space, (E, ℰ) be a measurable space, and X: Ω → E be an E-valued random variable. Then the probability distribution of X is the pushforward measure X∗P of the probability measure P onto (E, ℰ) induced by X. Explicitly, this pushforward measure on (E, ℰ) is given by (X∗P)(B) = P(X⁻¹(B)) for B ∈ ℰ.

Any probability distribution is a probability measure on (E, ℰ) (in general different from P, unless X happens to be the identity map).[citation needed]

A probability distribution can be described in various forms, such as by a probability mass function or a cumulative distribution function. One of the most general descriptions, which applies for absolutely continuous and discrete variables, is by means of a probability function P whose input space 𝒜 is a σ-algebra, and gives a real number probability as its output, particularly, a number in [0, 1].

The probability function P can take as argument subsets of the sample space itself, as in the coin toss example, where the function was defined so that P(heads) = 0.5 and P(tails) = 0.5. However, because of the widespread use of random variables, which transform the sample space into a set of numbers (e.g., ℝ, ℕ), it is more common to study probability distributions whose arguments are subsets of these particular kinds of sets (number sets),[6] and all probability distributions discussed in this article are of this type. It is common to denote as P(X ∈ E) the probability that a certain value of the variable X belongs to a certain event E.[7][8]

The above probability function only characterizes a probability distribution if it satisfies all the Kolmogorov axioms, that is:

  1. P(X ∈ E) ≥ 0 for all events E, so the probability is non-negative
  2. P(X ∈ E) ≤ 1 for all events E, so no probability exceeds 1
  3. P(X ∈ ⋃ᵢ Eᵢ) = Σᵢ P(X ∈ Eᵢ) for any countable disjoint family of sets {Eᵢ}
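
These axioms are easy to exhibit on a finite example. The sketch below checks them for the uniform measure on the sample space of a fair die (finite additivity standing in for the countable case; exact fractions avoid rounding artifacts):

```python
# Checking the Kolmogorov axioms on a finite sample space (a fair die)
# equipped with the uniform probability measure.
from fractions import Fraction

omega = frozenset(range(1, 7))

def P(event):
    """Uniform measure: |E ∩ Ω| / |Ω| for an event E."""
    return Fraction(len(frozenset(event) & omega), len(omega))

assert all(0 <= P({k}) <= 1 for k in omega)  # axioms 1 and 2 on singletons
assert P(omega) == 1                         # the certain event has probability 1
a, b = {1, 2}, {5}                           # disjoint events
assert P(a | b) == P(a) + P(b)               # additivity (axiom 3)
```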

The concept of probability function is made more rigorous by defining it as the element of a probability space (Ω, 𝒜, P), where Ω is the set of possible outcomes, 𝒜 is the set of all subsets E ⊂ Ω whose probability can be measured, and P is the probability function, or probability measure, that assigns a probability to each of these measurable subsets E ∈ 𝒜.[9]

Probability distributions usually belong to one of two classes. A discrete probability distribution is applicable to the scenarios where the set of possible outcomes is discrete (e.g. a coin toss, a roll of a die) and the probabilities are encoded by a discrete list of the probabilities of the outcomes; in this case the discrete probability distribution is known as probability mass function. On the other hand, absolutely continuous probability distributions are applicable to scenarios where the set of possible outcomes can take on values in a continuous range (e.g. real numbers), such as the temperature on a given day. In the absolutely continuous case, probabilities are described by a probability density function, and the probability distribution is by definition the integral of the probability density function.[7][5][8] The normal distribution is a commonly encountered absolutely continuous probability distribution. More complex experiments, such as those involving stochastic processes defined in continuous time, may demand the use of more general probability measures.

A probability distribution whose sample space is one-dimensional (for example real numbers, list of labels, ordered labels or binary) is called univariate, while a distribution whose sample space is a vector space of dimension 2 or more is called multivariate. A univariate distribution gives the probabilities of a single random variable taking on various different values; a multivariate distribution (a joint probability distribution) gives the probabilities of a random vector – a list of two or more random variables – taking on various combinations of values. Important and commonly encountered univariate probability distributions include the binomial distribution, the hypergeometric distribution, and the normal distribution. A commonly encountered multivariate distribution is the multivariate normal distribution.

Besides the probability function, the cumulative distribution function, the probability mass function and the probability density function, the moment generating function and the characteristic function also serve to identify a probability distribution, as they uniquely determine an underlying cumulative distribution function.[10]

Figure 2: The probability density function (pdf) of the normal distribution, also called Gaussian or "bell curve", the most important absolutely continuous random distribution. As notated on the figure, the probabilities of intervals of values correspond to the area under the curve.

Terminology

Some key concepts and terms, widely used in the literature on the topic of probability distributions, are listed below.[1]

Basic terms
  • Random variable: takes values from a sample space; probabilities describe which values and sets of values are more likely to be taken.
  • Event: set of possible values (outcomes) of a random variable that occurs with a certain probability.
  • Probability function or probability measure: describes the probability that the event occurs.[11]
  • Cumulative distribution function: function evaluating the probability that X will take a value less than or equal to x for a random variable X (only for real-valued random variables).
  • Quantile function: the inverse of the cumulative distribution function. Gives x such that, with probability q, X will not exceed x.

Discrete probability distributions

  • Discrete probability distribution: for many random variables with finitely or countably infinitely many values.
  • Probability mass function (pmf): function that gives the probability that a discrete random variable is equal to some value.

Absolutely continuous probability distributions
  • Absolutely continuous probability distribution: for many random variables with uncountably many values.
  • Probability density function (pdf) or probability density: function whose value at any given sample (or point) in the sample space (the set of possible values taken by the random variable) can be interpreted as providing a relative likelihood that the value of the random variable would equal that sample.

Related terms

  • Support: set of values that can be assumed with non-zero probability (or probability density in the case of a continuous distribution) by the random variable. For a random variable X, it is sometimes denoted as R_X.
  • Tail:[12] the regions close to the bounds of the random variable, if the pmf or pdf are relatively low therein. Usually has the form {X > a}, {X < b}, or a union thereof.
  • Head:[12] the region where the pmf or pdf is relatively high. Usually has the form {a < X < b}.
  • Expected value or mean: the weighted average of the possible values, using their probabilities as their weights; or the continuous analog thereof.
  • Median: the value such that the set of values less than the median, and the set greater than the median, each have probabilities no greater than one-half.
  • Mode: for a discrete random variable, the value with highest probability; for an absolutely continuous random variable, a location at which the probability density function has a local peak.
  • Quantile: the q-quantile is the value x such that P(X < x) = q.
  • Variance: the second moment of the pmf or pdf about the mean; an important measure of the dispersion of the distribution.
  • Standard deviation: the square root of the variance, and hence another measure of dispersion.
  • Symmetry: a property of some distributions in which the portion of the distribution to the left of a specific value (usually the median) is a mirror image of the portion to its right.
  • Skewness: a measure of the extent to which a pmf or pdf "leans" to one side of its mean. The third standardized moment of the distribution.
  • Kurtosis: a measure of the "fatness" of the tails of a pmf or pdf. The fourth standardized moment of the distribution. (These moment-based summaries are estimated numerically in the sketch after this list.)
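
As a rough illustration of the summaries above, the following sketch (assuming NumPy and SciPy) estimates them from a large sample; the skewed exponential distribution and its parameters are purely illustrative:

```python
# Estimating the moment-based summaries from a sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=100_000)  # a skewed example distribution

print("mean    ", np.mean(x))         # expected value, ~2.0
print("median  ", np.median(x))       # ~2 ln 2 ≈ 1.386 for this distribution
print("variance", np.var(x))          # second central moment, ~4.0
print("std dev ", np.std(x))          # square root of the variance, ~2.0
print("skewness", stats.skew(x))      # third standardized moment, ~2
print("kurtosis", stats.kurtosis(x))  # excess kurtosis, ~6 for the exponential
```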

Cumulative distribution function

In the special case of a real-valued random variable, the probability distribution can equivalently be represented by a cumulative distribution function instead of a probability measure. The cumulative distribution function of a random variable X with regard to a probability distribution p is defined as F(x) = P(X ≤ x).

The cumulative distribution function of any real-valued random variable has the properties:

  • F is non-decreasing;
  • F is right-continuous;
  • 0 ≤ F(x) ≤ 1;
  • F(x) → 0 as x → −∞ and F(x) → 1 as x → ∞; and
  • P(a < X ≤ b) = F(b) − F(a).

Conversely, any function that satisfies the first four of the properties above is the cumulative distribution function of some probability distribution on the real numbers.[13]

Any probability distribution can be decomposed as the mixture of a discrete, an absolutely continuous and a singular continuous distribution,[14] and thus any cumulative distribution function admits a decomposition as the convex sum of the three according cumulative distribution functions.

Discrete probability distribution
Figure 3: The probability mass function (pmf) p(S) specifies the probability distribution for the sum S of counts from two dice. For example, the figure shows that p(11) = 2/36 = 1/18. The pmf allows the computation of probabilities of events such as P(S > 9) = 3/36 + 2/36 + 1/36 = 1/6, and all other probabilities in the distribution.
Figure 4: The probability mass function of a discrete probability distribution. The probabilities of the singletons {1}, {3}, and {7} are respectively 0.2, 0.5, 0.3. A set not containing any of these points has probability zero.
Figure 5: The cdf of a discrete probability distribution, ...
Figure 6: ... of a continuous probability distribution, ...
Figure 7: ... of a distribution which has both a continuous part and a discrete part

A discrete probability distribution is the probability distribution of a random variable that can take on only a countable number of values[15] (almost surely)[16] which means that the probability of any event E can be expressed as a (finite or countably infinite) sum: P(X ∈ E) = Σ_{ω ∈ A ∩ E} P(X = ω), where A is a countable set with P(X ∈ A) = 1. Thus the discrete random variables (i.e. random variables whose probability distribution is discrete) are exactly those with a probability mass function p(x) = P(X = x). In the case where the range of values is countably infinite, these values have to decline to zero fast enough for the probabilities to add up to 1. For example, if P(X = n) = 1/2ⁿ for n = 1, 2, ..., the sum of probabilities would be 1/2 + 1/4 + 1/8 + ... = 1.
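
That the probabilities 1/2ⁿ add up to 1 can be checked directly; the partial sums of the series approach 1, so the assignment is a valid probability mass function:

```python
# Partial sums of the pmf P(X = n) = 1/2**n from the text; the series
# 1/2 + 1/4 + 1/8 + ... converges to 1, so the pmf is properly normalized.
partial = sum(1 / 2**n for n in range(1, 60))
print(partial)  # ~1.0 (within floating-point precision)
```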

Well-known discrete probability distributions used in statistical modeling include the Poisson distribution, the Bernoulli distribution, the binomial distribution, the geometric distribution, the negative binomial distribution and categorical distribution.[3] When a sample (a set of observations) is drawn from a larger population, the sample points have an empirical distribution that is discrete, and which provides information about the population distribution. Additionally, the discrete uniform distribution is commonly used in computer programs that make equal-probability random selections between a number of choices.

Cumulative distribution function

A real-valued discrete random variable can equivalently be defined as a random variable whose cumulative distribution function increases only by jump discontinuities—that is, its cdf increases only where it "jumps" to a higher value, and is constant in intervals without jumps. The points where jumps occur are precisely the values which the random variable may take. Thus the cumulative distribution function has the form F(x) = P(X ≤ x) = Σ_{ω ≤ x} p(ω). The points where the cdf jumps always form a countable set; this may be any countable set and thus may even be dense in the real numbers.
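
A small sketch of such a step-function cdf, using the distribution of Figure 4 (weights 0.2, 0.5 and 0.3 at the points 1, 3 and 7; exact fractions keep the jumps exact):

```python
# The cdf of a discrete distribution is a step function that jumps at
# each point carrying probability mass and is constant in between.
from fractions import Fraction

pmf = {1: Fraction(2, 10), 3: Fraction(5, 10), 7: Fraction(3, 10)}

def cdf(x):
    """F(x) = sum of the pmf weights at points no larger than x."""
    return sum(p for value, p in pmf.items() if value <= x)

print(cdf(0.9), cdf(1), cdf(5), cdf(7))  # 0 1/5 7/10 1
```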

Dirac delta representation

A discrete probability distribution is often represented with Dirac measures, also called one-point distributions (see below), the probability distributions of deterministic random variables. For any outcome ω, let δ_ω be the Dirac measure concentrated at ω. Given a discrete probability distribution, there is a countable set A with P(X ∈ A) = 1 and a probability mass function p. If E is any event, then P(X ∈ E) = Σ_{ω ∈ A} p(ω) δ_ω(E), or in short, P_X = Σ_{ω ∈ A} p(ω) δ_ω.

Similarly, discrete distributions can be represented with the Dirac delta function as a generalized probability density function f, where f(x) = Σ_{ω ∈ A} p(ω) δ(x − ω), which means P(X ∈ E) = ∫_E f(x) dx = Σ_{ω ∈ A ∩ E} p(ω) for any event E.[17]

Indicator-function representation

For a discrete random variable X, let u₀, u₁, ... be the values it can take with non-zero probability. Denote Ωᵢ = X⁻¹(uᵢ) = {ω : X(ω) = uᵢ}, i = 0, 1, 2, ... These are disjoint sets, and for such sets P(⋃ᵢ Ωᵢ) = Σᵢ P(Ωᵢ) = Σᵢ P(X = uᵢ) = 1. It follows that the probability that X takes any value except u₀, u₁, ... is zero, and thus one can write X as X(ω) = Σᵢ uᵢ 1_{Ωᵢ}(ω) except on a set of probability zero, where 1_{Ωᵢ} is the indicator function of Ωᵢ. This may serve as an alternative definition of discrete random variables.

One-point distribution

A special case is the discrete distribution of a random variable that can take on only one fixed value, in other words, a Dirac measure. Expressed formally, the random variable X has a one-point distribution if it has a possible outcome x such that P(X = x) = 1.[18] All other possible outcomes then have probability 0. Its cumulative distribution function jumps immediately from 0 before x to 1 at x. It is closely related to a deterministic distribution, which cannot take on any other value, while a one-point distribution can take other values, though only with probability 0. For most practical purposes the two notions are equivalent.

Absolutely continuous probability distribution

An absolutely continuous probability distribution is a probability distribution on the real numbers with uncountably many possible values, such as a whole interval in the real line, and where the probability of any event can be expressed as an integral.[19] More precisely, a real random variable X has an absolutely continuous probability distribution if there is a function f: ℝ → [0, ∞) such that for each interval [a, b] ⊂ ℝ the probability of X belonging to [a, b] is given by the integral of f over [a, b]:[20][21] P(a ≤ X ≤ b) = ∫_a^b f(x) dx. This is the definition of a probability density function, so that absolutely continuous probability distributions are exactly those with a probability density function. In particular, the probability for X to take any single value a (that is, a ≤ X ≤ a) is zero, because an integral with coinciding upper and lower limits is always equal to zero. If the interval [a, b] is replaced by any measurable set A, the corresponding equality still holds: P(X ∈ A) = ∫_A f(x) dx.

An absolutely continuous random variable is a random variable whose probability distribution is absolutely continuous.

There are many examples of absolutely continuous probability distributions: normal, uniform, chi-squared, and others.

Cumulative distribution function

Absolutely continuous probability distributions as defined above are precisely those with an absolutely continuous cumulative distribution function. In this case, the cumulative distribution function has the form F(x) = P(X ≤ x) = ∫_−∞^x f(t) dt, where f is a density of the random variable X with regard to the distribution P.

Note on terminology: absolutely continuous distributions ought to be distinguished from continuous distributions, which are those having a continuous cumulative distribution function. Every absolutely continuous distribution is a continuous distribution, but the converse is not true: there exist singular distributions, which are neither absolutely continuous nor discrete nor a mixture of those, and do not have a density. An example is given by the Cantor distribution. Some authors however use the term "continuous distribution" to denote all distributions whose cumulative distribution function is absolutely continuous, i.e. refer to absolutely continuous distributions as continuous distributions.[7]

For a more general definition of density functions and the equivalent absolutely continuous measures see absolutely continuous measure.

Kolmogorov definition

In the measure-theoretic formalization of probability theory, a random variable is defined as a measurable function X from a probability space (Ω, ℱ, P) to a measurable space (𝒳, 𝒜). Given that probabilities of events of the form {ω ∈ Ω : X(ω) ∈ A} satisfy Kolmogorov's probability axioms, the probability distribution of X is the image measure X∗P of X, which is a probability measure on (𝒳, 𝒜) satisfying X∗P = P ∘ X⁻¹.[22][23][24]

Other kinds of distributions
Figure 8: One solution for the Rabinovich–Fabrikant equations. What is the probability of observing a state at a certain place of the support (i.e., the red subset)?

Absolutely continuous and discrete distributions with support on ℝᵏ or ℕᵏ are extremely useful to model a myriad of phenomena,[7][4] since most practical distributions are supported on relatively simple subsets, such as hypercubes or balls. However, this is not always the case, and there exist phenomena whose support is a complicated curve or similar set within some space. In these cases, the probability distribution is supported on the image of such a curve, and is likely to be determined empirically rather than by a closed formula.[25]

One example is shown in the figure to the right, which displays the evolution of a system of differential equations (commonly known as the Rabinovich–Fabrikant equations) that can be used to model the behaviour of Langmuir waves in plasma.[26] When this phenomenon is studied, the observed states from the subset are as indicated in red. So one could ask what is the probability of observing a state in a certain position of the red subset; if such a probability exists, it is called the probability measure of the system.[27][25]

This kind of complicated support appears quite frequently in dynamical systems. It is not simple to establish that the system has a probability measure, and the main problem is the following. Let t₁ ≪ t₂ ≪ t₃ be instants in time and E a subset of the support; if the probability measure exists for the system, one would expect the frequency of observing states inside the set E to be equal in the intervals [t₁, t₂] and [t₂, t₃], which might not happen; for example, it could oscillate similarly to a sine, sin(t), whose limit as t → ∞ does not converge. Formally, the measure exists only if the limit of the relative frequency converges when the system is observed into the infinite future.[28] The branch of dynamical systems that studies the existence of a probability measure is ergodic theory.

Note that even in these cases, the probability distribution, if it exists, might still be termed "absolutely continuous" or "discrete" depending on whether the support is uncountable or countable, respectively.

Random number generation

Most algorithms are based on a pseudorandom number generator that produces numbers that are uniformly distributed in the half-open interval [0, 1). These random variates are then transformed via some algorithm to create a new random variate having the required probability distribution. With this source of uniform pseudo-randomness, realizations of any random variable can be generated.[29]

For example, suppose U has a uniform distribution between 0 and 1. To construct a random Bernoulli variable X for some 0 < p < 1, define X = 1 if U < p and X = 0 if U ≥ p. We thus have P(X = 1) = P(U < p) = p and P(X = 0) = P(U ≥ p) = 1 − p. Therefore, the random variable X has a Bernoulli distribution with parameter p.[29]
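
A direct sketch of this construction (assuming NumPy; the value of p is illustrative):

```python
# Turning uniform variates U into Bernoulli(p) variates X, as in the text.
import numpy as np

rng = np.random.default_rng(42)
p = 0.3
u = rng.random(100_000)   # U uniform on [0, 1)
x = (u < p).astype(int)   # X = 1 if U < p, else X = 0
print(x.mean())           # ~0.3, since P(X = 1) = P(U < p) = p
```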

This method can be adapted to generate real-valued random variables with any distribution: let F be any cumulative distribution function, and let Finv be the generalized left inverse of F, also known in this context as the quantile function or inverse distribution function: Finv(p) = inf{x ∈ ℝ : F(x) ≥ p}. Then, Finv(p) ≤ x if and only if p ≤ F(x). As a result, if U is uniformly distributed on [0, 1], then the cumulative distribution function of X = Finv(U) is F.

For example, suppose we want to generate a random variable having an exponential distribution with parameter λ, that is, with cumulative distribution function F(x) = 1 − e^(−λx) for x ≥ 0, so Finv(p) = −ln(1 − p)/λ, and if U has a uniform distribution on [0, 1) then X = −ln(1 − U)/λ has an exponential distribution with parameter λ.[29]
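
The following sketch of this inverse-transform construction (assuming NumPy; λ = 2 is illustrative) checks the sample against the known mean and cdf of the exponential distribution:

```python
# Inverse-transform sampling for the exponential distribution:
# F(x) = 1 - exp(-lam*x), hence Finv(p) = -ln(1 - p)/lam.
import numpy as np

rng = np.random.default_rng(7)
lam = 2.0
u = rng.random(100_000)     # U uniform on [0, 1)
x = -np.log(1.0 - u) / lam  # X = Finv(U) is exponential with rate lam

print(x.mean())             # ~1/lam = 0.5
print(np.mean(x <= 1.0))    # ~F(1) = 1 - exp(-2) ≈ 0.8647
```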

Although from a theoretical point of view this method always works, in practice the inverse distribution function is unknown and/or cannot be computed efficiently. In this case, other methods (such as the Monte Carlo method) are used.

Common probability distributions and their applications

The concept of the probability distribution and the random variables which they describe underlies the mathematical discipline of probability theory, and the science of statistics. There is spread or variability in almost any value that can be measured in a population (e.g. height of people, durability of a metal, sales growth, traffic flow, etc.); almost all measurements are made with some intrinsic error; in physics, many processes are described probabilistically, from the kinetic properties of gases to the quantum mechanical description of fundamental particles. For these and many other reasons, simple numbers are often inadequate for describing a quantity, while probability distributions are often more appropriate.

The following is a list of some of the most common probability distributions, grouped by the type of process that they are related to. For a more complete list, see list of probability distributions, which groups by the nature of the outcome being considered (discrete, absolutely continuous, multivariate, etc.)

All of the univariate distributions below are singly peaked; that is, it is assumed that the values cluster around a single point. In practice, actually observed quantities may cluster around multiple values. Such quantities can be modeled using a mixture distribution.
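
A minimal sketch of such a mixture (assuming NumPy; the two normal components and their weights are illustrative) samples each observation from one of two clusters:

```python
# Sampling a bimodal quantity from a two-component normal mixture.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
first = rng.random(n) < 0.4              # pick component 0 with probability 0.4
x = np.where(first,
             rng.normal(-2.0, 0.5, n),   # component 0
             rng.normal(3.0, 1.0, n))    # component 1
print(x.mean())  # ~0.4*(-2) + 0.6*3 = 1.0: the mixture mean is weight-averaged
```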

Linear growth (e.g. errors, offsets)
  • Normal distribution (Gaussian distribution), for a single such quantity; the most commonly used absolutely continuous distribution

Exponential growth (e.g. prices, incomes, populations)

  • Log-normal distribution, for a single such quantity whose logarithm is normally distributed
  • Pareto distribution, for a single such quantity whose tail decays as a power law

Uniformly distributed quantities

  • Discrete uniform distribution, for a finite set of equally likely values
  • Continuous uniform distribution, for values distributed uniformly in an interval

Bernoulli trials (yes/no events, with a given probability)

  • Bernoulli distribution, for the outcome of a single Bernoulli trial
  • Binomial distribution, for the number of "positive occurrences" in a fixed number of independent trials
  • Geometric distribution, for the number of trials needed to obtain the first positive occurrence
  • Negative binomial distribution, for the number of failures before a given number of positive occurrences

Categorical outcomes (events with K possible outcomes)

  • Categorical distribution, for a single categorical trial
  • Multinomial distribution, for the counts of each outcome over several independent trials

Poisson process (events that occur independently with a given rate)

  • Poisson distribution, for the number of occurrences in a fixed interval of time or space
  • Exponential distribution, for the time before the next occurrence
  • Gamma distribution, for the time before the next k occurrences

Absolute values of vectors with normally distributed components
  • Rayleigh distribution, for the distribution of vector magnitudes with Gaussian distributed orthogonal components. Rayleigh distributions are found in RF signals with Gaussian real and imaginary components.
  • Rice distribution, a generalization of the Rayleigh distribution for the case where there is a stationary background signal component. Found in Rician fading of radio signals due to multipath propagation and in MR images with noise corruption on non-zero NMR signals. (Both distributions are sketched numerically after this list.)
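
A quick sketch of both constructions (assuming NumPy; σ and the signal offset ν are illustrative):

```python
# Rayleigh: magnitude of a vector with two zero-mean Gaussian components.
# Rice: the same magnitude when one component carries a constant signal.
import numpy as np

rng = np.random.default_rng(1)
sigma, nu, n = 1.0, 2.0, 100_000
i = rng.normal(0.0, sigma, n)         # in-phase component
q = rng.normal(0.0, sigma, n)         # quadrature component

rayleigh = np.hypot(i, q)             # |(I, Q)| with zero-mean components
rice = np.hypot(i + nu, q)            # offset by a background signal nu

print(rayleigh.mean())                # ~sigma * sqrt(pi/2) ≈ 1.2533
print(rice.mean() > rayleigh.mean())  # True: the offset shifts magnitudes up
```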

Normally distributed quantities operated with sum of squares

  • Chi-squared distribution, for the sum of squares of independent standard normal variables; useful, e.g., for inference regarding the sample variance
  • Student's t-distribution and F-distribution, for ratios involving such sums of squares; useful for inference about normally distributed samples

As conjugate prior distributions in Bayesian inference

  • Beta distribution, conjugate to the Bernoulli and binomial distributions
  • Dirichlet distribution, conjugate to the categorical and multinomial distributions
  • Gamma distribution, conjugate to the rate parameter of the Poisson distribution

Some specialized applications of probability distributions
  • The cache language models and other statistical language models used in natural language processing to assign probabilities to the occurrence of particular words and word sequences do so by means of probability distributions.
  • In quantum mechanics, the probability density of finding the particle at a given point is proportional to the square of the magnitude of the particle's wavefunction at that point (see Born rule). Therefore, the probability distribution function of the position of a particle is described by P(a ≤ x ≤ b) = ∫_a^b |ψ(x)|² dx, the probability that the particle's position x will be in the interval a ≤ x ≤ b in dimension one, and by a similar triple integral in dimension three. This is a key principle of quantum mechanics.[31]
  • Probabilistic load flow in power-flow studies treats the uncertainties of the input variables as probability distributions and expresses the power-flow calculation itself in terms of probability distributions.[32]
  • Prediction of occurrences of natural phenomena, such as tropical cyclones and hail, based on previous frequency distributions (e.g., of magnitudes or of times between events).[33]

Fitting

Probability distribution fitting or simply distribution fitting is the fitting of a probability distribution to a series of data concerning the repeated measurement of a variable phenomenon. The aim of distribution fitting is to predict the probability or to forecast the frequency of occurrence of the magnitude of the phenomenon in a certain interval.

There are many probability distributions (see list of probability distributions) of which some can be fitted more closely to the observed frequency of the data than others, depending on the characteristics of the phenomenon and of the distribution. The distribution giving a close fit is supposed to lead to good predictions.

In distribution fitting, therefore, one needs to select a distribution that suits the data well.
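
A bare-bones sketch of distribution fitting (assuming SciPy; the normal model and the synthetic "observations" are illustrative) estimates the parameters by maximum likelihood and then uses the fitted distribution for prediction:

```python
# Fitting a normal distribution to data and predicting with the result.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
data = rng.normal(loc=10.0, scale=2.0, size=5_000)  # stand-in measurements

mu, sigma = stats.norm.fit(data)  # maximum-likelihood parameter estimates
print(mu, sigma)                  # ~10.0 and ~2.0

# Probability that a future observation exceeds 13 under the fitted model:
print(stats.norm.sf(13.0, loc=mu, scale=sigma))  # ~0.067
```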


References

Citations
  1. ^ a b Everitt, Brian (2006). The Cambridge dictionary of statistics (3rd ed.). Cambridge, UK: Cambridge University Press. ISBN 978-0-511-24688-3. OCLC 161828328.
  2. ^ Ash, Robert B. (2008). Basic probability theory (Dover ed.). Mineola, N.Y.: Dover Publications. pp. 66–69. ISBN 978-0-486-46628-6. OCLC 190785258.
  3. ^ a b Evans, Michael; Rosenthal, Jeffrey S. (2010). Probability and statistics: the science of uncertainty (2nd ed.). New York: W.H. Freeman and Co. p. 38. ISBN 978-1-4292-2462-8. OCLC 473463742.
  4. ^ a b Dekking, Michel (2005). A Modern Introduction to Probability and Statistics: Understanding Why and How. London, UK: Springer. ISBN 978-1-85233-896-1. OCLC 262680588.
  5. ^ a b "1.3.6.1. What is a Probability Distribution". www.itl.nist.gov. Retrieved 2025-08-06.
  6. ^ Walpole, R.E.; Myers, R.H.; Myers, S.L.; Ye, K. (1999). Probability and statistics for engineers. Prentice Hall.
  7. ^ a b c d Ross, Sheldon M. (2010). A first course in probability. Pearson.
  8. ^ a b DeGroot, Morris H.; Schervish, Mark J. (2002). Probability and Statistics. Addison-Wesley.
  9. ^ Billingsley, P. (1986). Probability and measure. Wiley. ISBN 9780471804789.
  10. ^ Shephard, N.G. (1991). "From characteristic function to distribution function: a simple framework for the theory". Econometric Theory. 7 (4): 519–529. doi:10.1017/S0266466600004746. S2CID 14668369.
  11. ^ Chapters 1 and 2 of Vapnik (1998)
  12. ^ a b More information and examples can be found in the articles Heavy-tailed distribution, Long-tailed distribution, fat-tailed distribution
  13. ^ Çınlar, Erhan (2011). Probability and stochastics. New York: Springer. p. 57. ISBN 9780387878584.
  14. ^ see Lebesgue's decomposition theorem
  15. ^ Çınlar, Erhan (2011). Probability and stochastics. New York: Springer. p. 51. ISBN 9780387878591. OCLC 710149819.
  16. ^ Cohn, Donald L. (1993). Measure theory. Birkhäuser.
  17. ^ Khuri, André I. (March 2004). "Applications of Dirac's delta function in statistics". International Journal of Mathematical Education in Science and Technology. 35 (2): 185–195. doi:10.1080/00207390310001638313. ISSN 0020-739X. S2CID 122501973.
  18. ^ Fisz, Marek (1963). Probability Theory and Mathematical Statistics (3rd ed.). John Wiley & Sons. p. 129. ISBN 0-471-26250-1.
  19. ^ Jeffrey Seth Rosenthal (2000). A First Look at Rigorous Probability Theory. World Scientific.
  20. ^ Chapter 3.2 of DeGroot & Schervish (2002)
  21. ^ Bourne, Murray. "11. Probability Distributions - Concepts". www.intmath.com. Retrieved 2025-08-06.
  22. ^ Stroock, Daniel W. (1999). Probability theory: an analytic view (Rev. ed.). Cambridge, England: Cambridge University Press. p. 11. ISBN 978-0521663496. OCLC 43953136.
  23. ^ Kolmogorov, Andrey (1950) [1933]. Foundations of the theory of probability. New York, USA: Chelsea Publishing Company. pp. 21–24.
  24. ^ Joyce, David (2014). "Axioms of Probability" (PDF). Clark University. Retrieved December 5, 2019.
  25. ^ a b Alligood, K.T.; Sauer, T.D.; Yorke, J.A. (1996). Chaos: an introduction to dynamical systems. Springer.
  26. ^ Rabinovich, M.I.; Fabrikant, A.L. (1979). "Stochastic self-modulation of waves in nonequilibrium media". J. Exp. Theor. Phys. 77: 617–629. Bibcode:1979JETP...50..311R.
  27. ^ Section 1.9 of Ross, S.M.; Peköz, E.A. (2007). A second course in probability (PDF).
  28. ^ Walters, Peter (2000). An Introduction to Ergodic Theory. Springer.
  29. ^ a b c Dekking, Frederik Michel; Kraaikamp, Cornelis; Lopuhaä, Hendrik Paul; Meester, Ludolf Erwin (2005), "Why probability and statistics?", A Modern Introduction to Probability and Statistics, Springer London, pp. 1–11, doi:10.1007/1-84628-168-7_1, ISBN 978-1-85233-896-1
  30. ^ Bishop, Christopher M. (2006). Pattern recognition and machine learning. New York: Springer. ISBN 0-387-31073-8. OCLC 71008143.
  31. ^ Chang, Raymond (2014). Physical chemistry for the chemical sciences. Thoman, John W., Jr. Mill Valley, California. pp. 403–406. ISBN 978-1-68015-835-9. OCLC 927509011.
  32. ^ Chen, P.; Chen, Z.; Bak-Jensen, B. (April 2008). "Probabilistic load flow: A review". 2008 Third International Conference on Electric Utility Deregulation and Restructuring and Power Technologies. pp. 1586–1591. doi:10.1109/drpt.2008.4523658. ISBN 978-7-900714-13-8. S2CID 18669309.
  33. ^ Maity, Rajib (2018). Statistical methods in hydrology and hydroclimatology. Singapore: Springer. ISBN 978-981-10-8779-0. OCLC 1038418263.

Sources

  • Vapnik, Vladimir N. (1998). Statistical Learning Theory. New York: John Wiley & Sons. ISBN 0-471-03003-1.