

From Wikipedia, the free encyclopedia

Box plot and probability density function of a normal distribution N(0, σ²).
Geometric visualisation of the mode, median and mean of an arbitrary unimodal probability density function.[1]

In probability theory, a probability density function (PDF), density function, or density of an absolutely continuous random variable, is a function whose value at any given sample (or point) in the sample space (the set of possible values taken by the random variable) can be interpreted as providing a relative likelihood that the value of the random variable would be equal to that sample.[2][3] In other words, probability density is the probability per unit length. While the absolute likelihood for a continuous random variable to take on any particular value is zero (since there is an infinite set of possible values to begin with), the value of the PDF at two different samples can be used to infer, in any particular draw of the random variable, how much more likely it is that the random variable would be close to one sample compared to the other sample.

More precisely, the PDF is used to specify the probability of the random variable falling within a particular range of values, as opposed to taking on any one value. This probability is given by the integral of a continuous variable's PDF over that range, where the integral is the nonnegative area under the density function between the lowest and greatest values of the range. The PDF is nonnegative everywhere, and the area under the entire curve is equal to one, such that the probability of the random variable falling within the set of possible values is 100%.

The terms probability distribution function and probability function can also denote the probability density function. However, this use is not standard among probabilists and statisticians. In other sources, "probability distribution function" may be used when the probability distribution is defined as a function over general sets of values or it may refer to the cumulative distribution function (CDF), or it may be a probability mass function (PMF) rather than the density. Density function itself is also used for the probability mass function, leading to further confusion.[4] In general the PMF is used in the context of discrete random variables (random variables that take values on a countable set), while the PDF is used in the context of continuous random variables.

Example

Examples of four continuous probability density functions.

Suppose bacteria of a certain species typically live 4 to 6 hours. The probability that a bacterium lives exactly 5 hours is equal to zero. Many bacteria live for approximately 5 hours, but there is no chance that any given bacterium dies at exactly 5.00... hours. However, the probability that the bacterium dies between 5 hours and 5.01 hours is quantifiable. Suppose the answer is 0.02 (i.e., 2%). Then, the probability that the bacterium dies between 5 hours and 5.001 hours should be about 0.002, since this time interval is one-tenth as long as the previous. The probability that the bacterium dies between 5 hours and 5.0001 hours should be about 0.0002, and so on.

In this example, the ratio (probability of dying during an interval) / (duration of the interval) is approximately constant, and equal to 2 per hour (or 2 hour⁻¹). For example, there is 0.02 probability of dying in the 0.01-hour interval between 5 and 5.01 hours, and (0.02 probability / 0.01 hours) = 2 hour⁻¹. This quantity 2 hour⁻¹ is called the probability density for dying at around 5 hours. Therefore, the probability that the bacterium dies at 5 hours can be written as (2 hour⁻¹) dt. This is the probability that the bacterium dies within an infinitesimal window of time around 5 hours, where dt is the duration of this window. For example, the probability that it lives longer than 5 hours, but shorter than (5 hours + 1 nanosecond), is (2 hour⁻¹)×(1 nanosecond) ≈ 6×10⁻¹³ (using the unit conversion 3.6×10¹² nanoseconds = 1 hour).

There is a probability density function f with f(5 hours) = 2 hour⁻¹. The integral of f over any window of time (not only infinitesimal windows but also large windows) is the probability that the bacterium dies in that window.
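Under the locally constant density assumed in this example, the probability over a short window is just density × width. A minimal sketch (the 2 hour⁻¹ value comes from the text; the helper name is our own):

```python
# Constant density of 2 per hour near t = 5 hours (value from the text above).
# The probability of dying in a short window is the integral of the density
# over that window, which for a constant density is density * width.
density = 2.0  # hour^-1

def prob_in_window(start_h, width_h, f=density):
    # For a (locally) constant density, the integral reduces to f * width.
    return f * width_h

print(prob_in_window(5.0, 0.01))   # the 2% from the text
print(prob_in_window(5.0, 0.001))
```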

Absolutely continuous univariate distributions

A probability density function is most commonly associated with absolutely continuous univariate distributions. A random variable X has density f_X, where f_X is a non-negative Lebesgue-integrable function, if:

Pr[a ≤ X ≤ b] = ∫_a^b f_X(x) dx.

Hence, if F_X is the cumulative distribution function of X, then:

F_X(x) = ∫_{−∞}^x f_X(u) du,

and (if f_X is continuous at x)

f_X(x) = d F_X(x) / dx.

Intuitively, one can think of f_X(x) dx as being the probability of X falling within the infinitesimal interval [x, x + dx].

Formal definition

(This definition may be extended to any probability distribution using the measure-theoretic definition of probability.)

A random variable X with values in a measurable space (𝒳, 𝒜) (usually Rⁿ with the Borel sets as measurable subsets) has as probability distribution the pushforward measure X∗P on (𝒳, 𝒜): the density of X with respect to a reference measure μ on (𝒳, 𝒜) is the Radon–Nikodym derivative:

f = dX∗P / dμ.

That is, f is any measurable function with the property that:

Pr[X ∈ A] = ∫_A dX∗P = ∫_A f dμ

for any measurable set A ∈ 𝒜.

Discussion

In the continuous univariate case above, the reference measure is the Lebesgue measure. The probability mass function of a discrete random variable is the density with respect to the counting measure over the sample space (usually the set of integers, or some subset thereof).

It is not possible to define a density with reference to an arbitrary measure (e.g., one cannot choose the counting measure as a reference for a continuous random variable). Furthermore, when it does exist, the density is almost unique, meaning that any two such densities coincide almost everywhere.

Further details

Unlike a probability, a probability density function can take on values greater than one; for example, the continuous uniform distribution on the interval [0, 1/2] has probability density f(x) = 2 for 0 ≤ x ≤ 1/2 and f(x) = 0 elsewhere.
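This example can be checked numerically: although the density value 2 exceeds 1, the total area under the density is still 1. A sketch using a simple Riemann sum:

```python
import numpy as np

# Uniform density on [0, 1/2] from the text: f(x) = 2 there, 0 elsewhere.
def f(x):
    return np.where((x >= 0) & (x <= 0.5), 2.0, 0.0)

# Riemann sum over a grid covering the support: the density exceeds 1,
# yet the total area under the curve is still 1.
x = np.linspace(-0.5, 1.0, 150001)
dx = x[1] - x[0]
area = np.sum(f(x)) * dx
print(area)  # close to 1.0 up to grid edge effects
```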

The standard normal distribution has probability density

f(x) = (1/√(2π)) e^{−x²/2}.

If a random variable X is given and its distribution admits a probability density function f, then the expected value of X (if the expected value exists) can be calculated as

E[X] = ∫_{−∞}^{∞} x f(x) dx.
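For instance, the mean of the uniform density on [0, 1/2] discussed above is 0.25, which a numerical integration of x·f(x) reproduces (a minimal sketch):

```python
import numpy as np

# E[X] = integral of x * f(x) for the uniform density f(x) = 2 on [0, 1/2].
n = 100_000
x = (np.arange(n) + 0.5) * (0.5 / n)   # midpoints of n cells covering [0, 1/2]
dx = 0.5 / n
mean = np.sum(x * 2.0) * dx            # midpoint-rule approximation of E[X]
print(mean)  # ≈ 0.25
```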

Not every probability distribution has a density function: the distributions of discrete random variables do not; nor does the Cantor distribution, even though it has no discrete component, i.e., does not assign positive probability to any individual point.

A distribution has a density function if and only if its cumulative distribution function F(x) is absolutely continuous.[5] In this case: F is almost everywhere differentiable, and its derivative can be used as a probability density:

dF(x)/dx = f(x).

If a probability distribution admits a density, then the probability of every one-point set {a} is zero; the same holds for finite and countable sets.

Two probability densities f and g represent the same probability distribution precisely if they differ only on a set of Lebesgue measure zero.

In the field of statistical physics, a non-formal reformulation of the relation above between the derivative of the cumulative distribution function and the probability density function is generally used as the definition of the probability density function. This alternate definition is the following:

If dt is an infinitely small number, the probability that X is included within the interval (t, t + dt) is equal to f(t) dt, or:

Pr(t < X < t + dt) = f(t) dt.

It is possible to represent certain discrete random variables, as well as random variables involving both a continuous and a discrete part, with a generalized probability density function using the Dirac delta function. (This is not possible with a probability density function in the sense defined above; it may be done with a distribution.) For example, consider a binary discrete random variable having the Rademacher distribution—that is, taking −1 or 1 for values, with probability 1/2 each. The density of probability associated with this variable is:

f(t) = (1/2)(δ(t + 1) + δ(t − 1)).

More generally, if a discrete variable can take n different values among real numbers, then the associated probability density function is:

f(t) = Σ_{i=1}^{n} p_i δ(t − x_i),

where x_1, …, x_n are the discrete values accessible to the variable and p_1, …, p_n are the probabilities associated with these values.

This substantially unifies the treatment of discrete and continuous probability distributions. The above expression allows for determining statistical characteristics of such a discrete variable (such as the mean, variance, and kurtosis), starting from the formulas given for a continuous distribution of the probability.
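With the delta-function form, moments of a discrete variable reduce to probability-weighted sums. A minimal sketch for the Rademacher distribution, whose mean is 0 and variance is 1:

```python
# Moments of a discrete variable written with Dirac deltas reduce to
# probability-weighted sums. Rademacher: values ±1 with probability 1/2 each.
values = [-1.0, 1.0]
probs = [0.5, 0.5]

mean = sum(p * v for p, v in zip(probs, values))
variance = sum(p * (v - mean) ** 2 for p, v in zip(probs, values))
print(mean, variance)  # 0.0 1.0
```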

Families of densities

It is common for probability density functions (and probability mass functions) to be parametrized—that is, to be characterized by unspecified parameters. For example, the normal distribution is parametrized in terms of the mean and the variance, denoted by μ and σ² respectively, giving the family of densities

f(x; μ, σ²) = (1/(σ√(2π))) e^{−(x−μ)²/(2σ²)}.

Different values of the parameters describe different distributions of different random variables on the same sample space (the same set of all possible values of the variable); this sample space is the domain of the family of random variables that this family of distributions describes. A given set of parameters describes a single distribution within the family sharing the functional form of the density. From the perspective of a given distribution, the parameters are constants, and terms in a density function that contain only parameters, but not variables, are part of the normalization factor of a distribution (the multiplicative factor that ensures that the area under the density—the probability of something in the domain occurring—equals 1). This normalization factor is outside the kernel of the distribution.

Since the parameters are constants, reparametrizing a density in terms of different parameters to give a characterization of a different random variable in the family, means simply substituting the new parameter values into the formula in place of the old ones.

Densities associated with multiple variables

For continuous random variables X_1, …, X_n, it is also possible to define a probability density function associated with the set as a whole, often called the joint probability density function. This density function is defined as a function of the n variables, such that, for any domain D in the n-dimensional space of the values of the variables X_1, …, X_n, the probability that a realisation of the set of variables falls inside the domain D is

Pr(X_1, …, X_n ∈ D) = ∫_D f_{X_1,…,X_n}(x_1, …, x_n) dx_1 ⋯ dx_n.

If F(x_1, …, x_n) = Pr(X_1 ≤ x_1, …, X_n ≤ x_n) is the cumulative distribution function of the vector (X_1, …, X_n), then the joint probability density function can be computed as a partial derivative

f(x_1, …, x_n) = ∂ⁿF / (∂x_1 ⋯ ∂x_n), evaluated at (x_1, …, x_n).

Marginal densities

For i = 1, 2, …, n, let f_{X_i}(x_i) be the probability density function associated with variable X_i alone. This is called the marginal density function, and can be deduced from the probability density associated with the random variables X_1, …, X_n by integrating over all values of the other n − 1 variables:

f_{X_i}(x_i) = ∫ f(x_1, …, x_n) dx_1 ⋯ dx_{i−1} dx_{i+1} ⋯ dx_n.
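Numerically, marginalizing means integrating the joint density over the other variables. A sketch with a hypothetical joint density f(x, y) = 4xy on [0, 1]² (our own illustration, not from the text), whose X-marginal is 2x:

```python
import numpy as np

# Marginal of X from a joint density: f_X(x) = integral of f(x, y) over y.
# Hypothetical joint density f(x, y) = 4xy on [0, 1]^2; its X-marginal is 2x.
n = 100_000
y = (np.arange(n) + 0.5) / n   # midpoints on (0, 1)
dy = 1.0 / n

def marginal_x(x):
    return np.sum(4.0 * x * y) * dy   # midpoint-rule integral over y

print(marginal_x(0.3))  # ≈ 0.6, i.e. 2 * 0.3
```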

Independence

Continuous random variables X_1, …, X_n admitting a joint density are all independent from each other if and only if

f_{X_1,…,X_n}(x_1, …, x_n) = f_{X_1}(x_1) ⋯ f_{X_n}(x_n).

Corollary

If the joint probability density function of a vector of n random variables can be factored into a product of n functions of one variable,

f_{X_1,…,X_n}(x_1, …, x_n) = f_1(x_1) ⋯ f_n(x_n)

(where each f_i is not necessarily a density), then the n variables in the set are all independent from each other, and the marginal probability density function of each of them is given by

f_{X_i}(x_i) = f_i(x_i) / ∫ f_i(x) dx.

Example

This elementary example illustrates the above definition of multidimensional probability density functions in the simple case of a function of a set of two variables. Let us call R a 2-dimensional random vector of coordinates (X, Y): the probability to obtain R in the quarter plane of positive x and y is

Pr(X > 0, Y > 0) = ∫_0^∞ ∫_0^∞ f_{X,Y}(x, y) dx dy.

Function of random variables and change of variables in the probability density function

If the probability density function of a random variable (or vector) X is given as f_X(x), it is possible (but often not necessary; see below) to calculate the probability density function of some variable Y = g(X). This is also called a "change of variable" and is in practice used to generate a random variable of arbitrary shape f_{g(X)} = f_Y using a known (for instance, uniform) random number generator.

It is tempting to think that in order to find the expected value E(g(X)), one must first find the probability density f_{g(X)} of the new random variable Y = g(X). However, rather than computing

E(g(X)) = ∫_{−∞}^{∞} y f_{g(X)}(y) dy,

one may find instead

E(g(X)) = ∫_{−∞}^{∞} g(x) f_X(x) dx.

The values of the two integrals are the same in all cases in which both X and g(X) actually have probability density functions. It is not necessary that g be a one-to-one function. In some cases the latter integral is computed much more easily than the former. See Law of the unconscious statistician.
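The two integrals can be compared numerically. A sketch with X uniform on [0, 1] and g(x) = x², so Y = g(X) has density f_Y(y) = 1/(2√y) on (0, 1] (standard facts, used here only as an illustration):

```python
import numpy as np

# E[g(X)] two ways, for X ~ Uniform(0, 1) and g(x) = x^2.
# Direct route: integrate g(x) f_X(x) with f_X = 1 on (0, 1).
# Indirect route: derive f_Y(y) = 1/(2*sqrt(y)) and integrate y f_Y(y).
n = 200_000
grid = (np.arange(n) + 0.5) / n   # midpoints on (0, 1)
d = 1.0 / n

e_direct = np.sum(grid**2) * d                        # integral of g(x) f_X(x)
e_via_fy = np.sum(grid / (2.0 * np.sqrt(grid))) * d   # integral of y f_Y(y)
print(e_direct, e_via_fy)  # both ≈ 1/3
```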

Scalar to scalar

Let g : R → R be a monotonic function, then the resulting density function is[6]

f_Y(y) = f_X(g⁻¹(y)) |d g⁻¹(y) / dy|.

Here g⁻¹ denotes the inverse function.

This follows from the fact that the probability contained in a differential area must be invariant under change of variables. That is,

|f_Y(y) dy| = |f_X(x) dx|,

or

f_Y(y) = |dx/dy| f_X(x) = |d g⁻¹(y) / dy| f_X(g⁻¹(y)).

For functions that are not monotonic, the probability density function for y is

f_Y(y) = Σ_{k=1}^{n(y)} |d g_k⁻¹(y) / dy| f_X(g_k⁻¹(y)),

where n(y) is the number of solutions in x for the equation g(x) = y, and g_k⁻¹(y) are these solutions.
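As a concrete non-monotonic case (our own illustration, not from the text), take Y = X² with X standard normal: y = x² has the two solutions ±√y, each branch contributing |dx/dy| = 1/(2√y), and the sum recovers the chi-squared density with one degree of freedom:

```python
import math

# Non-monotonic change of variables: Y = X^2 with X standard normal.
# y = x^2 has two solutions x = ±sqrt(y), each with |dx/dy| = 1/(2*sqrt(y)).
def phi(x):
    """Standard normal density."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def f_Y(y):
    r = math.sqrt(y)
    return (phi(r) + phi(-r)) / (2.0 * r)   # sum over both branches

# This equals the chi-squared(1) density exp(-y/2)/sqrt(2*pi*y):
def chi2_1(y):
    return math.exp(-y / 2.0) / math.sqrt(2.0 * math.pi * y)

print(f_Y(0.7), chi2_1(0.7))  # equal
```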

Vector to vector

Suppose x is an n-dimensional random variable with joint density f. If y = G(x), where G is a bijective, differentiable function, then y has density p_Y:

p_Y(y) = f(G⁻¹(y)) |det[ dG⁻¹(y) / dy ]|,

with the differential regarded as the Jacobian of the inverse of G(⋅), evaluated at y.[7]

For example, in the 2-dimensional case x = (x_1, x_2), suppose the transform G is given as y_1 = G_1(x_1, x_2), y_2 = G_2(x_1, x_2) with inverses x_1 = G_1⁻¹(y_1, y_2), x_2 = G_2⁻¹(y_1, y_2). The joint distribution for y = (y_1, y_2) has density[8]

f_{Y_1,Y_2}(y_1, y_2) = f_{X_1,X_2}(G_1⁻¹(y_1, y_2), G_2⁻¹(y_1, y_2)) |∂G_1⁻¹/∂y_1 ⋅ ∂G_2⁻¹/∂y_2 − ∂G_1⁻¹/∂y_2 ⋅ ∂G_2⁻¹/∂y_1|.

Vector to scalar

Let V : Rⁿ → R be a differentiable function and X be a random vector taking values in Rⁿ, f_X be the probability density function of X and δ(⋅) be the Dirac delta function. It is possible to use the formulas above to determine f_Y, the probability density function of Y = V(X), which will be given by

f_Y(y) = ∫_{Rⁿ} f_X(x) δ(y − V(x)) dx.

This result leads to the law of the unconscious statistician:

E_Y[Y] = ∫_R y f_Y(y) dy = ∫_R y ∫_{Rⁿ} f_X(x) δ(y − V(x)) dx dy = ∫_{Rⁿ} V(x) f_X(x) dx = E_X[V(X)].

Proof:

Let Z be a collapsed random variable with probability density function p_Z(z) = δ(z) (i.e., a constant equal to zero). Let the random vector X̃ and the transform H be defined as

H(Z, X) = [Z + V(X), X] = [Y, X̃].

It is clear that H is a bijective mapping, and the Jacobian of H⁻¹ is given by:

dH⁻¹(y, x̃) / (dy dx̃) = [ 1   −dV(x̃)/dx̃ ; 0_{n×1}   I_{n×n} ],

which is an upper triangular matrix with ones on the main diagonal, therefore its determinant is 1. Applying the change of variable theorem from the previous section we obtain that

f_{Y,X}(y, x) = f_X(x) δ(y − V(x)),

which if marginalized over x leads to the desired probability density function.

Sums of independent random variables

The probability density function of the sum of two independent random variables U and V, each of which has a probability density function, is the convolution of their separate density functions:

f_{U+V}(x) = ∫_{−∞}^{∞} f_U(y) f_V(x − y) dy = (f_U * f_V)(x).

It is possible to generalize the previous relation to a sum of N independent random variables U_1, …, U_N with densities f_{U_1}, …, f_{U_N}:

f_{U_1+⋯+U_N}(x) = (f_{U_1} * ⋯ * f_{U_N})(x).

This can be derived from a two-way change of variables involving Y = U + V and Z = V, similarly to the example below for the quotient of independent random variables.
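The convolution can be checked numerically: for two independent Uniform(0, 1) variables, the density of the sum is the triangular density on [0, 2] with peak 1 at x = 1. A sketch using a discrete convolution to approximate the integral:

```python
import numpy as np

# Density of U + V for independent U, V ~ Uniform(0, 1): the convolution of
# the two uniform densities, a triangle on [0, 2] with peak 1 at x = 1.
n = 2000
dx = 1.0 / n
f = np.ones(n)                     # uniform density sampled on [0, 1)
conv = np.convolve(f, f) * dx      # discrete convolution approximates the integral
grid = np.arange(conv.size) * dx   # support of the sum, [0, 2)

peak_x = grid[np.argmax(conv)]
print(peak_x, conv.max())          # peak near x = 1 with height near 1
```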

Products and quotients of independent random variables

Given two independent random variables U and V, each of which has a probability density function, the density of the product Y = UV and quotient Y = U/V can be computed by a change of variables.

Example: Quotient distribution

To compute the quotient Y = U/V of two independent random variables U and V, define the following transformation:

Y = U/V, Z = V.

Then, the joint density p(y,z) can be computed by a change of variables from U,V to Y,Z, and Y can be derived by marginalizing out Z from the joint density.

The inverse transformation is

U = YZ, V = Z.

The absolute value of the Jacobian matrix determinant of this transformation is:

|det [ ∂u/∂y  ∂u/∂z ; ∂v/∂y  ∂v/∂z ]| = |det [ z  y ; 0  1 ]| = |z|.

Thus:

p(y, z) = p_U(u) p_V(v) |z| = p_U(yz) p_V(z) |z|.

And the distribution of Y can be computed by marginalizing out Z:

p(y) = ∫_{−∞}^{∞} p_U(yz) p_V(z) |z| dz.

This method crucially requires that the transformation from U, V to Y, Z be bijective. The above transformation meets this because Z can be mapped directly back to V, and for a given V the quotient U/V is monotonic. This is similarly the case for the sum U + V, difference U − V and product UV.

Exactly the same method can be used to compute the distribution of other functions of multiple independent random variables.

Example: Quotient of two standard normals

Given two standard normal variables U and V, the quotient can be computed as follows. First, the variables have the following density functions:

p(u) = (1/√(2π)) e^{−u²/2},
p(v) = (1/√(2π)) e^{−v²/2}.

We transform as described above:

Y = U/V, Z = V.

This leads to:

p(y) = ∫_{−∞}^{∞} p_U(yz) p_V(z) |z| dz = ∫_{−∞}^{∞} (1/(2π)) e^{−(y² + 1) z² / 2} |z| dz = 2 ∫_0^∞ (1/(2π)) e^{−(y² + 1) z² / 2} z dz = 1 / (π(1 + y²)).

This is the density of a standard Cauchy distribution.
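A quick Monte Carlo check of this result (a sketch; the sample size and seed are arbitrary choices): for a standard Cauchy variable, P(−1 ≤ Y ≤ 1) = (arctan 1 − arctan(−1))/π = 1/2, and the empirical fraction for ratios of simulated normals should be close to that.

```python
import numpy as np

# Monte Carlo check: U/V for independent standard normals U, V should be
# standard Cauchy, for which P(-1 <= Y <= 1) = (arctan 1 - arctan(-1))/pi = 1/2.
rng = np.random.default_rng(0)     # fixed seed (arbitrary choice)
u = rng.standard_normal(1_000_000)
v = rng.standard_normal(1_000_000)
ratio = u / v

frac = np.mean(np.abs(ratio) <= 1.0)
print(frac)  # ≈ 0.5
```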

See also

References

  1. ^ "AP Statistics Review - Density Curves and the Normal Distributions". Archived from the original on 2 April 2015. Retrieved 16 March 2015.
  2. ^ Grinstead, Charles M.; Snell, J. Laurie (2009). "Conditional Probability - Discrete Conditional" (PDF). Grinstead & Snell's Introduction to Probability. Orange Grove Texts. ISBN 978-1616100469. Archived (PDF) from the original on 2025-08-07. Retrieved 2025-08-07.
  3. ^ "probability - Is a uniformly random number over the real line a valid distribution?". Cross Validated. Retrieved 2025-08-07.
  4. ^ Ord, J.K. (1972) Families of Frequency Distributions, Griffin. ISBN 0-85264-137-0 (for example, Table 5.1 and Example 5.4)
  5. ^ Scalas, Enrico (2025). Introduction to Probability Theory for Economists (PDF). self-published. p. 28. Archived (PDF) from the original on Dec 10, 2024. Retrieved July 30, 2025.
  6. ^ Siegrist, Kyle (5 May 2020). "Transformations of Random Variables". LibreTexts Statistics. Retrieved 22 December 2023.
  7. ^ Devore, Jay L.; Berk, Kenneth N. (2007). Modern Mathematical Statistics with Applications. Cengage. p. 263. ISBN 978-0-534-40473-4.
  8. ^ Stirzaker, David (2025-08-07). Elementary Probability. Cambridge University Press. ISBN 978-0521534284. OCLC 851313783.

Further reading
