From Wikipedia, the free encyclopedia

Design of experiments with full factorial design (left), response surface with second-degree polynomial (right)

The design of experiments (DOE),[1] also known as experiment design or experimental design, is the design of any task that aims to describe and explain the variation of information under conditions that are hypothesized to reflect the variation. The term is generally associated with experiments in which the design introduces conditions that directly affect the variation, but may also refer to the design of quasi-experiments, in which natural conditions that influence the variation are selected for observation.

In its simplest form, an experiment aims at predicting the outcome by introducing a change of the preconditions, which is represented by one or more independent variables, also referred to as "input variables" or "predictor variables." The change in one or more independent variables is generally hypothesized to result in a change in one or more dependent variables, also referred to as "output variables" or "response variables." The experimental design may also identify control variables that must be held constant to prevent external factors from affecting the results. Experimental design involves not only the selection of suitable independent, dependent, and control variables, but planning the delivery of the experiment under statistically optimal conditions given the constraints of available resources. There are multiple approaches for determining the set of design points (unique combinations of the settings of the independent variables) to be used in the experiment.
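For illustration, the following minimal Python sketch enumerates the design points of a full factorial design, that is, every unique combination of the settings of the independent variables. The factor names and levels are invented for the example.

```python
from itertools import product

# Hypothetical factors and their levels (invented for illustration).
factors = {
    "temperature": [150, 180],   # two levels
    "pressure": [1.0, 2.0],      # two levels
    "catalyst": ["A", "B"],      # two levels
}

# A full factorial design enumerates all 2 * 2 * 2 = 8 design points.
names = list(factors)
design_points = [dict(zip(names, combo)) for combo in product(*factors.values())]

for point in design_points:
    print(point)
```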

Main concerns in experimental design include the establishment of validity, reliability, and replicability. For example, these concerns can be partially addressed by carefully choosing the independent variable, reducing the risk of measurement error, and ensuring that the documentation of the method is sufficiently detailed. Related concerns include achieving appropriate levels of statistical power and sensitivity.

Correctly designed experiments advance knowledge in the natural and social sciences and engineering, with design of experiments methodology recognised as a key tool in the successful implementation of a Quality by Design (QbD) framework.[2] Other applications include marketing and policy making. The study of the design of experiments is an important topic in metascience.

History


Statistical experiments, following Charles S. Peirce


A theory of statistical inference was developed by Charles S. Peirce in "Illustrations of the Logic of Science" (1877–1878)[3] and "A Theory of Probable Inference" (1883),[4] two publications that emphasized the importance of randomization-based inference in statistics.[5]

Randomized experiments


Charles S. Peirce randomly assigned volunteers to a blinded, repeated-measures design to evaluate their ability to discriminate weights.[6][7][8][9] Peirce's experiment inspired other researchers in psychology and education, who developed a research tradition of randomized experiments in laboratories and specialized textbooks in the 1800s.[6][7][8][9]

Optimal designs for regression models


A pioneering optimal design for polynomial regression was suggested by Gergonne in 1815. Charles S. Peirce contributed the first English-language publication on an optimal design for regression models in 1876.[10] In 1918, Kirstine Smith published optimal designs for polynomials of degree six (and less).[11][12]

Sequences of experiments


The use of a sequence of experiments, where the design of each may depend on the results of previous experiments, including the possible decision to stop experimenting, is within the scope of sequential analysis, a field that was pioneered[13] by Abraham Wald in the context of sequential tests of statistical hypotheses.[14] Herman Chernoff wrote an overview of optimal sequential designs,[15] while adaptive designs have been surveyed by S. Zacks.[16] One specific type of sequential design is the "two-armed bandit", generalized to the multi-armed bandit, on which early work was done by Herbert Robbins in 1952.[17]

Fisher's principles


A methodology for designing experiments was proposed by Ronald Fisher in his innovative works The Arrangement of Field Experiments (1926) and The Design of Experiments (1935). Much of his pioneering work dealt with agricultural applications of statistical methods. As a mundane example, he described how to test the lady tasting tea hypothesis: that a certain lady could distinguish by flavour alone whether the milk or the tea was placed in the cup first. These methods have been broadly adapted in biological, psychological, and agricultural research.[18]

Comparison
In some fields of study it is not possible to have independent measurements traceable to a metrology standard. Comparisons between treatments are then much more valuable and are usually preferable; treatments are often compared against a scientific control or a traditional treatment that acts as a baseline.
Randomization
Random assignment is the process of assigning individuals at random to the different groups in an experiment, so that each individual of the population has the same chance of becoming a participant in the study. The random assignment of individuals to groups (or conditions within a group) distinguishes a rigorous, "true" experiment from an observational study or "quasi-experiment".[19] There is an extensive body of mathematical theory that explores the consequences of making the allocation of units to treatments by means of some random mechanism (such as tables of random numbers, or the use of randomization devices such as playing cards or dice). Assigning units to treatments at random tends to mitigate confounding, which would otherwise make effects due to factors other than the treatment appear to result from the treatment.
The risks associated with random allocation (such as having a serious imbalance in a key characteristic between a treatment group and a control group) are calculable and hence can be managed down to an acceptable level by using enough experimental units. However, if the population is divided into several subpopulations that somehow differ, and the research requires each subpopulation to be equal in size, stratified sampling can be used. In that way, the units in each subpopulation are randomized, but not the whole sample. The results of an experiment can be generalized reliably from the experimental units to a larger statistical population of units only if the experimental units are a random sample from the larger population; the probable error of such an extrapolation depends on the sample size, among other things.
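A minimal Python sketch of the two allocation schemes just described, with hypothetical unit identifiers and strata invented for the example:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Simple random assignment: shuffle the units, then split into two arms.
units = [f"unit{i}" for i in range(12)]
random.shuffle(units)
treatment, control = units[: len(units) // 2], units[len(units) // 2 :]
print("treatment:", treatment)
print("control:  ", control)

# Stratified randomization: shuffle and split within each subpopulation,
# so every stratum contributes equally to each arm.
strata = {"young": [f"y{i}" for i in range(6)], "old": [f"o{i}" for i in range(6)]}
assignment = {"treatment": [], "control": []}
for members in strata.values():
    random.shuffle(members)
    assignment["treatment"] += members[: len(members) // 2]
    assignment["control"] += members[len(members) // 2 :]
print(assignment)
```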
Statistical replication
Measurements are usually subject to variation and measurement uncertainty; thus they are repeated and full experiments are replicated to help identify the sources of variation, to better estimate the true effects of treatments, to further strengthen the experiment's reliability and validity, and to add to the existing knowledge of the topic.[20] However, certain conditions must be met before the replication of the experiment is commenced: the original research question has been published in a peer-reviewed journal or widely cited, the researcher is independent of the original experiment, the researcher must first try to replicate the original findings using the original data, and the write-up should state that the study conducted is a replication study that tried to follow the original study as strictly as possible.[21]
Blocking
Blocking is the non-random arrangement of experimental units into groups (blocks) consisting of units that are similar to one another. Blocking reduces known but irrelevant sources of variation between units and thus allows greater precision in the estimation of the source of variation under study.
Orthogonality
Orthogonality concerns the forms of comparison (contrasts) that can be legitimately and efficiently carried out. Contrasts can be represented by vectors, and sets of orthogonal contrasts are uncorrelated and independently distributed if the data are normal. Because of this independence, each orthogonal treatment provides different information from the others. If there are T treatments and T − 1 orthogonal contrasts, all the information that can be captured from the experiment is obtainable from the set of contrasts.
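As a minimal Python sketch of this idea: for T = 4 hypothetical treatments, the following builds T − 1 = 3 contrast vectors (a Helmert-style set, chosen only for illustration) and checks that each is a valid contrast and that they are mutually orthogonal.

```python
# A contrast is a coefficient vector summing to zero; two contrasts are
# orthogonal when their dot product is zero (assuming equal replication).
contrasts = [
    [1, -1,  0,  0],   # treatment 1 vs treatment 2
    [1,  1, -2,  0],   # treatments 1,2 vs treatment 3
    [1,  1,  1, -3],   # treatments 1,2,3 vs treatment 4
]

dot = lambda u, v: sum(a * b for a, b in zip(u, v))

for c in contrasts:
    assert sum(c) == 0  # valid contrast: coefficients sum to zero
for i in range(len(contrasts)):
    for j in range(i + 1, len(contrasts)):
        assert dot(contrasts[i], contrasts[j]) == 0  # pairwise orthogonal
print("all contrasts valid and mutually orthogonal")
```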
Multifactorial experiments
Multifactorial experiments are used instead of the one-factor-at-a-time method; they are efficient at evaluating the effects and possible interactions of several factors (independent variables). Analysis of the design of experiments is built on the foundation of the analysis of variance, a collection of models that partition the observed variance into components, according to what factors the experiment must estimate or test.
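A minimal Python sketch (with invented response values) of why multifactorial designs are efficient: from a single 2 × 2 factorial, both main effects and the interaction can be estimated, which the one-factor-at-a-time method cannot do.

```python
# Coded levels: -1 = low, +1 = high. Responses are hypothetical.
runs = [  # (factor A, factor B, observed response)
    (-1, -1, 10.1),
    (+1, -1, 14.2),
    (-1, +1, 11.0),
    (+1, +1, 19.8),
]

n = len(runs)
# Each effect is the average response at the high level minus the average
# at the low level, computed here as a signed sum over all runs.
effect_A = sum(a * y for a, b, y in runs) / (n / 2)         # main effect of A
effect_B = sum(b * y for a, b, y in runs) / (n / 2)         # main effect of B
interaction = sum(a * b * y for a, b, y in runs) / (n / 2)  # A x B interaction

print(f"A: {effect_A:.2f}, B: {effect_B:.2f}, AxB: {interaction:.2f}")
```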

Example


This example of the design of experiments is attributed to Harold Hotelling, building on examples from Frank Yates.[22][23][15] The experiments designed in this example involve combinatorial designs.[24]

Weights of eight objects are measured using a pan balance and a set of standard weights. Each weighing measures the weight difference between objects in the left pan and any objects in the right pan by adding calibrated weights to the lighter pan until the balance is in equilibrium. Each measurement has a random error. The average error is zero; the standard deviation of the probability distribution of the errors is the same number σ on different weighings; and errors on different weighings are independent. Denote the true weights by θ1, ..., θ8.

We consider two different experiments:

  1. Weigh each object in one pan, with the other pan empty. Let Xi be the measured weight of object i, for i = 1, ..., 8.
  2. Do the eight weighings according to the following schedule—a weighing matrix:

Weighing | Left pan        | Right pan
1        | 1 2 3 4 5 6 7 8 | (empty)
2        | 1 2 3 8         | 4 5 6 7
3        | 1 4 5 8         | 2 3 6 7
4        | 1 6 7 8         | 2 3 4 5
5        | 2 4 6 8         | 1 3 5 7
6        | 2 5 7 8         | 1 3 4 6
7        | 3 4 7 8         | 1 2 5 6
8        | 3 5 6 8         | 1 2 4 7
Let Yi be the measured difference for i = 1, ..., 8. Then the estimated value of the weight θ1 is

θ̂1 = (Y1 + Y2 + Y3 + Y4 − Y5 − Y6 − Y7 − Y8) / 8.
Similar estimates can be found for the weights of the other items; for example,

θ̂2 = (Y1 + Y2 − Y3 − Y4 + Y5 + Y6 − Y7 − Y8) / 8.

The question of design of experiments is: which experiment is better?

The variance of the estimate X1 of θ1 is σ² if we use the first experiment. But if we use the second experiment, the variance of the estimate given above is σ²/8. Thus the second experiment gives us 8 times as much precision for the estimate of a single item, and estimates all items simultaneously with the same precision. What the second experiment achieves with eight weighings would require 64 weighings if the items are weighed separately. However, note that the estimates for the items obtained in the second experiment have errors that correlate with each other.
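A minimal Python simulation sketch of the second experiment (with invented true weights and error level) reproduces this comparison; the sign matrix encodes the weighing schedule above, with +1 for the left pan and −1 for the right.

```python
import random

random.seed(0)
theta = [2.0, 3.5, 1.2, 4.8, 0.9, 2.7, 3.3, 1.6]  # hypothetical true weights
sigma = 0.1                                        # per-weighing error s.d.

# Row i, column j: +1 if object j is in the left pan on weighing i, -1 if right.
S = [
    [+1, +1, +1, +1, +1, +1, +1, +1],
    [+1, +1, +1, -1, -1, -1, -1, +1],
    [+1, -1, -1, +1, +1, -1, -1, +1],
    [+1, -1, -1, -1, -1, +1, +1, +1],
    [-1, +1, -1, +1, -1, +1, -1, +1],
    [-1, +1, -1, -1, +1, -1, +1, +1],
    [-1, -1, +1, +1, -1, -1, +1, +1],
    [-1, -1, +1, -1, +1, +1, -1, +1],
]

# Each measured difference Y_i is the signed sum of the weights plus error.
Y = [sum(s * t for s, t in zip(row, theta)) + random.gauss(0, sigma) for row in S]

# Estimate theta_j by the matching signed combination of the Y's, divided by 8;
# orthogonality of the columns of S makes every other weight cancel.
theta_hat = [sum(S[i][j] * Y[i] for i in range(8)) / 8 for j in range(8)]
print([round(t, 3) for t in theta_hat])  # close to theta; each s.d. is sigma/sqrt(8)
```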

Many problems of the design of experiments involve combinatorial designs, as in this example and others.[24]

Avoiding false positives


False positive conclusions, often resulting from the pressure to publish or the author's own confirmation bias, are an inherent hazard in many fields.[25]

Use of double-blind designs can prevent biases that would otherwise lead to false positives in the data collection phase. When a double-blind design is used, participants are randomly assigned to experimental groups but the researcher is unaware of which participants belong to which group. Therefore, the researcher cannot affect the participants' response to the intervention.[26]

Experimental designs with undisclosed degrees of freedom are a problem,[27] in that they can lead to conscious or unconscious "p-hacking": trying multiple analyses until the desired result appears. It typically involves the manipulation – perhaps unconscious – of the process of statistical analysis and the degrees of freedom until they return a figure below the p < .05 level of statistical significance.[28][29]
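The mechanism can be illustrated with a minimal Python simulation (all numbers invented; the t-test is approximated with a normal tail for brevity): when ten pure-noise outcome measures are each tested and any p < .05 is reported, the false-positive rate rises from the nominal 5% to roughly 40%.

```python
import math
import random
import statistics

random.seed(1)

def t_test_p(a, b):
    """Two-sided two-sample test p-value via a normal approximation (illustrative)."""
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = abs(statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))  # normal tail

experiments, false_positives = 2000, 0
for _ in range(experiments):
    # Ten outcome measures, all pure noise: no true effect exists anywhere.
    ps = []
    for _ in range(10):
        a = [random.gauss(0, 1) for _ in range(30)]
        b = [random.gauss(0, 1) for _ in range(30)]
        ps.append(t_test_p(a, b))
    if min(ps) < 0.05:  # report whichever test happened to come out "significant"
        false_positives += 1

print(f"false-positive rate: {false_positives / experiments:.2f}")  # ~0.40, not 0.05
```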

P-hacking can be prevented by preregistering studies, in which researchers must submit their data analysis plan to the journal in which they wish to publish before they even start collecting data, so that no data manipulation is possible.[30][31]

Another way to prevent this is to carry the double-blind design through to the data-analysis phase, making the study triple-blind: the data are sent to a data analyst unrelated to the research, who scrambles the data so that there is no way to know which group participants belong to before outliers are potentially removed.[26]

Clear and complete documentation of the experimental methodology is also important in order to support replication of results.[32]

Discussion topics when setting up an experimental design


An experimental design or randomized clinical trial requires careful consideration of several factors before actually doing the experiment.[33] An experimental design is the laying out of a detailed experimental plan in advance of doing the experiment. Some of the following topics have already been discussed in the principles of experimental design section:

  1. How many factors does the design have, and are the levels of these factors fixed or random?
  2. Are control conditions needed, and what should they be?
  3. Manipulation checks: did the manipulation really work?
  4. What are the background variables?
  5. What is the sample size? How many units must be collected for the experiment to be generalisable and have enough power?
  6. What is the relevance of interactions between factors?
  7. What is the influence of delayed effects of substantive factors on outcomes?
  8. How do response shifts affect self-report measures?
  9. How feasible is repeated administration of the same measurement instruments to the same units at different occasions, with a post-test and follow-up tests?
  10. What about using a proxy pretest?
  11. Are there confounding variables?
  12. Should the client/patient, researcher or even the analyst of the data be blind to conditions?
  13. What is the feasibility of subsequent application of different conditions to the same units?
  14. How many of each control and noise factors should be taken into account?

The independent variable of a study often has many levels or different groups. In a true experiment, researchers can have an experimental group, in which the intervention testing the hypothesis is implemented, and a control group, which has all the same elements as the experimental group except for the interventional element. Thus, when everything else except for one intervention is held constant, researchers can conclude with some certainty that this one element is what caused the observed change. In some instances, having a control group is not ethical. This is sometimes solved using two different experimental groups. In some cases, independent variables cannot be manipulated, for example when testing the difference between two groups who have a different disease, or testing the difference between genders (obviously variables that would be hard or unethical to assign participants to). In these cases, a quasi-experimental design may be used.

Causal attributions


In the pure experimental design, the independent (predictor) variable is manipulated by the researcher – that is, every participant of the research is chosen randomly from the population, and each participant chosen is assigned randomly to conditions of the independent variable. Only when this is done is it possible to certify with high probability that the differences in the outcome variables are caused by the different conditions. Therefore, researchers should choose the experimental design over other design types whenever possible. However, the nature of the independent variable does not always allow for manipulation. In those cases, researchers must take care not to make causal claims when their design does not allow for them. For example, in observational designs, participants are not assigned randomly to conditions, and so if there are differences found in outcome variables between conditions, it is likely that there is something other than the differences between the conditions that causes the differences in outcomes – that is, a third variable. The same goes for studies with a correlational design.

Statistical control


It is best that a process be in reasonable statistical control prior to conducting designed experiments. When this is not possible, proper blocking, replication, and randomization allow for the careful conduct of designed experiments.[34] To control for nuisance variables, researchers institute control checks as additional measures. Investigators should ensure that uncontrolled influences (e.g., source credibility perception) do not skew the findings of the study. A manipulation check is one example of a control check. Manipulation checks allow investigators to isolate the chief variables to strengthen support that these variables are operating as planned.

One of the most important requirements of experimental research designs is the necessity of eliminating the effects of spurious, intervening, and antecedent variables. In the most basic model, cause (X) leads to effect (Y). But there could be a third variable (Z) that influences Y, and X might not be the true cause at all. Z is said to be a spurious variable and must be controlled for. The same is true for intervening variables (a variable in between the supposed cause (X) and the effect (Y)), and anteceding variables (a variable prior to the supposed cause (X) that is the true cause). When a third variable is involved and has not been controlled for, the relation is said to be a zero-order relationship. In most practical applications of experimental research designs there are several causes (X1, X2, X3). In most designs, only one of these causes is manipulated at a time.
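A minimal Python simulation sketch (invented data; requires Python 3.10+ for statistics.correlation) illustrates a spurious relationship: Z drives both X and Y, so X and Y correlate even though X has no effect on Y, and the correlation disappears once Z is controlled for.

```python
import random
import statistics

random.seed(3)
Z = [random.gauss(0, 1) for _ in range(5000)]
X = [z + random.gauss(0, 1) for z in Z]   # X is influenced by Z
Y = [z + random.gauss(0, 1) for z in Z]   # Y is influenced by Z, not by X

print(f"corr(X, Y) = {statistics.correlation(X, Y):.2f}")  # ~0.5, entirely spurious

# Controlling for Z (here by residualizing X and Y on Z) removes the relation.
bx = statistics.covariance(X, Z) / statistics.variance(Z)
by = statistics.covariance(Y, Z) / statistics.variance(Z)
rx = [x - bx * z for x, z in zip(X, Z)]
ry = [y - by * z for y, z in zip(Y, Z)]
print(f"partial corr given Z = {statistics.correlation(rx, ry):.2f}")  # ~0.0
```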

Experimental designs after Fisher


Some efficient designs for estimating several main effects were found independently and in near succession by Raj Chandra Bose and K. Kishen in 1940 at the Indian Statistical Institute, but remained little known until the Plackett–Burman designs were published in Biometrika in 1946. About the same time, C. R. Rao introduced the concepts of orthogonal arrays as experimental designs. This concept played a central role in the development of Taguchi methods by Genichi Taguchi, which took place during his visit to the Indian Statistical Institute in the early 1950s. His methods were successfully applied and adopted by Japanese and Indian industries and subsequently were also embraced by US industry, albeit with some reservations.

In 1950, Gertrude Mary Cox and William Gemmell Cochran published the book Experimental Designs, which became the major reference work on the design of experiments for statisticians for years afterwards.

Developments of the theory of linear models have encompassed and surpassed the cases that concerned early writers. Today, the theory rests on advanced topics in linear algebra, algebra and combinatorics.

As with other branches of statistics, experimental design is pursued using both frequentist and Bayesian approaches: In evaluating statistical procedures like experimental designs, frequentist statistics studies the sampling distribution while Bayesian statistics updates a probability distribution on the parameter space.

Some important contributors to the field of experimental designs are C. S. Peirce, R. A. Fisher, F. Yates, R. C. Bose, A. C. Atkinson, R. A. Bailey, D. R. Cox, G. E. P. Box, W. G. Cochran, W. T. Federer, V. V. Fedorov, A. S. Hedayat, J. Kiefer, O. Kempthorne, J. A. Nelder, Andrej Pázman, Friedrich Pukelsheim, D. Raghavarao, C. R. Rao, S. S. Shrikhande, J. N. Srivastava, William J. Studden, G. Taguchi and H. P. Wynn.[35]

The textbooks of D. Montgomery, R. Myers, and G. Box/W. Hunter/J.S. Hunter have reached generations of students and practitioners.[36][37][38][39][40] Furthermore, there is ongoing discussion of experimental design in the context of model building for either static or dynamic models, also known as system identification.[41][42]

Human participant constraints


Laws and ethical considerations preclude some carefully designed experiments with human subjects. Legal constraints depend on jurisdiction. Constraints may involve institutional review boards, informed consent, and confidentiality, affecting both clinical (medical) trials and behavioral and social science experiments.[43] In the field of toxicology, for example, experimentation is performed on laboratory animals with the goal of defining safe exposure limits for humans.[44] Balancing the constraints are views from the medical field.[45] Regarding the randomization of patients, "... if no one knows which therapy is better, there is no ethical imperative to use one therapy or another" (p 380). Regarding experimental design, "...it is clearly not ethical to place subjects at risk to collect data in a poorly designed study when this situation can be easily avoided..." (p 393).


References

  1. ^ "What Is Design of Experiments (DOE)?". asq.org. American Society for Quality. Retrieved 20 February 2025.
  2. ^ "The Sequential Nature of Classical Design of Experiments | Prism". prismtc.co.uk. Retrieved 10 March 2023.
  3. ^ Peirce, Charles Sanders (1877–1878). "Illustrations of the Logic of Science". Popular Science Monthly, vols. 12–13; reprinted, Open Court (10 June 2014). ISBN 0812698495.
  4. ^ Peirce, Charles Sanders (1883). "A Theory of Probable Inference". In C. S. Peirce (Ed.), Studies in logic by members of the Johns Hopkins University (pp. 126–181). Little, Brown and Co (1883)
  5. ^ Stigler, Stephen M. (1978). "Mathematical statistics in the early States". Annals of Statistics. 6 (2): 239–65 [248]. doi:10.1214/aos/1176344123. JSTOR 2958876. MR 0483118. "Indeed, Peirce's work contains one of the earliest explicit endorsements of mathematical randomization as a basis for inference of which I am aware" (Peirce, 1957, pages 216–219).
  6. ^ a b Peirce, Charles Sanders; Jastrow, Joseph (1885). "On Small Differences in Sensation". Memoirs of the National Academy of Sciences. 3: 73–83.
  7. ^ a b Hacking, Ian (September 1988). "Telepathy: Origins of Randomization in Experimental Design". Isis. 79 (3): 427–451. doi:10.1086/354775. JSTOR 234674. MR 1013489. S2CID 52201011.
  8. ^ a b Stephen M. Stigler (November 1992). "A Historical View of Statistical Concepts in Psychology and Educational Research". American Journal of Education. 101 (1): 60–70. doi:10.1086/444032. JSTOR 1085417. S2CID 143685203.
  9. ^ a b Trudy Dehue (December 1997). "Deception, Efficiency, and Random Groups: Psychology and the Gradual Origination of the Random Group Design". Isis. 88 (4): 653–673. doi:10.1086/383850. PMID 9519574. S2CID 23526321.
  10. ^ Peirce, C. S. (1876). "Note on the Theory of the Economy of Research". Coast Survey Report: 197–201., actually published 1879, NOAA PDF Eprint Archived 2 March 2017 at the Wayback Machine.
    Reprinted in Collected Papers 7, paragraphs 139–157, also in Writings 4, pp. 72–78, and in Peirce, C. S. (July–August 1967). "Note on the Theory of the Economy of Research". Operations Research. 15 (4): 643–648. doi:10.1287/opre.15.4.643. JSTOR 168276.
  11. ^ Guttorp, P.; Lindgren, G. (2009). "Karl Pearson and the Scandinavian school of statistics". International Statistical Review. 77: 64. CiteSeerX 10.1.1.368.8328. doi:10.1111/j.1751-5823.2009.00069.x. S2CID 121294724.
  12. ^ Smith, Kirstine (1918). "On the standard deviations of adjusted and interpolated values of an observed polynomial function and its constants and the guidance they give towards a proper choice of the distribution of observations". Biometrika. 12 (1–2): 1–85. doi:10.1093/biomet/12.1-2.1.
  13. ^ Johnson, N.L. (1961). "Sequential analysis: a survey." Journal of the Royal Statistical Society, Series A. Vol. 124 (3), 372–411. (pages 375–376)
  14. ^ Wald, A. (1945) "Sequential Tests of Statistical Hypotheses", Annals of Mathematical Statistics, 16 (2), 117–186.
  15. ^ a b Herman Chernoff, Sequential Analysis and Optimal Design, SIAM Monograph, 1972.
  16. ^ Zacks, S. (1996) "Adaptive Designs for Parametric Models". In: Ghosh, S. and Rao, C. R., (Eds) (1996). "Design and Analysis of Experiments," Handbook of Statistics, Volume 13. North-Holland. ISBN 0-444-82061-2. (pages 151–180)
  17. ^ Robbins, H. (1952). "Some Aspects of the Sequential Design of Experiments". Bulletin of the American Mathematical Society. 58 (5): 527–535. doi:10.1090/S0002-9904-1952-09620-8.
  18. ^ Miller, Geoffrey (2000). The Mating Mind: how sexual choice shaped the evolution of human nature, London: Heineman, ISBN 0-434-00741-2 (also Doubleday, ISBN 0-385-49516-1) "To biologists, he was an architect of the 'modern synthesis' that used mathematical models to integrate Mendelian genetics with Darwin's selection theories. To psychologists, Fisher was the inventor of various statistical tests that are still supposed to be used whenever possible in psychology journals. To farmers, Fisher was the founder of experimental agricultural research, saving millions from starvation through rational crop breeding programs." p.54.
  19. ^ Creswell, J.W. (2008), Educational research: Planning, conducting, and evaluating quantitative and qualitative research (3rd edition), Upper Saddle River, NJ: Prentice Hall. 2008, p. 300. ISBN 0-13-613550-1
  20. ^ Dr. Hani (2009). "Replication study". Archived from the original on 2 June 2012. Retrieved 27 October 2011.
  21. ^ Burman, Leonard E.; Robert W. Reed; James Alm (2010), "A call for replication studies", Public Finance Review, 38 (6): 787–793, doi:10.1177/1091142110385210, S2CID 27838472, retrieved 27 October 2011
  22. ^ Hotelling, Harold (1944). "Some Improvements in Weighing and Other Experimental Techniques". Annals of Mathematical Statistics. 15 (3): 297–306. doi:10.1214/aoms/1177731236.
  23. ^ Giri, Narayan C.; Das, M. N. (1979). Design and Analysis of Experiments. New York, N.Y: Wiley. pp. 350–359. ISBN 9780852269145.
  24. ^ a b Jack Sifri (8 December 2014). "How to Use Design of Experiments to Create Robust Designs With High Yield". youtube.com. Retrieved 11 February 2015.
  25. ^ Forstmeier, Wolfgang; Wagenmakers, Eric-Jan; Parker, Timothy H. (23 November 2016). "Detecting and avoiding likely false-positive findings – a practical guide". Biological Reviews. 92 (4): 1941–1968. doi:10.1111/brv.12315. hdl:11245.1/31f84a5b-4439-4a4c-a690-6e98354199f5. ISSN 1464-7931. PMID 27879038. S2CID 26793416.
  26. ^ a b David, Sharoon; Khandhar, Paras B. (17 July 2023). "Double-Blind Study". StatPearls Publishing. PMID 31536248.
  27. ^ Simmons, Joseph; Leif Nelson; Uri Simonsohn (November 2011). "False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant". Psychological Science. 22 (11): 1359–1366. doi:10.1177/0956797611417632. ISSN 0956-7976. PMID 22006061.
  28. ^ "Science, Trust And Psychology in Crisis". KPLU. 2 June 2014. Archived from the original on 14 July 2014. Retrieved 12 June 2014.
  29. ^ "Why Statistically Significant Studies Can Be Insignificant". Pacific Standard. 4 June 2014. Retrieved 12 June 2014.
  30. ^ Nosek, Brian A.; Ebersole, Charles R.; DeHaven, Alexander C.; Mellor, David T. (13 March 2018). "The preregistration revolution". Proceedings of the National Academy of Sciences. 115 (11): 2600–2606. Bibcode:2018PNAS..115.2600N. doi:10.1073/pnas.1708274114. ISSN 0027-8424. PMC 5856500. PMID 29531091.
  31. ^ "Pre-Registering Studies – What Is It, How Do You Do It, and Why?". www.acf.hhs.gov. Archived from the original on 29 August 2022. Retrieved 29 August 2023.
  32. ^ Chris Chambers (10 June 2014). "Physics envy: Do 'hard' sciences hold the solution to the replication crisis in psychology?". theguardian.com. Retrieved 12 June 2014.
  33. ^ Ader, Mellenberg & Hand (2008) "Advising on Research Methods: A consultant's companion"
  34. ^ Bisgaard, S (2008) "Must a Process be in Statistical Control before Conducting Designed Experiments?", Quality Engineering, ASQ, 20 (2), pp 143–176
  35. ^ Giri, Narayan C.; Das, M. N. (1979). Design and Analysis of Experiments. New York, N.Y: Wiley. pp. 53, 159, 264. ISBN 9780852269145.
  36. ^ Montgomery, Douglas (2013). Design and analysis of experiments (8th ed.). Hoboken, NJ: John Wiley & Sons, Inc. ISBN 9781118146927.
  37. ^ Walpole, Ronald E.; Myers, Raymond H.; Myers, Sharon L.; Ye, Keying (2007). Probability & statistics for engineers & scientists (8 ed.). Upper Saddle River, NJ: Pearson Prentice Hall. ISBN 978-0131877115.
  38. ^ Myers, Raymond H.; Montgomery, Douglas C.; Vining, G. Geoffrey; Robinson, Timothy J. (2010). Generalized linear models : with applications in engineering and the sciences (2 ed.). Hoboken, N.J.: Wiley. ISBN 978-0470454633.
  39. ^ Box, George E.P.; Hunter, William G.; Hunter, J. Stuart (1978). Statistics for Experimenters : An Introduction to Design, Data Analysis, and Model Building. New York: Wiley. ISBN 978-0-471-09315-2.
  40. ^ Box, George E.P.; Hunter, William G.; Hunter, J. Stuart (2005). Statistics for Experimenters : Design, Innovation, and Discovery (2 ed.). Hoboken, N.J.: Wiley. ISBN 978-0471718130.
  41. ^ Spall, J. C. (2010). "Factorial Design for Efficient Experimentation: Generating Informative Data for System Identification". IEEE Control Systems Magazine. 30 (5): 38–53. doi:10.1109/MCS.2010.937677. S2CID 45813198.
  42. ^ Pronzato, L (2008). "Optimal experimental design and some related control problems". Automatica. 44 (2): 303–325. arXiv:0802.4381. doi:10.1016/j.automatica.2007.05.016. S2CID 1268930.
  43. ^ Moore, David S.; Notz, William I. (2006). Statistics : concepts and controversies (6th ed.). New York: W.H. Freeman. pp. Chapter 7: Data ethics. ISBN 9780716786368.
  44. ^ Ottoboni, M. Alice (1991). The dose makes the poison : a plain-language guide to toxicology (2nd ed.). New York, N.Y: Van Nostrand Reinhold. ISBN 978-0442006600.
  45. ^ Glantz, Stanton A. (1992). Primer of biostatistics (3rd ed.). ISBN 978-0-07-023511-3.

Sources

  • Peirce, C. S. (1877–1878), "Illustrations of the Logic of Science" (series), Popular Science Monthly, vols. 12–13. Relevant individual papers:
    • (1878 March), "The Doctrine of Chances", Popular Science Monthly, v. 12, March issue, pp. 604–615. Internet Archive Eprint.
    • (1878 April), "The Probability of Induction", Popular Science Monthly, v. 12, pp. 705–718. Internet Archive Eprint.
    • (1878 June), "The Order of Nature", Popular Science Monthly, v. 13, pp. 203–217. Internet Archive Eprint.
    • (1878 August), "Deduction, Induction, and Hypothesis", Popular Science Monthly, v. 13, pp. 470–482. Internet Archive Eprint.
    • (1883), "A Theory of Probable Inference", Studies in Logic, pp. 126–181, Little, Brown, and Company. (Reprinted 1983, John Benjamins Publishing Company, ISBN 90-272-3271-7)