When you integrate √(x) + 1 along the x-axis, you’ll get the entire area on the left. But you need to find the area A on the right; in order to do that, you also have to integrate the function y = 1 and then subtract the two areas.

Solving the integral (using the power rule and the fact that the integral of a constant c is cx; for example, the integral of f(x) = 10 is 10x), we get an area of 16/3.

 Step 5: Repeat steps 3 and 4 for the remaining shapes. For this example we only have one remaining shape (with integral bounds of 4 to 6). Integrating area B, we get 2.
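
As a check, both areas can be reproduced symbolically. This is a minimal sketch using Python’s sympy library; the bounds 0 to 4 for area A are inferred from the stated result of 16/3, and area B is assumed to be the region under y = 1 from 4 to 6, which matches the stated result of 2:

```python
from sympy import symbols, sqrt, integrate

x = symbols("x")

# Area A: the region between y = sqrt(x) + 1 and y = 1 on [0, 4].
upper = integrate(sqrt(x) + 1, (x, 0, 4))  # area under the curve
lower = integrate(1, (x, 0, 4))            # area under y = 1
print(upper - lower)  # 16/3

# Area B: assumed to be the region under y = 1 from x = 4 to x = 6,
# consistent with the stated result.
print(integrate(1, (x, 4, 6)))  # 2
```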

Ascertainment bias happens when the results of your study are skewed due to factors you didn’t account for, like a researcher’s knowledge of which patients are getting which treatments in clinical trials, or poor data collection methods that lead to non-representative samples.

Ascertainment bias in clinical trials happens when one or more people involved in the trial know which treatment each participant is getting. This can result in patients receiving different treatments or co-treatments, which will distort the results of the trial. A patient who knows they are receiving a placebo might be less likely to report perceived benefits (the “placebo effect”).

 The effect isn’t limited to the person giving the treatment and the person receiving it: even the person writing up the results of the trial can introduce ascertainment bias if they know which people are getting which treatments. The best way to prevent this from happening is by using blinding and allocation concealment.

Ascertainment bias can happen in experiments during data collection; it is a failure to collect a representative sample, which skews the results of your studies. For example, the sex ratio for the entire world population is approximately 101 males to 100 females. Let’s say you wanted to recalculate this figure by taking a sample of 1,000 women at your women-only college and asking them how many male and female children are in their family. The results of this survey will show a heavy bias towards women, because of the simple fact that every woman surveyed has at least one female (herself) in her family. The survey excludes any family with only male children. Although this is an extreme example, having uneven numbers (i.e. 400 women and 600 men) will still introduce bias into your results.

 The assumption of independence is used for T Tests, in ANOVA tests, and in several other statistical tests. It’s essential to getting results from your sample that reflect what you would find in a population. Even the smallest dependence in your data can turn into heavily biased results (which may be undetectable) if you violate this assumption.

 A dependence is a connection between your data. For example, how much you earn depends upon how many hours you work. Independence means there isn’t a connection. For example, how much you earn isn’t connected to what you ate for breakfast. The assumption of independence means that your data isn’t connected in any way (at least, in ways that you haven’t accounted for in your model).

The observations between groups should be independent, which basically means the groups are made up of different people. You don’t want one person appearing in two different groups, as that could skew your results.

The observations within each group must be independent. If two or more data points in one group are connected in some way, this could also skew your data. For example, let’s say you were taking a snapshot of how many donuts people ate, and you took snapshots every morning at 9, 10, and 11 a.m. You might conclude that office workers eat 25% of their daily calories from donuts. However, you made the mistake of timing the snapshots too closely together in the morning, when people were more likely to bring bags of donuts in to share (making the measurements dependent). If you had taken your measurements at 7 a.m., noon, and 4 p.m., they would probably have been independent.

Unfortunately, looking at your data and trying to see whether or not it is independent is usually difficult or impossible. The key to avoiding a violation of the assumption of independence is to make sure your data is independent while you are collecting it. If you aren’t an expert in your field, this can be challenging. However, you may want to look at previous research in your area and see how the data was collected.

An autoregressive (AR) model predicts future behavior based on past behavior. It’s used for forecasting when there is some correlation between values in a time series and the values that precede and succeed them. You only use past data to model the behavior, hence the name autoregressive (the Greek prefix auto- means “self”). The process is basically a linear regression of the data in the current series against one or more past values in the same series.

In an AR model, the value of the outcome variable (Y) at some point t in time is, as in “regular” linear regression, directly related to predictor variables. Where simple linear regression and AR models differ is that in an AR model the role of the predictor X is played by previous values of Y itself.

The AR process is an example of a stochastic process, which has a degree of uncertainty or randomness built in. The randomness means that you might be able to predict future trends pretty well with past data, but you’re never going to get 100 percent accuracy. Usually, the process gets “close enough” for it to be useful in most scenarios.

An AR(p) model is an autoregressive model in which specific lagged values of y_t are used as predictor variables. Lags are where results from one time period affect following periods.

 The value for “p” is called the order. For example, an AR(1) would be a “first order autoregressive process.” The outcome variable in a first order AR process at some point in time t is related only to time periods that are one period apart (i.e. the value of the variable at t – 1). A second or third order AR process would be related to data two or three periods apart.
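
In symbols, an AR(p) model can be written y_t = c + φ₁y_(t−1) + … + φ_p y_(t−p) + ε_t, where ε_t is random noise. The sketch below simulates a first order process and recovers its lag coefficient with a linear regression of the series on its own past value; the coefficient 0.7 and the series length are assumed values for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate an AR(1) process: y_t = 0.7 * y_{t-1} + noise.
n, phi = 500, 0.7
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.normal()

# An AR fit is a linear regression of the series on its own lag:
# regress y_t on y_{t-1} with ordinary least squares (no intercept,
# since the simulated series has mean zero).
X, Y = y[:-1], y[1:]
phi_hat = (X @ Y) / (X @ X)
print(f"true phi = {phi}, estimated phi = {phi_hat:.3f}")
```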

 An axis of rotation (also called an axis of revolution) is a line around which an object rotates. In calculus and physics, that line is usually imaginary. The radius of rotation is the length from the axis of rotation to the outer edge of the object being rotated.

A simple example is an axle or hinge that allows rotation, but not translation (movement). Picture a two-dimensional shape (a half bell) rotating around a single, vertical axis of rotation: if the shape travels 360 degrees, the result is a three-dimensional bell.


The disc method and the washer method are used to find the volume of objects of revolution in calculus. The disc method is used for solid objects, while the washer method is a modified disc method for objects with holes.
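
For instance, the disc method gives the volume swept out when y = f(x) rotates about the x-axis as V = π ∫ f(x)² dx, and the washer method subtracts the hole’s inner radius. Here is a minimal sketch in Python’s sympy; the curve y = √x and the bounds are assumed examples, not taken from the article:

```python
from sympy import symbols, sqrt, pi, integrate

x = symbols("x")
f = sqrt(x)  # assumed example curve, rotated about the x-axis

# Disc method (solid object): V = pi * integral of f(x)^2 dx.
print(pi * integrate(f**2, (x, 0, 4)))  # 8*pi

# Washer method (object with a hole): subtract the inner radius g(x),
# V = pi * integral of (f(x)^2 - g(x)^2) dx, here with g(x) = 1.
print(pi * integrate(f**2 - 1, (x, 1, 4)))  # 9*pi/2
```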

Basis functions (called derived features in machine learning) are building blocks for creating more complex functions. In other words, they are a set of k standard functions that can be combined to estimate another function, one which is difficult or impossible to model exactly.

For example, individual powers of x (the basis functions 1, x, x², x³, …) can be strung together to form a polynomial function. The set of basis functions used to create the more complex function is called a basis set.

It’s possible to create many complex functions by hand; ideally, you’ll want to work with a set of as few functions as possible. However, many real-life scenarios involve thousands of basis functions, making a computer a necessity.

 B-Spline basis: a set of k polynomial functions, each of a specified order d. An order is the number of constants required to define the function (Ramsay and Silverman, 2005; Ramsay et al., 2009). Popular for non-periodic data.

Fourier basis: a set of sine and cosine functions: 1, sin(ωx), cos(ωx), sin(2ωx), cos(2ωx), sin(3ωx), cos(3ωx), …. These are often used to form periodic functions. Their derivatives are easy to calculate, but they aren’t suitable for modeling discontinuous functions (Svishcheva et al., 2015).
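
To make this concrete, here is a minimal sketch that approximates a target function as a weighted sum of basis functions; the target sin(x) and the choice of a small polynomial basis set are assumed for illustration:

```python
import numpy as np

# Target function to approximate; sin(x) stands in for a function
# that is "difficult or impossible to model exactly".
x = np.linspace(0, np.pi, 100)
y = np.sin(x)

# Basis set: the polynomial basis functions 1, x, x^2, x^3.
basis = np.column_stack([x**0, x, x**2, x**3])

# Estimate the target as a weighted sum of the basis functions
# (ordinary least squares picks the weights).
coefs, *_ = np.linalg.lstsq(basis, y, rcond=None)
print("weights:", np.round(coefs, 3))
print("max error:", np.abs(basis @ coefs - y).max())
```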

The BIC is also known as the Schwarz information criterion (abbreviated SIC) or the Schwarz-Bayesian information criterion. It was published in a 1978 paper by Gideon E. Schwarz, and is closely related to the Akaike information criterion (AIC), which was formally published in 1974.
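
For reference, the formula described in the next few paragraphs (and used in the worked example below) is

$$\mathrm{BIC} = k \log(n) - 2 \log L(\hat{\theta})$$

The standard definition uses natural logarithms; the worked example below evaluates it with base-10 logs.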

Here n is the sample size: the number of observations or data points you are working with. k is the number of parameters your model estimates, and θ is the set of all parameters.

L(θ̂) represents the likelihood of the model tested, given your data, evaluated at the maximum likelihood values of θ. You could call this the likelihood of the model with all parameters set to their most favorable values.

 Comparing models with the Bayesian information criterion simply involves calculating the BIC for each model. The model with the lowest BIC is considered the best, and can be written BIC* (or SIC* if you use that name and abbreviation).

We can also calculate Δ BIC: the difference between a particular model and the ‘best’ model (the one with the lowest BIC), which can be used as an argument against the other model. Δ BIC is just BIC_model – BIC*, where BIC* is the BIC of the best model.

 If Δ BIC is less than 2, it is considered ‘barely worth mentioning’ as an argument either for the best theory or against the alternate one. The edge it gives our best model is too small to be significant. But if Δ BIC is between 2 and 6, one can say the evidence against the other model is positive; i.e. we have a good argument in favor of our ‘best model’. If it’s between 6 and 10, the evidence for the best model and against the weaker model is strong. A Δ BIC of greater than ten means the evidence favoring our best model vs the alternate is very strong indeed.

Suppose you have a set of data with 50 observation points, and Model 1 estimates 3 parameters while Model 2 estimates 4 parameters. Let’s say the log of your maximum likelihood for model 1 is a, and for model 2 it is 2a. Using the formula k log(n) – 2 log(L(θ̂)) with base-10 logs (log 50 ≈ 1.7), the BIC for model 1 is about 3(1.7) – 2a = 5.1 – 2a, and the BIC for model 2 is about 4(1.7) – 4a = 6.8 – 4a, so Δ BIC = (6.8 – 4a) – (5.1 – 2a) = 1.7 – 2a.

Since the evidence that the Bayesian information criterion gives us for model 1 is only worth mentioning if 1.7 – 2a > 2, we can only claim positive evidence for model 1 if –2a > 0.3; that is to say, if a < –0.15.
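
A minimal sketch of this comparison in Python, mirroring the worked example (so base-10 logs are used, and the value chosen for a is an assumption for illustration):

```python
import numpy as np

def bic(k, n, log10_lik):
    # BIC = k*log(n) - 2*log(L); base-10 logs to match the worked
    # example (swap in np.log for the usual natural-log definition).
    return k * np.log10(n) - 2 * log10_lik

n = 50     # observations
a = -0.2   # assumed illustrative value for log10(L) of model 1

bic1 = bic(k=3, n=n, log10_lik=a)      # model 1: 3 parameters
bic2 = bic(k=4, n=n, log10_lik=2 * a)  # model 2: 4 parameters

delta = bic2 - bic1  # about 1.7 - 2a
print(f"BIC1 = {bic1:.2f}, BIC2 = {bic2:.2f}, delta BIC = {delta:.2f}")
```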


Alpha levels and beta levels are related: an alpha level is the probability of a type I error, or rejecting the null hypothesis when it is true. A beta level, usually just called beta (β), is the opposite: the probability of accepting the null hypothesis when it’s false. You can also think of beta as the probability of incorrectly concluding that there is no statistical significance (if there were, you would have rejected the null).
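
As an illustration, beta can be computed directly for a simple one-sided z-test. This is a minimal sketch using scipy; the alpha level and effect size are assumed values:

```python
from scipy.stats import norm

# One-sided z-test: H0 says mu = 0; the true effect is measured in
# standard-error units. Alpha and the effect size are assumed values.
alpha = 0.05
effect_in_se = 2.0
z_crit = norm.ppf(1 - alpha)  # reject H0 when z > z_crit (~1.645)

# Beta is the chance the test statistic stays below the critical value
# even though the alternative is true (failing to reject a false null).
beta = norm.cdf(z_crit - effect_in_se)
print(f"beta = {beta:.3f}, power = {1 - beta:.3f}")
```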
