What do you know about statistics? Are you a business student? Are you a manager in a company? Do you want to be an economist? If so, this course is for you. The principles of statistics are among the basics of economics.
Most of the time, people find a course like this boring and difficult, and statistics can indeed feel dry. In this book, you will see that the whole course is laid out in an easy-to-read, easy-to-understand way. Study it over a cup of tea and get to know the principles of statistics. In short, it is a complete course that will help you understand the principles of statistics.
What you’ll learn in this book:
Basics of Statistics
Population and Sample
Descriptive and Inferential Statistics
Measures of Center
Measures of Variation
Organization of Data
What is Estimation?
Summarization of Bivariate Data
Don't think twice; buy this course now!
- Introduction
- Chapter 1 – Basics of Statistics
- Chapter 2 – Population and Sample
- Chapter 3 – Descriptive and Inferential Statistics
- Chapter 4 – Variables
- Chapter 5 – Measures of Center & Measures of Variation
- Chapter 6 – Standard Deviation
- Chapter 7 – Organization of the Data
- Chapter 8 – Estimation and Hypothesis Testing
- Chapter 9 – Summarization of Bivariate Data
- Conclusion
Statistics is the science concerned with applying quantitative principles to the collection, analysis, and presentation of numerical data. The practice of statistics uses data from a population in order to describe it meaningfully, to draw conclusions from it, and to make informed decisions. The population may be a community, an organization, a production line, a service counter, or a phenomenon such as the weather. Statisticians determine which quantitative model is right for a given type of problem, and they decide what kinds of data should be gathered and analyzed. Applied statistics concerns the application of the general methodology to specific problems.
Statisticians are key contributors to experimental work. They use their quantitative knowledge to design data collection plans, process the data, analyze the data, and interpret the results. Further, statisticians often make critical assessments of the reliability of data and of whether inferences drawn from it can be made with confidence. They also recognize misleading uses of data that may paint an inaccurate picture of a situation.
Statistics plays a vital role in economics, which relies heavily on it. National income accounts are multipurpose indicators for economists and administrators, and statistical methods are used to prepare them. In economic research, statistical methods are used for collecting and analyzing data and for testing hypotheses. The relationship between supply and demand is studied with statistical techniques; imports and exports, the inflation rate, and per capita income are all topics that require a good knowledge of statistics. A good businessman must be quick and accurate in decision making. He knows what his customers need, and he should therefore know what to produce and sell, and in what quantities. Statistics helps the businessman plan production according to the tastes of his customers, and the quality of products can also be checked more efficiently by using statistical methods.
Theoretical statistics concerns general classes of problems and the development of general methodology. Statisticians, for the most part, build models based on probability theory. Probability theory is the branch of mathematics that develops models for "chance variations" or "random phenomena." It began as a discipline when mathematicians of the seventeenth century started calculating the odds in various games of chance. It was soon realized that the theory they developed could be applied to the study of errors in experimental measurements and to the study of human mortality (for instance, by life insurance companies). Probability theory is now a major field with wide-ranging applications in engineering and science.
A word of advice: concentrate while studying this course; otherwise, you will not be able to fully understand it.
Let’s start the course now!
Statistics is an exceptionally broad subject, with applications in a vast number of different fields. In general, one can say that statistics is the methodology for collecting, analyzing, interpreting, and drawing conclusions from information. Statistics is the methodology which scientists and mathematicians have developed for interpreting and drawing conclusions from collected data. Everything that deals even remotely with the collection, processing, interpretation, and presentation of data belongs to the domain of statistics, and so does the detailed planning that precedes each of these activities.
Statistics consists of a body of methods for collecting and analyzing data.
From the above, it should be clear that statistics is a great deal more than the mere tabulation of numbers and the graphical presentation of those tabulated numbers. Statistics is the science of gaining information from numerical and categorical data. Statistical methods can be used to find answers to questions like:
What kind of data, and how much, should be collected?
How should we organize and summarize the data?
How can we analyze the data and draw conclusions from it?
How can we assess the strength of the conclusions and evaluate their uncertainty?
That is, statistics provides methods for
Designing and carrying out research studies
Summarizing and analyzing data
Making predictions and generalizations about phenomena represented by the data
Furthermore, statistics is the science of dealing with uncertain phenomena and events. In practice, statistics is applied successfully to study the effectiveness of medical treatments, the reaction of consumers to television advertising, the attitudes of young people toward sex and marriage, and much more. It is safe to say that nowadays statistics is used in every field of science.
Population and sample are two basic concepts of statistics. The population can be described as the set of individual persons or objects in which an investigator is primarily interested during his or her research problem. Sometimes the desired measurements are obtained for all individuals in the population, but often only a subset of individuals of that population is observed; such a subset constitutes a sample. This gives us the following definitions of population and sample.
The population is the collection of all individuals or items under consideration in a statistical study.
The sample is that part of the population from which data is collected.
Not every property needs to be measured from individuals in the population. This observation highlights the importance of the set of measurements and thus gives us alternative definitions of population and sample.
A (statistical) population is the set of measurements corresponding to the entire collection of units about which inferences are to be made.
A sample from a statistical population is the set of measurements that are actually collected in the course of an investigation.
The population always represents the target of an investigation. We learn about the population by sampling from the collection.
There are two major types of statistics. The branch of statistics devoted to the summarization and description of data is called descriptive statistics, and the branch concerned with using sample data to make inferences about a population is called inferential statistics.
Descriptive statistics consists of methods for organizing and summarizing data.
Inferential statistics consists of methods for drawing conclusions about a population, and measuring the reliability of those conclusions, based on data obtained from a sample of the population.
Descriptive statistics includes the construction of graphs, charts, and tables, and the calculation of various descriptive measures such as averages, measures of variation, and percentiles.
Sometimes it is possible to collect data from the whole population. In that case, a descriptive study can be performed on the population as well as, more commonly, on a sample. Only when an inference is made about the population on the basis of data obtained from a sample does the study become inferential.
Usually the features of the population under investigation can be summarized by numerical parameters. Consequently, the research problem usually becomes an investigation of the values of those parameters. These population parameters are unknown, and sample statistics are used to make inferences about them. That is, a statistic describes a characteristic of the sample, which can then be used to make inferences about the unknown parameters.
A parameter is an unknown numerical summary of the population. A statistic is a known numerical summary of the sample which can be used to make inferences about parameters.
So the inference about a particular unknown parameter is based on a statistic. We use known sample statistics to make inferences about unknown population parameters. The sample, and the statistics describing it, are important only insofar as they provide information about the unknown parameters.
A characteristic that varies from one individual or thing to another is known as a variable; i.e., a variable is any characteristic that varies from one member of the population to another. Examples of variables for humans are height, weight, number of siblings, sex, marital status, and eye color. The first three of these variables yield numerical data (numerical measurements) and are examples of quantitative (or numerical) variables; the last three yield non-numerical data and are examples of qualitative variables.
Quantitative variables can be classified as either discrete or continuous.
Some variables, such as the number of children in a family, the number of accidents on a particular road on different days, or the number of students taking a basic statistics course, are the result of counting, and so are discrete variables. Typically, a discrete variable is a variable whose possible values are some or all of the ordinary counting numbers 0, 1, 2, 3, . . . As a definition, we can say that a variable is discrete if it has only a countable number of distinct possible values. That is, a variable is discrete if it can assume only a finite number of values, or as many values as there are whole numbers.
Quantities such as length, weight, or temperature can in principle be measured arbitrarily accurately. Weight may be measured to the nearest gram, but it could be measured more accurately, say to a tenth of a gram. Such a variable, called continuous, is intrinsically different from a discrete variable.
Besides being classified as either qualitative or quantitative, variables can be described by the scale on which they are defined. The scale of a variable gives a certain structure to the variable and also defines its meaning.
The data of a quantitative variable can also be presented by a frequency distribution. In place of the qualitative categories, we now list in a frequency table the distinct numerical measurements that appear in the discrete data set and then count their frequencies.
If the discrete variable can take a wide variety of values, or if the quantitative variable is continuous, then the data must be grouped into classes (categories) before a table of frequencies can be formed. The basic steps in grouping a quantitative variable into classes are:
Find the minimum and maximum values of the variable in the data set.
Choose intervals of equal length that cover the range between the minimum and the maximum without overlapping. These are called class intervals, and their end points are called class limits.
Count the number of observations in the data that belong to each class interval. The count in each class is the class frequency.
Calculate the relative frequency of each class by dividing the class frequency by the total number of observations in the data.
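The four grouping steps above can be sketched in a few lines of Python. The data values and the class width here are invented purely for illustration:

```python
# A minimal sketch of the four grouping steps, using plain Python.
# The data values and the class width are made up for illustration.

data = [2.3, 4.1, 5.5, 3.2, 4.8, 1.9, 5.1, 3.7, 2.8, 4.4]

lo, hi = min(data), max(data)             # step 1: minimum and maximum
width = 1.0                               # step 2: equal-length class intervals
n_classes = int((hi - lo) // width) + 1
limits = [(lo + i * width, lo + (i + 1) * width) for i in range(n_classes)]

freq = []
for a, b in limits:                       # step 3: count observations per class
    # half-open intervals [a, b) avoid double counting; the last class
    # is closed so the maximum value is not dropped
    if (a, b) == limits[-1]:
        freq.append(sum(a <= x <= b for x in data))
    else:
        freq.append(sum(a <= x < b for x in data))

rel_freq = [f / len(data) for f in freq]  # step 4: relative frequencies

print(freq, rel_freq)
```

Using half-open class intervals is one common convention; it guarantees that every observation is counted exactly once.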
Frequency distributions for a variable apply both to a population and to samples from that population. The first type is called the population distribution of the variable, and the second type is called a sample distribution. In a sense, the sample distribution is a blurry photograph of the population distribution. As the sample size increases, the sample relative frequency in any class interval gets closer to the true population relative frequency. Thus the photograph gets clearer, and the sample distribution looks more like the population distribution.
Now, if the sample size increases indefinitely while the number of class intervals simultaneously increases, with their widths narrowing, the shape of the sample histogram gradually approaches a smooth curve. We use such curves to represent population distributions.
Descriptive measures that indicate where the center, or the most typical value, of the variable lies in a collected set of measurements are called measures of center. Measures of center are often referred to as averages.
The median and the mean apply only to quantitative data, while the mode can be used with either quantitative or qualitative data.
The sample mode of a qualitative or a discrete quantitative variable is the value of the variable which occurs with the greatest frequency in the data set.
The sample median of a quantitative variable is the value of the variable in the data set that divides the set of observed values into two halves, so that the observed values in one half are less than or equal to the median value and the observed values in the other half are greater than or equal to the median value. To obtain the median of the variable, we arrange the observed values in the data set in increasing order and then determine the middle value in the ordered list.
The most commonly used measure of center for a quantitative variable is the (arithmetic) sample mean. When people speak of taking an average, it is the mean that they are usually referring to.
The mode should be used when calculating the measure of center for a qualitative variable. When the variable is quantitative with a symmetric distribution, the mean is an appropriate measure of center.
It should be noted that the sample mode, sample mean, and sample median of the variable in question have corresponding population measures of center; i.e., we can assume that the variable in question also has a population mode, a population mean, and a population median, all of which are unknown. The sample mode, the sample median, and the sample mean can then be used to estimate the values of these corresponding unknown population quantities.
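The three measures of center can be computed directly with Python's standard library; the sample values below are invented:

```python
# The three measures of center, computed with the standard library.
from statistics import mode, median, mean

sample = [2, 3, 3, 5, 7, 8, 3, 5]

print(mode(sample))    # value occurring with the greatest frequency
print(median(sample))  # middle value of the ordered list
print(mean(sample))    # arithmetic average
```

For this sample the mode is 3, and since the sample size is even, the median is the average of the two middle values of the ordered list.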
In addition to locating the center of the observed values of the variable in the data, another important aspect of a descriptive study of the variable is numerically measuring the extent of variation around the center. Two data sets of the same variable may exhibit similar positions of center yet be remarkably different with respect to variability.
Just as there are several different measures of center, there are also several different measures of variation. Measures of variation are used mostly for quantitative variables only.
The sample range is obtained by computing the difference between the largest observed value of the variable in the data set and the smallest one.
The sample range of the variable is quite easy to compute. However, in using the range, a great deal of information is ignored; that is, only the largest and smallest values of the variable are considered, and the other observed values are disregarded. It should also be remarked that the range can never decrease, but can increase, when additional observations are included in the data set, and in that sense the range is overly sensitive to the sample size.
The minimum, the maximum, and the quartiles together give information on the center and variation of the variable in a nicely compact way. Written in increasing order, they comprise what is known as the five-number summary of the variable.
A boxplot is based on the five-number summary, and it can be used to give a graphical display of the center and variation of the observed values of the variable in a data set. Actually, two kinds of boxplots are in common use: the boxplot and the modified boxplot. The main difference between the two is that potential outliers (i.e., observed values which do not appear to follow the characteristic distribution of the rest of the data) are plotted individually in a modified boxplot, but not in an ordinary boxplot.
The sample standard deviation is the most frequently used measure of variability, even though it is not as easily understood as the range. It can be thought of as a kind of average of the absolute deviations of the observed values from the mean of the variable in question.
The more variation there is in the observed values, the larger the standard deviation of the variable in question.
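The measures of variation discussed above can be sketched as follows. The data are invented, and note that `statistics.quantiles` uses one common quartile convention; other conventions give slightly different quartile values:

```python
# The range, the five-number summary, and the sample standard deviation.
from statistics import quantiles, stdev

data = [4, 7, 2, 9, 4, 6, 8, 3, 5, 10]

sample_range = max(data) - min(data)
q1, q2, q3 = quantiles(data, n=4)     # quartiles (one common convention)
five_number = (min(data), q1, q2, q3, max(data))
s = stdev(data)                       # sample standard deviation

print(sample_range, five_number, round(s, 3))
```

Adding an extreme value to `data` could only increase `sample_range`, never decrease it, which illustrates the range's sensitivity noted above.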
Inferential statistical methods use sample data to make predictions about the values of useful summary descriptions, called parameters, of the population of interest. This is necessary because parameter values are typically unknown; otherwise we would not need inferential methods. If the data are inconsistent with the assumed parameter values, then we infer that the actual parameter values are somewhat different.
We first define the term probability, using a relative frequency approach. Imagine a hypothetical experiment consisting of a long sequence of repeated observations of some random phenomenon. Each observation may or may not result in some particular outcome. The probability of that outcome is defined to be the relative frequency of its occurrence in the long run.
A simple example of such an experiment is a long sequence of flips of a coin, the outcome of interest being that a head faces upward. Any one flip may or may not result in a head. If the coin is balanced, then a basic result in probability, called the law of large numbers, implies that the proportion of flips resulting in a head tends toward 1/2 as the number of flips increases.
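A quick simulation makes the coin-flipping example concrete: as the number of flips grows, the proportion of heads tends toward 1/2. The seed is fixed only so the run is reproducible:

```python
# Simulating the law of large numbers with repeated coin flips.
import random

random.seed(0)  # fixed seed so the run is reproducible

for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(n, heads / n)  # proportion of heads approaches 1/2
```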
More often than not we are dealing with variables which have numerical outcomes. A variable which can take at least two different numerical values in a long run of repeated observations is called a random variable.
The classes into which a qualitative variable falls may or may not have a natural ordering. For example, occupational classes have no natural ordering. If the classes of a qualitative variable are unordered, then the qualitative variable is said to be defined on a nominal scale, the word nominal referring to the fact that the categories are merely named. If the categories can be put in order, the scale is called an ordinal scale.
Quantitative variables, whether discrete or continuous, are defined either on an interval scale or on a ratio scale. If one can compare the differences between measurements of the variable meaningfully, but not the ratios of the measurements, then the quantitative variable is defined on an interval scale. For the ratios of the measurements to be meaningful, the variable must have a natural, meaningful absolute zero point. For example, temperature measured on the Centigrade scale is an interval variable, and the height of a person is a ratio variable.
Observing the values of the variables for one or more people or things yields data. Each individual piece of data is called an observation, and the collection of all observations for particular variables is called a data set or data matrix. The data set consists of the values of the variables recorded for a set of sampling units.
For ease of handling (recording and sorting) the values of a qualitative variable, they are often coded by assigning numbers to the different categories, thus converting the categorical data to numerical data in a trivial sense.
Data are presented in matrix form (a data matrix). All the values of a particular variable are organized into the same column; the values of a variable form a column in the data matrix. An observation, i.e., the measurements collected from a sampling unit, forms a row in the data matrix.
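As a small sketch of these ideas, here is a hypothetical data matrix in Python, with a qualitative variable coded as numbers. All names and values are invented:

```python
# A data matrix: each row is one observation (sampling unit),
# each column one variable; qualitative values are coded as numbers
# purely for convenience. All names and values are invented.

# coding scheme for the qualitative variable "eye color"
eye_color_codes = {"brown": 1, "blue": 2, "green": 3}

# columns: height (cm), number of siblings, eye color (coded)
data_matrix = [
    [172, 2, eye_color_codes["brown"]],
    [165, 0, eye_color_codes["blue"]],
    [180, 1, eye_color_codes["green"]],
]

heights = [row[0] for row in data_matrix]  # one variable = one column
print(heights)
```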
Statistical inference draws conclusions about the population on the basis of data. The data are summarized by statistics, such as the sample mean and the sample standard deviation. When the data are produced by random sampling or randomized experimentation, a statistic is a random variable that obeys the laws of probability theory. The link between probability and data is formed by the sampling distributions of statistics. A sampling distribution shows how a statistic would vary in repeated data production.
Every statistic has a sampling distribution. A sampling distribution is simply a type of probability distribution. Unlike the distributions studied so far, a sampling distribution refers not to individual observations but to the values of a statistic computed from those observations, in sample after sample.
The sampling distribution reflects the sampling variability that occurs in collecting data and using sample statistics to estimate parameters. The sampling distribution of a statistic based on n observations is the probability distribution for that statistic resulting from repeatedly taking samples of size n, each time calculating the value of the statistic. The form of a sampling distribution is often known theoretically. We can then make probabilistic statements about the value of the statistic for one sample of some fixed size n.
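A sampling distribution can be approximated by simulation: repeatedly draw samples of size n from one population and record the sample mean each time. The population here is just the faces of a fair die, an assumption made purely for illustration:

```python
# Simulating the sampling distribution of the sample mean.
import random
from statistics import mean, stdev

random.seed(1)
population = [1, 2, 3, 4, 5, 6]   # a fair die, for illustration
n = 30                            # size of each sample

sample_means = [
    mean(random.choices(population, k=n))  # one sample, one statistic
    for _ in range(2_000)                  # repeated data production
]

# The means cluster around the population mean (3.5), with far less
# spread than the population itself.
print(mean(sample_means), stdev(sample_means))
```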
Statistical inference uses sample data to form two types of estimators of parameters. A point estimate consists of a single number, calculated from the data, that is the best single guess for the unknown parameter. An interval estimate consists of a range of numbers around the point estimate, within which the parameter is believed to fall.
With point estimation, a single number stands in the foreground, even though a standard error is attached to it. Instead, it is often more desirable to produce an interval of values that is likely to contain the true value of the unknown parameter.
A confidence interval estimate of a parameter consists of an interval of numbers obtained from a point estimate of the parameter, together with a percentage that specifies how confident we are that the parameter lies in the interval. That confidence percentage is called the confidence level.
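As a minimal sketch, here is a 95% confidence interval for a population mean using the normal approximation (z = 1.96); the data values are invented:

```python
# A 95% confidence interval for a population mean (normal approximation).
from statistics import mean, stdev
from math import sqrt

data = [5.1, 4.9, 5.3, 5.0, 4.8, 5.2, 5.1, 4.7, 5.0, 5.4]

n = len(data)
point_estimate = mean(data)
standard_error = stdev(data) / sqrt(n)

z = 1.96                                  # 95% confidence level
interval = (point_estimate - z * standard_error,
            point_estimate + z * standard_error)

print(point_estimate, interval)
```

For small samples, a t critical value would normally replace the fixed z = 1.96; the normal approximation keeps the sketch short.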
A common aim in many studies is to check whether the data agree with certain predictions. These predictions are hypotheses about the variables measured in the study.
A hypothesis is a statement about some characteristic of a variable or a collection of variables.
Hypotheses arise from the theory that drives the research. When a hypothesis relates to characteristics of a population, such as population parameters, one can use statistical methods with sample data to test its validity.
A significance test is a way of statistically testing a hypothesis by comparing the data to the values predicted by the hypothesis. Data that fall far from the predicted values provide evidence against the hypothesis. All significance tests have five elements: assumptions, hypotheses, test statistic, p-value, and conclusion.
All significance tests require certain assumptions for the tests to be valid. These assumptions refer, e.g., to the type of data, the form of the population distribution, the method of sampling, and the sample size.
A significance test considers two hypotheses about the value of a population parameter: the null hypothesis and the alternative hypothesis.
The null hypothesis H0 is the hypothesis that is directly tested. It is usually a statement that the parameter has a value corresponding to, in some sense, no effect.
The alternative hypothesis Ha is the one that contradicts the null hypothesis. It states that the parameter falls in some alternative set of values to those the null hypothesis specifies.
A significance test analyzes the strength of the sample evidence against the null hypothesis. The test is conducted to investigate whether the data contradict the null hypothesis, thereby suggesting that the alternative hypothesis is true. The alternative hypothesis is judged acceptable if the sample data are inconsistent with the null hypothesis. The hypotheses are formulated before gathering or examining the data.
The test statistic is a statistic computed from the sample data to test the null hypothesis. It typically involves a point estimate of the parameter to which the hypotheses refer.
Using the sampling distribution of the test statistic, we compute the probability that values of the statistic like the one observed would occur if the null hypothesis were true. This gives a measure of how unusual the observed test statistic value is compared with what H0 predicts. That is, we consider the set of possible test statistic values that provide at least as much evidence against the null hypothesis as the observed test statistic.
This set is formed with reference to the alternative hypothesis: the values giving stronger evidence against the null hypothesis are those giving stronger evidence for the alternative hypothesis. The p-value is the probability, if H0 were true, that the test statistic would fall in this set of values.
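The five elements of a significance test can be sketched for the simplest case: a one-sample z-test about a population mean, assuming a known population standard deviation. All the numbers are invented:

```python
# A one-sample z-test sketch: hypotheses, test statistic, p-value.
from math import sqrt, erf

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

# hypotheses: H0: mu = 100   vs   Ha: mu != 100
mu0 = 100.0
sigma = 15.0          # assumed known population standard deviation
n = 36
sample_mean = 106.0   # invented sample result

# test statistic: how many standard errors the sample mean lies from mu0
z = (sample_mean - mu0) / (sigma / sqrt(n))

# p-value: probability, if H0 were true, of a statistic at least this
# extreme (two-sided alternative)
p_value = 2 * (1 - normal_cdf(abs(z)))

print(z, p_value)  # a small p-value is evidence against H0
```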
The observed values of the two variables in question, bivariate data, may be qualitative or quantitative in nature. That is, each variable may be either qualitative or quantitative. We examine each of these possibilities.
Bivariate qualitative data result from the observed values of two qualitative variables.
In a two-way frequency table, the classes (or categories) for one variable (called the row variable) are marked along the left edge, those for the other (called the column variable) along the upper edge, and the frequency counts are recorded in the cells. The summary of bivariate data by a two-way frequency table is known as a cross-tabulation or cross-classification of the observed values. In statistical terminology, two-way frequency tables are also called contingency tables.
The simplest frequency table is the 2 × 2 frequency table, where each variable has only two classes. Similarly, there can be 2 × 3 tables, 3 × 3 tables, and so on, where the first number gives the number of rows in the table and the second the number of columns.
The conditional distributions are a means of discovering whether there is an association between the column and row variables. If the row percentages are clearly different in each row, then the conditional distributions of the column variable change from row to row, and we can infer that there is an association between the variables, i.e., the value of the row variable affects the value of the column variable. Entirely analogously, if the column percentages are clearly different in each column, then the conditional distributions of the row variable change from column to column, and we can infer that there is an association between the variables, i.e., the value of the column variable affects the value of the row variable.
The direction of the association depends on the shapes of the conditional distributions. If the row percentages (or the column percentages) are roughly the same from row to row (or from column to column), then there is no association between the variables, and we say that the variables are independent.
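As a small illustration of row percentages and conditional distributions, here is a hypothetical 2 × 2 two-way frequency table in Python; all labels and counts are invented:

```python
# A 2 x 2 two-way frequency table and its row percentages.
# All labels and counts are invented for illustration.

#                 column variable
#                 yes    no
table = {"men":   [30,   20],
         "women": [15,   35]}

for row_label, counts in table.items():
    total = sum(counts)
    row_pct = [100 * c / total for c in counts]  # conditional distribution
    print(row_label, row_pct)
```

Here the row percentages differ clearly from row to row (60/40 versus 30/70), which suggests an association between the row and column variables; roughly equal row percentages would instead suggest independence.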
Whether to use the row or the column percentages for the inference of possible association depends on which variable is the response variable and which the explanatory variable.
A response variable measures an outcome of a study. An explanatory variable attempts to explain the observed outcomes.
In general, it is not always possible to identify which variable is the response variable and which the explanatory variable. In that case, we can use either row or column percentages to find out whether there is an association between the variables. If we then determine that there is an association, we cannot say that one variable is causing changes in the other; i.e., association does not imply causation.
On the other hand, if we can identify the row variable as the response variable and the column variable as the explanatory variable, then the conditional distributions of the row variable for the different classes of the column variable should be compared in order to find out whether there is association and causation between the variables. Likewise, if we can identify the column variable as the response variable and the row variable as the explanatory variable, then the conditional distributions of the column variable should be compared. In any case, especially with two qualitative variables, we have to be very careful about whether the association really means that there is also causation between the variables.
Qualitative bivariate data are best displayed graphically by either clustered or stacked bar graphs. Likewise, pie charts partitioned by the different classes of one variable (called plotted pie graphs) can be informative.
For a situation of one variable being qualitative and the other quantitative, we can at present utilize a two-way frequency table to figure out whether there is the association between the variables or not. The inference is then taking into account the conditional distributions computed from the two-way frequency table.
For the most part, if there should arise an occurrence of one variable being qualitative and the other quantitative, we are occupied with how the quantitative variable is appropriated in distinctive classes of the qualitative variable. By examining conditional distributions thusly, we accept that the quantitative variable is the reaction variable and qualitative the explanatory variable.
When the response variable is quantitative and the explanatory variable is qualitative, the comparison of the conditional distributions of the quantitative variable must be based on specific measures that describe those conditional distributions. We know from earlier chapters that measures of center and measures of variation can be used to describe the distribution of a variable. Accordingly, we can describe the conditional distributions by calculating conditional measures of center and conditional measures of variation from the observed values of the response variable within each category of the explanatory variable.
These conditional measures of center and variation can then be used to discover whether there is an association (and possibly causation) between the variables. For instance, if the conditional means of the quantitative variable vary clearly across the categories of the qualitative variable, we can conclude that there is an association between the variables.
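A minimal sketch of comparing conditional measures, using a small invented data set: the group names and scores below are hypothetical, and Python's standard `statistics` module supplies the mean and sample standard deviation.

```python
import statistics

# Hypothetical data: exam scores (quantitative response variable)
# grouped by study method (qualitative explanatory variable).
# All values are invented for illustration.
scores = {
    "lecture": [62, 70, 68, 75, 66],
    "seminar": [78, 85, 80, 90, 82],
}

# Conditional mean and conditional (sample) standard deviation
# within each category of the explanatory variable.
for method, values in scores.items():
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    print(f"{method}: mean={mean:.1f}, stdev={stdev:.2f}")
```

Here the conditional means (68.2 versus 83.0) differ clearly between the two groups, which suggests an association between study method and score.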
When both variables are quantitative, the methods presented above can of course be applied to detect a possible association between them. Both variables can first be grouped into classes, and then the joint distribution can be displayed in a two-way frequency table. Alternatively, it is possible to group only one of the variables and then examine the conditional measures of center and variation of the other variable in order to detect a possible association.
But when both variables are quantitative, the ideal way to see their relationship graphically is to construct a scatterplot. The construction of scatterplots and the use of correlation coefficients are covered more fully in the following section.
The best way to display the relationship between two quantitative variables is a scatterplot. The values of one variable appear on the horizontal axis, and the values of the other variable appear on the vertical axis. Each individual in the data appears as a point in the plot, fixed by the values of both variables for that individual. Always plot the explanatory variable, if there is one, on the horizontal axis (the x-axis) of a scatterplot. As a reminder, we usually call the explanatory variable x and the response variable y. If there is no explanatory/response distinction, either variable can go on the horizontal axis.
The most important form of relationship between variables is a linear relationship, where the points in the plot show a straight-line pattern. Curved relationships and clusters are other forms to look for.
The strength of a relationship is determined by how close the points in the scatterplot lie to a simple form such as a line.
A scatterplot gives a visual impression of the nature of the relation between the x and y values in a bivariate data set. In many cases, the points appear to band around a straight line.
Two variables may have a high correlation without being causally related. Correlation ignores the distinction between explanatory and response variables and merely measures the strength of the linear association between two variables.
The sample correlation coefficient is also called the Pearson correlation coefficient. As should be clear by now, the Pearson correlation coefficient can be computed only when both variables are quantitative, i.e., measured at least on an interval scale.
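As a sketch, the sample Pearson correlation coefficient can be computed directly from its definition, dividing the sum of cross-deviations by the square root of the product of the squared-deviation sums. The data below are invented for illustration.

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient for two quantitative
    variables, computed directly from the definition."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # Sum of cross-deviations and sums of squared deviations.
    sxy = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sxx = sum((a - mean_x) ** 2 for a in x)
    syy = sum((b - mean_y) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Perfectly linear invented data gives the maximum value r = 1.
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # → 1.0
```

A value near +1 or -1 indicates a strong linear association; a value near 0 indicates a weak one. Remember that even r close to 1 does not by itself establish causation.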
Hopefully, the course above has helped you understand a little better what statistics is about. Still, you may be thinking "Why do I have to learn statistics?" or "What future benefit can I get from a statistics course?" There are many reasons to study statistics.
The first reason is to be able to conduct research effectively. Without the use of statistics, it would be very hard to make decisions based on the data gathered from a research project.
Now a number of students may be saying to themselves: "But I never plan to do any research." While you may never plan to be involved in research, it may find its way into your life anyway.
Another reason to study statistics is to be able to read journals. Most technical journals you will read contain some form of statistics, usually in a part known as the results section. Without a knowledge of statistics, the information in this section will be meaningless. A knowledge of basic statistics will give you the skills you need to read and evaluate most results sections.
Another reason is to further develop critical and analytic thinking skills. The study of statistics will sharpen and extend these skills, because doing well in statistics requires formal logical reasoning that is both high-level and creative.
The last but not least reason to study statistics is to be an informed consumer. Like any other tool, statistics can be used or misused. Yes, some people do actively lie and mislead with statistics. More often, however, well-meaning people unintentionally report erroneous statistical conclusions. If you know the principles of statistics, you will be in a better position to evaluate the information you are given.
Please review the course and share your valuable feedback.
Thank you for reading, and I wish you the very best of luck in the future.