society with passing time. We are constantly being bombarded with charts, graphs, and statistics of various types in an attempt to provide us with succinct information to make decisions. Sometimes this information is presented in a manner so as to sway us toward a particular view. As consumers and decision makers we must be aware of this.

Which drug should we take? Which car should we buy? Where will the economy go? Who is infected with a particular deadly disease? These are all examples of questions which are usually relegated to the statistician for analysis and dissemination. This lecture will attempt to introduce the beginning student to some of the reasoning behind the necessity of statistical inference.

To realistically understand the subject of statistics, it is important to appreciate the rationale behind why and how statistics is used in the world at large. That is, why do we need statistics anyway?

This, perhaps, is a bit philosophical, yet I cannot overemphasize the need for thinking along these lines. Without proper perspective, statistics becomes a mere mathematical exercise, diverging from the true nature of the subject.

To begin our analysis of why statistics is a necessary type of reasoning, we must first address the nature of science and experimentation. A characteristic method used by scientists is to study a relatively small collection of objects, say 2,500 people, and a characteristic, say longevity, and through experimentation or observation, draw a conclusion appropriate for the entire class of objects (i.e., people in general).

For example, suppose a study published results suggesting *people who own pets live longer.* Would this mean that all people who own pets are likely to live long lives? Does owning a pet *cause* longevity?

Suppose the people in the study happened, by chance, to be very healthy people on the whole, and therefore lived long lives. Would this invalidate the researcher's assertion that people who own pets live longer?

The obvious problem with this type of reasoning is that such conclusions can never be proved absolutely. This type of scientific reasoning is called **inductive reasoning**, and it is inherently flawed: one can never study a sample and expect conclusions to hold true for the entire population with absolute certainty. This is exactly why statistics is needed.
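The role of chance described above can be illustrated with a short simulation, a minimal sketch using only Python's standard library (the population, its size, and its mean and spread are hypothetical, not taken from any real study). Repeatedly drawing samples of 2,500 from the same population gives slightly different answers each time, which is precisely why a single sample can mislead by chance:

```python
import random

random.seed(42)

# Hypothetical population of 100,000 lifespans,
# centered near 76 years with some natural spread.
population = [random.gauss(76, 10) for _ in range(100_000)]

# Draw three independent samples of 2,500 people each.
# Each sample's mean differs slightly from the others,
# purely by the luck of who was sampled.
for trial in range(3):
    sample = random.sample(population, 2500)
    mean = sum(sample) / len(sample)
    print(f"Sample {trial + 1} mean longevity: {mean:.2f} years")
```

No sample mean exactly equals the population mean, and no two samples agree exactly; statistics exists to quantify how large such chance discrepancies can plausibly be.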

In contrast to the lack of certainty associated with inductive reasoning, the type of logic used in mathematics is absolutely certain. The mathematician begins with general principles and logically deduces more specific relationships. This type of reasoning from the general to the particular is called **deductive reasoning**.

A rather simplistic (but nevertheless correct) example is based on the principle that two numbers can be added in any order, thereby giving the same sum. This is called the axiom of commutativity. An example of deductive reasoning would be to assert that since this holds for any two numbers, surely it must hold for the numbers two and three in particular. We are, therefore, absolutely certain that 2 + 3 = 3 + 2, given the axiom of commutativity.
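The deductive step can be mirrored in a few lines of Python (a minimal sketch; of course, executing code merely checks particular instances, whereas the deduction guarantees the result for *all* numbers at once):

```python
# The general principle: addition is commutative, a + b == b + a for all a, b.
# Deduction applies it to the particular case a = 2, b = 3.
a, b = 2, 3
assert a + b == b + a  # 2 + 3 = 3 + 2, guaranteed by commutativity
print(a + b, b + a)    # both sums are 5
```

Note the contrast with the sampling situation: here no amount of further "data" could overturn the conclusion, because it follows deductively from the axiom.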

In its applied form, statistics then becomes a bridge between the inductive uncertainty of science and the deductive certainty of mathematics. In his classic book, *The Design of Experiments*, Sir Ronald A. Fisher expresses this idea beautifully:

> We may at once admit that any inference from the particular to the general must be attended with some degree of uncertainty, but this is not the same as to admit that such inference cannot be absolutely rigorous, for the nature and degree of the uncertainty may itself be capable of rigorous expression.

Statistics, therefore, is the mathematical method by which the uncertainty inherent in the scientific method is rigorously quantified.