Types of Research Studies
- Qualitative research approaches
- Quantitative research approaches
- Classifying the six types of research
- Exercise on classifying research by type
Six types of research studies
The qualitative-versus-quantitative approach to the classification of research activities places all research studies into one of six categories.
Qualitative approach
The qualitative approach involves the collection of extensive narrative data in order to gain insights into phenomena of interest; data analysis includes the coding of the data and production of a verbal synthesis (an inductive process).
- Historical research
- Qualitative research
Quantitative approach
The quantitative approaches involve the collection of numerical data in order to explain, predict, and/or control phenomena of interest; data analysis is mainly statistical (a deductive process).
- Descriptive research
- Correlational research
- Causal-comparative research
- Experimental research
Qualitative research approaches
Historical research and qualitative research are the two types of research classified as qualitative research approaches.
Historical research involves the study of past events. The following are some examples of historical research studies mentioned by Gay:
1. Factors leading to the development and growth of cooperative learning.
2. Effects of decisions of the United States Supreme Court on American education.
3. Trends in reading instruction, 1940-1945.
Qualitative research, also referred to as ethnographic research, involves the study of current events rather than past events. It entails the collection of extensive narrative data (non-numerical data) on many variables over an extended period of time in a naturalistic setting. Participant observation, where the researcher lives with the subjects being observed, is frequently used in qualitative research. Case studies are also used in qualitative research.
Some examples of qualitative studies mentioned by Gay are:
1. A case study of parental involvement at a magnet school.
2. A multicase study of students who excel despite nonfacilitating environments.
3. The teacher as researcher: improving students' writing skills.
Quantitative research approaches
Descriptive research involves collecting data in order to test hypotheses or answer questions regarding the subjects of the study. In contrast with the qualitative approach, the data are numerical, typically collected through a questionnaire, an interview, or observation. In descriptive research, the investigator reports the numerical results for one or more variables on the subjects of the study.
Some examples of descriptive research studies mentioned by Gay are:
1. How do second-grade teachers spend their time?
2. How will citizens of Yorktown vote in the next election?
3. How do parents feel about a 12-month school year?
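Since descriptive studies boil down to reporting numerical summaries per variable, a minimal Python sketch may help make that concrete (the rating data below are invented for illustration, not from Gay):

```python
import statistics
from collections import Counter

# Invented 1-5 ratings for "How do parents feel about a 12-month
# school year?" (1 = strongly disagree ... 5 = strongly agree).
ratings = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

# A descriptive study simply reports results like these for each
# variable measured on the subjects -- no prediction, no causation.
print("mean rating:", statistics.mean(ratings))
print("frequency of each rating:", Counter(ratings).most_common())
```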
Correlational research attempts to determine whether, and to what degree, a relationship exists between two or more quantifiable (numerical) variables. However, it is important to remember that just because there is a significant relationship between two variables, it does not follow that one variable causes the other. When two variables are correlated, you can use the relationship to predict the value on one variable for a subject if you know that subject's value on the other variable. Correlation implies prediction but not causation. The investigator frequently uses the correlation coefficient to report the results of correlational research.
Some examples of correlational research mentioned by Gay are:
1. The relationship between intelligence and self-esteem.
2. The relationship between anxiety and achievement.
3. The use of an aptitude test to predict success in an algebra course.
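To make the "correlation implies prediction, not causation" point concrete, here is a minimal sketch using Python's standard library (requires Python 3.10+; the paired scores are invented, not from any real study):

```python
import statistics

# Invented paired scores for six subjects: aptitude-test score (x)
# and final algebra grade (y).
aptitude = [52, 61, 70, 75, 83, 90]
algebra  = [60, 65, 72, 74, 85, 88]

# The correlation coefficient r -- the statistic an investigator
# typically reports in correlational research.
r = statistics.correlation(aptitude, algebra)
print(f"r = {r:.2f}")

# Because the variables are correlated, we can PREDICT one from the
# other with a least-squares line -- but we still cannot say the
# aptitude score CAUSES the algebra grade.
slope, intercept = statistics.linear_regression(aptitude, algebra)
print(f"predicted grade for aptitude 80: {slope * 80 + intercept:.1f}")
```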
Causal-comparative research attempts to establish cause-effect relationships among the variables of the study. The attempt is to establish that values of the independent variable have a significant effect on the dependent variable. This type of research usually involves group comparisons. The groups in the study make up the values of the independent variable, for example gender (male versus female), preschool attendance versus no preschool attendance, or children with a working mother versus children without a working mother. These could be the independent variables for the sample studies listed below. However, in causal-comparative research the independent variable is not under the experimenter's control; that is, the experimenter can't randomly assign the subjects to a gender classification (male or female) but has to take the values of the independent variable as they come. The dependent variable in a study is the outcome variable.
Here are some examples of causal-comparative research studies mentioned by Gay:
1. The effect of preschool attendance on social maturity at the end of the first grade.
2. The effect of having a working mother on school absenteeism.
3. The effect of sex (gender) on algebra achievement.
Experimental research, like causal-comparative research, attempts to establish cause-effect relationships among the groups of subjects that make up the independent variable of the study, but in the case of experimental research the cause (the independent variable) is under the control of the experimenter. That is, the experimenter can randomly assign subjects to the groups that make up the independent variable in the study. In the typical experimental research design, the experimenter randomly assigns subjects to the groups or conditions that constitute the independent variable of the study and then measures the effect this group membership has on another variable, i.e., the dependent variable of the study.
The following are some examples of experimental research mentioned by Gay:
1. The comparative effectiveness of personalized instruction versus traditional instruction on computational skill.
2. The effect of self-paced instruction on self-concept.
3. The effect of positive reinforcement on attitude toward school.
Classifying the six types of research - Exercise on classifying research by type
As an exercise, classify each of the following as primarily:
- A. Historical Research
- B. Qualitative Research
- C. Descriptive Research
- D. Correlational Research
- E. Causal-comparative Research, or
- F. Experimental Research
1. Relationship between creativity and achievement.
2. Prediction of success in physics based on a physics aptitude test.
3. Effect of birth order on academic achievement.
4. Self-esteem of males versus females.
5. Attitudes of parents toward lowering the mandatory school attendance age from 16 to 14 years of age.
6. The ethnography of teacher-parent conferences.
7. Opinions of principals regarding decentralization of decision-making.
8. Effects of assertive discipline on the behavior of hyperactive children.
9. Relationship between time to run the 100-yard dash and high jumping performance.
10. Effectiveness of daily homework with respect to achievement in Algebra I.
And the answers are:
1. D. Correlational Research
2. D. Correlational Research
3. E. Causal-comparative Research
4. E. Causal-comparative Research
5. C. Descriptive Research
6. B. Qualitative Research
7. C. Descriptive Research
8. F. Experimental Research
9. D. Correlational Research
10. F. Experimental Research
Research Design
Families of Research Designs - Part I
Just to put things in perspective, cyberspace superstars...
We've looked at an overall flowchart or schematic of the entire research design and analysis process. Next, we spent some time focusing in on research questions or problem statements ... the "heart & soul" of the whole process.
We further focused in by talking about some important components of these research questions/problem statements: namely, variables and hypotheses.
Now it's time to move on to the "research design methodology" part of the flowchart. The design methodology (sometimes just called "design") consists of the label(s) that characterize the "general blueprint" of the design. As we'll see, usually more than one design label will apply to a particular study.
As with research questions or problem statements, these design "buzzwords" come in "families." We'll see that many of them "link" to particular "keywords" in our problem statements. Some of them also have to do with the form(s) of data that we are collecting: whether in numbers (quantitative), words (qualitative) or both (multimethod).
- Quantitative: data in numbers;
- Qualitative: data in words;
- Multimethod: data in both forms.
Now, some design labels apply only to qualitative studies -- while others could apply to a study that's any of the above 3 possibilities. We'll look at the qualitative labels in a future follow-up lesson. For now, let's look at the 2nd possibility: families of design methodology labels that could apply to any/all of the above 3 possibilities.
FAMILIES OF DESIGN METHODOLOGY THAT CORRESPOND TO QUANT/QUAL/MULTIMETHOD STUDIES
Most of these, as we'll see, "link" to certain "keywords" in the research question or problem statement!
I. Descriptive Designs
We've already seen these! And yes -- they link to descriptive questions/statements!
Key characteristics: "what is/what are"/identifying/exploratory-type studies.
Example: This study is to identify the perceived barriers to successful implementation of the Career Ladder Teacher Incentive & Development Program in X School District.
"Identify"/"what is - what are" (the perceived barriers) -> Descriptive problem statement AND also descriptive research design methodology!
Two "sub-types" (add'l. design methodology labels that could apply to "descriptive designs):"
- Survey - This label also applies to any study in which data or responses (be they quant/qual/both) are recorded via any form of what we think of as "survey instrumentation."
You've probably seen (more than you care to think
about! if you've been 'approached' by a 'needy dissertation stage doctoral
student' to participate in his/her study!) such surveys. They can take many
forms:
- Check-off items (e.g., gender, position);
- Fill-in-the-blank items;
- Likert-type scales (e.g., on a 5-point scale, say, from "strongly disagree" to "strongly agree," you're asked to circle or check your opinion regarding a statement such as, "The Career Ladder Teacher Incentive and Development Program provides ample opportunity for teacher advancement in my district")
- Open-ended fill-in items (you're asked to give a response in your own words, using the back of the survey sheet or extra paper if necessary; something like "Please state the three main reasons you chose to apply for the Career Ladder Teacher Incentive and Development Program this year.")
Types of survey research
While often these surveys are paper-&-pencil in
nature (e.g., you're handed one or receive it in the mail & asked to fill
it out and return it to the researcher), they are sometimes
"administered" orally in a face-to-face or telephone interview (e.g.,
the researcher records your answers him/herself).
There are other variations on survey-type
questions; the above are just examples of the most common forms and scaling of
such responses.
If the responses to our earlier example were
collected in the form of a survey -- be it, say, Likert-scaled attitudinal
items and/or open-ended questions where the teachers are asked to share the
perceived barriers in their own words -- then the study would be characterized
as a descriptive survey design methodology.
- Observational - In these design methodologies, instead of administering a survey instrument, the researcher collects data by observing/tallying/recording the occurrence or incidence of some outcome -- perhaps with the aid of assistants.
He/she might want to identify the most frequently
occurring type(s) of disruptive behavior in a particular classroom. With clear
prior agreement on what constitutes such "disruptive behavior"
(operational definitions of our variables are important, remember?! It becomes
an issue of "reliability," or verifiability that "we saw what we
saw" vs. "our own bias" of what constitutes this disruptive
behavior!), the researcher could develop a listing of such behaviors and
observe and record the number of times each one occurred in a particular
observation session in a classroom. (Again, he/she might wish to 'compare
notes' with assistants in order to enhance reliability or verifiability --
e.g., as a cross-check for accuracy).
This type of research would warrant the design
methodology label of not only "descriptive" (due to the
'identify/what is - what are [the most frequently occurring ... ]?') but also
"observational" due to the recording/tallying protocol.
(By the way, qualitative-type observations can also
be recorded. They don't have to be strictly numeric tallies. Examples that come
to mind include case notes of counselors, where they record their perceptions
in words.)
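For readers who think in code, an observational protocol is essentially tallying events against a checklist agreed on in advance. A minimal sketch (the behavior categories and session events are invented for illustration):

```python
from collections import Counter

# Operational definitions agreed on BEFORE observing -- the
# "reliability" safeguard discussed above.
CHECKLIST = {"calling out", "leaving seat", "off-task talking"}

# Events one observer recorded during a single classroom session.
events = ["calling out", "off-task talking", "calling out",
          "leaving seat", "calling out", "off-task talking"]

# Tally only events that match the agreed-upon checklist.
tallies = Counter(e for e in events if e in CHECKLIST)
for behavior, count in tallies.most_common():
    print(f"{behavior}: {count}")
```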
II. Correlational Designs
We've seen these too! Just as in the case of "descriptive" designs, these "link" to the keywords of "association," "relationship," and/or "predictive ability" that we've come to associate with "correlational" research questions or problem statements!
III. Group Comparisons
We've briefly talked about "experiments" generally, in terms of "key features" such as the following:
- tight control (the researcher attempts to identify in advance as many possible 'contaminating' and/or confounding variables as possible and to control for them in his/her design -- by, say, building them in and balancing on them -- equal numbers of boys and girls to 'control for gender' -- or 'randomizing them away' by drawing a random sample of subjects and thereby 'getting a good mix' on them -- e.g., all levels of 'socioeconomic status')
- because of the preceding control, the 'confidence' to make 'cause/effect statements'
That is, we begin to get the idea of 2 or more groups,
as balanced and equivalent as possible on all but one "thing:" our
"treatment" (e.g., type of lesson, type of counseling). We
measure them before and after this treatment and if we do find a difference in
the group that 'got the treatment,' we hope to attribute that difference to the
treatment only (because of this tight control, randomization, and so
forth).
Now ... there are actually two "sub-types" of experimental designs. Plainly put, they have to do with how much 'control' or 'power' you as the researcher have to do the above randomization and grouping!
- True experimental - If you can BOTH randomly draw (select) individuals for your study AND then randomly assign these individuals to 2 or more groups (e.g., 'you have the power to make the groups' yourself!), then you have what is known as a 'true experiment.'
In the preceding scenario, the researcher first:
- Randomly selected subjects A through F from the larger population; AND
- Then randomly assigned these individuals to (experimenter-formed) groups. In our example, by coin-flipping or some other random procedure, Subjects A, D & E "landed" in the control group (e.g., the class that will get the traditional lecture), while Subjects B, C, & F "landed" in the experimental or treatment group (e.g., the researcher-formed class that will get the hands-on science instruction, say).
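As a sketch of those two randomization steps in code (the subject pool and group sizes are hypothetical, not from a real study):

```python
import random

# Step 1: randomly SELECT individuals from the larger population.
population = [f"Subject-{i:03d}" for i in range(1, 201)]
chosen = random.sample(population, 6)

# Step 2: randomly ASSIGN the selected individuals to the two
# researcher-made groups -- shuffle, then split.
random.shuffle(chosen)
control = chosen[:3]    # e.g., traditional lecture
treatment = chosen[3:]  # e.g., hands-on science instruction

print("control:", control)
print("treatment:", treatment)
```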
The two levels of "randomization" help to
ensure good control of those pesky contaminating or confounding variables,
don't they?! You're more likely to get a "good mix" on all those other
factors when you can randomly draw your subjects and also randomly assign them
to groups that you as the researcher have the "power" to form!
Ah...but ivory-tower research is one thing; real
life quite another ... !
What if you get the OK to do your research within a
school district, but the sup't. says, "Oh no! I can't let you be
disrupting our bureaucratic organization here and "making your own 4th
grade classrooms" for your study! That's way too disruptive! No, no, the
best you can do is to randomly select INTACT existing 4th grade classrooms and
then go ahead and use all the kids in those randomly drawn GROUPS
instead!"
Which brings us to the 2nd variant of
"experimental designs:"
- Quasi-experimental - what you are 'randomly drawing' (selecting) is NOT INDIVIDUALS but INTACT (pre-existing) GROUPS! These could be existing classrooms, clinics, vocational education centers, etc. In other words, you "lose" the power to "make your own groups" for your study!
Here (for the quasi-experiment), you randomly draw
intact groups (e.g., from all the 4th grades in the district, you draw 4 of
them at random) and then flip a coin or use some other random procedure to
assign the pre-existing 4th grades to either the "treatment" or
"control" conditions. (In our example Grades A and C "land"
in the traditional lecture method (control), while Grades B and D end up in the
hands-on science instruction (e.g., the "treatment" or the
"experimental" group).
Do you see how this is different from the
"true" experiment? In the "true" experiment, you selected
the children themselves (subjects) at random and then "had the power"
to in essence "form" your own "4th grades" by assigning the
individual kids themselves randomly to either the control or the experimental
conditions.
Here, though, the 'best you can do' (again, often
for practical reasons such as access to sites, permission, etc.) is draw not
individual kids but the GROUPS themselves (pre-existing 4th grade classrooms)
at random and then in step # 2 assign NOT the INDIVIDUAL KIDS but rather the
WHOLE GROUPS to either the treatment or control conditions.
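In code, the quasi-experimental difference is only in WHAT gets randomized: intact classrooms rather than individual kids. A sketch under the same hypothetical setup as above:

```python
import random

# Pre-existing, intact groups -- we cannot form our own classes.
classrooms = ["4th-Grade-A", "4th-Grade-B", "4th-Grade-C",
              "4th-Grade-D", "4th-Grade-E", "4th-Grade-F"]

# Step 1: randomly draw INTACT classrooms (not individual students).
drawn = random.sample(classrooms, 4)

# Step 2: randomly assign WHOLE classrooms to conditions.
random.shuffle(drawn)
control_rooms = drawn[:2]    # traditional lecture
treatment_rooms = drawn[2:]  # hands-on science instruction

print("control:", control_rooms)
print("treatment:", treatment_rooms)
```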
P.S. Do you see how this one-step loss of
randomization may mean a bit less control over those pesky contaminants?!
By forming your own groups you have a greater likelihood of "getting a
good mix on all other stuff". But here, you've got to "live with the
existing groups as is." And suppose that in the above scenario, 4th Grades
B & D also happen (quite by accident, but welcome to 'real life!') to have
a higher average I.Q. of 15 points than A & C! Now we've got a contaminant!
Did the kids do better because of the hands-on science lesson -- or because of
their inherently higher aptitude, intelligence or whatever?!
But at least we still have that last step: random
assignment to either the experimental or control conditions!
Remember ... again ...
- For true experiments, we're randomly assigning individuals to treatment vs. control; and
- For quasi-experiments, we're randomly assigning intact/pre-existing groups to treatment vs. control.
Well -- we lose that "random assignment"
property in the 3rd "family" of group comparison design
methodologies!
- Ex post facto (also called "causal comparative") - really no 'random anything!' We identify some sort of outcome and wonder 'what makes it vary like that?' Could it be some pre-existing grouping? For instance, if we 'divided' or 'pile-sorted' the responses by gender, would that account for the difference we see?
Thus, there is no treatment either! Simply
an attempt to see if a grouping that we had no prior control over seems to
"make a difference" on some outcome(s)!
The keyword "difference" (by grouping)
and no treatment would be the tip-off to an ex post facto or causal-comparative
study design.
And -- regarding the grouping -- maybe this rather
silly example will make the point! And help you to identify if you are in such
a situation of "no-control-over-grouping:"
You wish to study whether preschoolers from
single-parent homes are different in terms of emotional readiness for
kindergarten than those of two-parent homes.
Now ... you couldn't go to prospective subjects'
homes and say, "OK, now you've got to get divorced ... and YOU have to
stay married ... 'cuz that's how you came up in the random assignment!"
I don't think so ... !!! Same thing with
"gender:" you took it "as is" (e.g., those subjects in
essence 'self-selected into their gender grouping). You had no prior control
over 'making' them 'be' one gender or the other but rather took those groups
'as is' and kind of pile-sorted some response(s) by gender to see if it 'made a
difference' on some outcome! Indeed ... the literal Latin translation of
"ex post facto" is "after the fact." This shows
YOUR role in the 'grouping' process as the researcher! You didn't 'assign'
them into any one group, randomly or otherwise. Instead, you came in
"after the fact" and wished to see if that self-determined grouping
made a difference on some outcome(s) that you are studying!
As you can imagine -- even bigger problems with
contaminating variables! There is no randomization or control here!
Thus the name "causal comparative"
is sort of a misnomer. You are indeed "comparing" two or more
"pre-formed" groups on some outcome(s). But due to that lack
of randomization and control, you can't really use this design to study
"cause/effect" types of research questions or problem statements.
There are generally too many uncontrolled, unrandomized contaminating
variables that may have entered the picture to confidently make 'strong'
cause/effect statements!
Nonetheless, given the circumstances, this type of
design might be "the best you can do." Group differences on some
outcome(s) might indeed be interesting to study even though you had little or
no "control" in the situation.
To summarize, for the "group comparison" family of designs:

Kind of Study | Method of Forming Groups
Ex Post Facto (Causal Comparative) | Groups taken "as is" (pre-formed; no random assignment)
True Experiment | Random assignment of individuals to "researcher-made" groups
Quasi-Experiment | Random assignment of intact (pre-existing) groups

Next time we'll look at some terminology for the "qualitative" branch of design families!
It Starts with a Question.
Ah, the classic question! "What is research?!"
- In education, as in all other topic areas, the key thing to remember is: IT ALL STARTS WITH A QUESTION! (need to know, curiosity, etc., etc.!)
- if it is in a question form, we call it a research question: e.g., "What is the relationship between motivation to teach and satisfaction level as a first-year teacher?"
- if it is in a declarative sentence form, we call it a problem statement: e.g., "This study is to determine the relationship between motivation to teach and satisfaction level as a first-year teacher."
- I consider the above two forms to be EQUIVALENT and leave it up to YOU as to which way you'd prefer to state your "curiosity." But some professors (and particularly, your dissertation chair) may have a preference as to one form or the other. Guess the moral is: "Know thy audience (and act accordingly)!"
- P.S. As a result of my own non-preference, I'll go ahead & use the terms interchangeably; e.g., "answering your research question(s)," vs. "addressing your problem statement." Remember -- the ONLY difference is in the sentence structure! (question: interrogative sentence; statement: declarative sentence)
- I've asked this "loaded question" in the past as to "what is research? what is its driving force?" And guess what answer I typically get: STATISTICS or DATA ANALYSIS!!!
- It's admittedly natural to give more weight to the "scariest" or "most complicated" part! BUT -- the statistics, as in all other parts of the research process, all centers around the RESEARCH QUESTION OR PROBLEM STATEMENT!!!!
These, then, would be the key steps in the research process:
- Identify your research question or problem statement.
This can come from:
- Something in prior research that piques your interest;
- A need to know based on practice (e.g., you observe a problem at work and wish to understand its causes better; and/or need to develop a solution to the problem)
- "Just because" curiosity about something!
- Specify the related research design (the "blueprint" or "plan of attack") that you'll use to obtain the answer(s) to your research question(s)/problem statement.
We'll learn that these research designs come in "families," some of which "cleanly link" to given research questions or problem statements.
Read about the major elements of a research design from another instructor at Cornell University.
- Population and Sample: the WHO of your study (the population being "to whom do you wish to project or generalize your findings?" and the sample being "the subjects you actually observe, interview, send surveys to, etc., etc., or otherwise 'study' to get an answer to your question")
For practical purposes, we'll see later on that it might not be too feasible, time- and/or cost-wise to personally study EVERYONE to whom we wish to project or generalize! The task, then, will be to select or "draw" a smaller subset of subjects to actually "use" in our study. This is called a sample. We'll be learning various ways to draw a sample, as well as the relative tradeoffs of each different method.
Also -- please notice that I said "WHO" when it came to population and sample. These don't HAVE to be PERSONS (although they usually are: e.g., "all 4th-grade special education students enrolled in Arizona public schools for the 1993-94 academic year"); they CAN be THINGS (e.g., "all related special education curricula being used for/with these students"). In this case, we can say "WHAT" instead of "WHO." But since in the majority of "real-life" cases we are dealing with PERSONS instead of THINGS, I'll use "who" and "subjects" for population and sample references. And it'll be understood that these CAN be THINGS too!
- The "Instrumentation" or
"Sources of Information" -- e.g., your "hands-on
tools" for obtaining information needed for and about the
population and sample in order to answer your research question(s)!
"Instrumentation" is any such tool involving "live and in-person" collection of information. Some examples are as follows: - Mass-mailed rating scale surveys:
- Surveys with open-ended, fill-in-the-blank items;
- Questions about background and purchasing habits asked of subjects in a telephone interview;
- Open-ended questions about people's attitudes, feelings, likes and dislikes asked of 6-12 subjects in a relaxed setting for about one hour (this is called a "focus group interview");
- The same types of open-ended attitudinal questions asked of subjects one by one, either in person or by telephone (this is called an "individual interview");
- Your log book of notes of your observations taken of discipline methods used by a teacher in a primary grade classroom.
A Discussion of Survey Research
There are many other examples. Do you see how, in
each of the above cases, it involves "live and in person" collection
of data -- even if, as in the case of the mass-mailed surveys, you may never
actually meet the subjects? But it's still a "live" person giving you
the answers (hopefully, anyway ... !!!).
"Sources of Information," in contrast, involve getting your data from EXISTING sources -- e.g., what we call "archival information." The data already exist and you are locating, identifying and 'pulling from' these sources to fit YOUR research needs. Just a few examples of such sources of information are as follows:
"Sources of Information," in contrast, involve getting your data from EXISTING sources -- e.g., what we call "archival information." The data already exist and you are locating, identifying and 'pulling from' these sources to fit YOUR research needs. Just a few examples of such sources of information are as follows:
- Pulling off the 4th grade ITBS scores in reading and math for the last 5 years from existing computerized databases in the school district office;
- Obtaining diaries written and kept by an individual who may be deceased but who is the focus of your area of interest -- and reading and selectively making notes and pulling quotations from these diaries;
- Obtaining policies on hiring and firing of school district classified staff from three preselected district offices -- and again, selectively 'reading and pulling' from these the information that you need to answer your particular research question(s).
Do you see how, in the above examples, the
data/information/records, etc., ALREADY EXISTED (e.g., YOU weren't the
'original compiler') and may in fact have been created for totally different
purposes at the time? But now you are needing to locate and use these sources
to address your own particular, unique problem statement or needs to know.
Identify the Research Design Methodology
Just to remind you, these include:
- Descriptive
- Survey
- Observational
- Correlational
- True Experimental
- Quasi-Experimental
- Ex Post Facto (Causal Comparative)
For each of the following scenarios, please identify any/all research design methodology labels which might apply to the particular study.
1. The researchers hypothesized that peer evaluation as part of the writing process would lead to improved attitude toward writing and increased fluency in a sample of ninth grade students. Seven (7) intact classrooms taught by three (3) different teachers were randomly assigned to treatment and comparison groups so that each teacher had one class in each condition. Both groups wrote a first draft of a paper. The treatment group received peer evaluation training and rewrote their papers based upon assistance from their peer evaluation group. The control group rewrote their papers receiving assistance from the teacher only when they requested help. The subjects responded to two (2) attitude instruments as pretest and posttest measures. A significant increase in positive attitudes toward writing was observed for the treatment group. Writing fluency was measured by a count of words on pre- and post-treatment drafts. There was a decrease in word count from the first to the last draft for the treatment group.
2. This study examined factors which predict performance on the National Teacher Examinations (NTE) Core Battery. The researchers found strong relationships between a student's undergraduate grade-point average (GPA), American College Test (ACT) subtests, and the NTE Core Battery tests.
3. This study was intended to identify high school students' attitudes toward school policies and practices. Subjects were given a rating scale instrument listing specific school policies and asked to rate each one on a 5-point scale ranging from "strongly disagree" to "strongly agree." A typical sample item being rated is as follows: "Students in my school are given enough responsibility in establishing rules of conduct."
4. This study was intended to identify the types and effectiveness of various forms of positive teacher reinforcement. Teams of researchers developed checklists of such positive behaviors and recorded types and frequency of occurrence in a sample of classroom sessions they attended.
5. The purpose of the study was to determine if Method A of teaching reading would produce superior results in terms of reading comprehension than Method B. One thousand (1,000) second graders were randomly chosen to participate in the study. These second graders were then randomly assigned to either Method A or Method B. Both groups were given a baseline pretest of reading comprehension, as well as the same posttest.
6. A researcher is interested in finding out if children who have attended nursery school perform better in reading in the first grade than those who have not attended nursery school. He/she compares the mean reading scores of both groups.
7. The purpose of this study was to determine if selected students whose homes were called, using a computer-assisted telecommunications device, on days when they were not in school would show an expected difference in school attendance, as compared with selected students whose homes were not called. One hundred and fifty (150) students were chosen at random at the beginning of the school year to serve as the baseline group. No calls were made to the students who were absent in this baseline group. The same random selection procedure was followed for selecting the one hundred and fifty (150) students in the "treatment" group. For this group, each of the students was called at the end of the day(s) for which he/she was absent from school, using the automatic dialing device.
HINT: Think about 'researcher's power to 'form' groups here! This one is a bit 'subtle' in that regard!
8. This study was intended to determine how much knowledge of world geography children have in the third grade. To address this problem statement, students in the sample were administered a paper-and-pencil questionnaire containing basic geography concepts.
9. A teachers' union wishes to determine whether there is a difference between elementary and secondary teachers in their propensity to run for and assume office. This information (numbers of elementary and secondary teachers who have run for, and/or assumed, office) is obtained from the school district office records, and appropriate statistical tests of between-group difference are run on these data.
10. The same teachers' union also wishes to determine if there is a relationship between the number of union meetings that teachers attend, and the length (in years) that they have been members of the union.
Part 2
Onward to Part II, research fans! The last time, we looked
at some families of research design methodology. These could be quantitative,
qualitative or both ("multimethod") in nature.
There is some unique, & also rather new, terminology for certain qualitative designs. That's what we'll briefly look at here!
First of all, I'd like to introduce the concept of a "case study design." This term may, or may not, apply to a given qualitative study.
Warning ... ya' gotta be a lil' bit 'orange' in True Colors personality lingo to love the following definition ... but 'tis true!
According to qualitative research expert Sharan Merriam and others, "a 'case' is whatever you define it to be!"
Yup ... that could mean:
- a place or situation (this is how we usually think of a "case": e.g., School Y, Town Z, Clinic X);
- a particular program, policy or procedure (for instance, a curriculum; a set of rulings on hiring/firing teachers; a method of computer-assisted instruction; a counseling procedure to be applied with anorexic teen girls);
- an individual (this might be the case for an intensive study of one person -- either historically -- collecting all records, writings, second-person account interviews about him/her, say -- or currently. You can even do a study with one individual in a sort of 'experimental' sense! This is called a 'single-subject design!')
Now ... once you have your 'case' operationally defined, you can further characterize your qualitative case study design along one of two dimensions ... therefore, if we can picture 'crossing' these two dimensions, your case study will probably fall into one of the following four cells of Table 1 below. This terminology comes from a superstar qualitative research author named Robert K. Yin -- those of you going on to take qualitative research with me will definitely be hearing more about him! Before we proceed to cycle through this table and give examples of each of the four possible "combinations" of types of qualitative case study designs, we need to jump ahead just a bit and introduce some sampling terminology that we'll revisit next time when we talk about sampling procedures!
This is the concept of a "stratified sample" and "strata" generally.
Quite often, it may be the case that we go ahead and draw a simple random sample of subjects (again, these don't have to be 'persons', but since for most of us they will be, I'll go ahead and use 'personal referents' when discussing populations and samples!) BUT at the same time we realize that we may not really have one "alike to one another" sample ("homogeneous") -- but rather subgroups within that sample that are more similar to one another than to other subgroups!
Table 1. Four Possible Combinations of Labels of Qualitative Case Study Designs, broken down (or "stratified") by: 1) Number of Cases; 2) Number of Subgroups ("Strata") of Subjects

No. of Cases / No. of "Strata" of Subjects | One Case | More than One Case
One "stratum" of subjects (pooled across all subjects) | Single Case, Holistic Design | Multiple Case, Holistic Design
More than one "stratum" of subjects (subjects further subdivided into subgroups) | Single Case, Embedded Design | Multiple Case, Embedded Design
Example (continuing the "stratum" discussion): suppose we draw our sample but realize that men may differ from women with respect to what we're studying. We don't really have one "homogeneous" or similar entire sample pool, but rather subgroups (in this case, we have "stratified by gender"). Women might be "more alike" to one another than they would be to men with regard to whatever we're studying (e.g., attitudes).
In addition, by choosing to "stratify" we recognize that if the N's or frequency counts in the various strata are vastly unequal to begin with (say, 9 men to 1 woman -- yeah, RIGHT!!!), then if we pool across them & draw a simple random sample we might accidentally "undersample" the women (since their numbers were so relatively low to begin with). So by stratifying and then drawing "proportionally" and at random from within each 'stratum,' we are building in some add'l. 'insurance' that we'll 'fairly represent' each stratum in the overall sample drawn.
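Here is a minimal sketch of "stratify, then draw proportionally and at random from within each stratum" (the 9-to-1 gender split mirrors the tongue-in-cheek example above; all names are invented):

```python
import random

# An unbalanced sampling frame: 90 men, 10 women.
frame = {
    "men":   [f"M{i:02d}" for i in range(90)],
    "women": [f"W{i:02d}" for i in range(10)],
}
total = sum(len(members) for members in frame.values())
sample_size = 20

# Draw from each stratum in proportion to its share of the frame,
# so a small stratum (the women) cannot be accidentally undersampled.
sample = []
for stratum, members in frame.items():
    k = round(sample_size * len(members) / total)
    sample.extend(random.sample(members, k))

print(sample)  # 18 men and 2 women -- each stratum fairly represented
```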
P.S. Which of the design families we talked about last time (Families of Research Design, Part I) seems intrinsically linked to such stratification? (pre-breaking into subgroups and then looking for between-group differences)?
Hope you said "ex post facto" (also called "causal comparative!") At least it's true in the above case!
Subjects already "came assigned" to their gender; and
You sought to determine if the gender subgrouping "makes a difference" on some outcome(s) or dependent variable(s).
To recap (and again, we'll revisit this issue soon when we get into "Sampling Procedures"):
"Stratifying" - breaking down the total population and/or sample into subgroups.
Singular form: "stratum" & plural form: "strata"
But back to our four qualitative case-study design possibilities!
* Please note the two dimensions along which we're classifying these qualitative case studies. That is:
- whether we have one case (again, however we've defined our case!), vs. more than one! AND
- whether we have chosen to stratify our sample -- vs. pooling across all subjects.
To illustrate, suppose my goal as a researcher is to evaluate the perceived impact of the Arizona Career Ladder (CL) Teacher Incentive and Development Program on teachers' motivation and satisfaction levels.
My "case" will be defined as the school district in which the CL Program is being offered.
CELL #1: Single case (*** AND IF YOU CHOOSE TO DEFINE YOUR 'CASE' AS A 'PLACE' OR 'SETTING,' YOU MAY ALSO SAY 'SITE' FOR 'CASE'!), holistic design:
I travel to the Peoria School District (ONE CASE/SITE) where the CL program is currently being implemented and interview a small select sample of "teachers who are currently on the plan" (ONE INTACT, HOMOGENEOUS SAMPLE -- NOT BROKEN DOWN FURTHER, AS YOU'LL SEE FOR COMPARISON'S SAKE IN CELLS #3 AND #4!)
CELL #2: Multiple case (or in this instance, "site"), holistic design:
The only difference between this and Case # 1 is that I'll also want to cross-check vs. the possibility that there could be 'setting' or 'situational' factors -- e.g., how much of what I hear could be due to Peoria, vs. the CL in general?! So I ALSO travel to Tucson and Window Rock (perhaps these sites were chosen at random; or judgmentally, to balance on some needed factors like rural vs. urban) BUT still interview the same 'overall' sample subjects: That is, "Teachers currently on the CL plan!"
More than one site/case: multiple case AND
Still a "pooled" sample at each site: holistic
Cell #3: Multiple case (site), embedded:
Still the same three districts (cases; sites) mentioned above -- BUT suppose now that I realize "yrs. of teacher experience" with the CL incentive program might "make a difference" in terms of teachers' attitudes. Simply put: would "newbies" be likely to hold different attitudes than "old-timers?"
So -- I choose to stratify my interview subject pool as follows:
Now: Still multiple site/case: the 3 school districts; AND
Embedded (not just one "homogeneous" sample pool, but stratified or broken down to see if the stratification factor "makes a difference" with respect to what we're studying, e.g., teachers' attitudes)
Cell # 4: Single-case (site), embedded design:
Same stratification by "yrs. of experience with CL": embedded; but
Back to only doing the study (teacher interviews) within a single case (just in Peoria for now -- perhaps the researcher him/herself, or someone else, will 'replicate and extend' to other school districts in a follow-up study -- by the way, Robert Yin calls this "cross-case analysis")
***: And there you have the qualitative family design possibilities!
What considerable mental RAM-power you now have, my research superstars! I salute you!!!
Cycle Through Using Qualitative Terminology
Using a research idea/situation of your own choosing, can you illustrate the four possibilities of qualitative design terminology shown in Table 1 of the lesson by 'cycling through' and varying the 'tale' the way I did for the Career Ladder attitudinal interviews example? I'll leave the 'scenario' to your group to produce!
Please don't forget - this will involve the following steps:
- (Of course!) Identifying an overall problem statement or research question! And as part of it ...
- Identifying and 'operationally defining' your case (so that it is clear what it constitutes);
- Identifying a target population/sample;
- Also identifying (for the "embedded" 2nd line of Table 1), some 'basis for stratification' of this target population/sample;
- Finally, making sure there is a qualitative component to your study. Doesn't have to be individual interviews: qualitative means, in the overall sense of the term, "collecting data in words." Just some add'l. examples of qualitative data collection procedures are:
- individual interviews;
- small-group (we'll learn these are called "focus group" in qualitative) interviews;
- open-ended written responses to mail-out surveys;
- observation logs, diaries, etc., where 'what's recorded' consists of words;
- archival data, such as existing documents, policies, letters, memos, etc., from which you 'selectively cull' content that pertains to your particular study/problem statement.
And then please identify how, by changing 'part(s) of your
tale,' you can make one example of each of the four (4) combinations of
qualitative design methodology labels as illustrated in Table 1.