IRPP Survey on Courts
and the Charter
Technical
Documentation
November 2000
All research based on these data must include an acknowledgement such as the following:
Data are from the “IRPP Survey on Courts and the Charter,” which was commissioned by the Institute for Research on Public Policy and conducted by Opinion Search under the direction of Joseph F. Fletcher and Paul Howe. Neither the original investigators nor the sponsoring organization is responsible for the analyses and interpretations presented here.
The “IRPP Survey on Courts and the Charter” was commissioned by the Institute for Research on Public Policy, a non-partisan, Montreal-based think tank, to investigate Canadian attitudes towards the courts, the Charter of Rights and Freedoms and related issues. The telephone survey of 1,005 Canadians, conducted using computer-assisted telephone interviewing (CATI) equipment, was carried out by Opinion Search, an Ottawa-based commercial polling firm, from March 1 to March 20, 1999.
The survey, in part, was designed as a
follow-up to the Charter Project (formally called “Attitudes Towards Civil
Liberties and the Canadian Charter of Rights”), a 1987 study involving surveys
of the general population and various groups of decision makers. Some items
from the general population survey of that earlier study were replicated on the
current survey in order to carry out longitudinal analysis.[1]
The documentation provided here is for the 1999 survey only. Documentation and
the data for the 1987 study can be obtained from the Institute for Social
Research at York University in Toronto.
The sample for the survey was chosen according to a two-stage probability selection process. In the first stage, a household was randomly selected using the “Canada Survey Sampler.”[2] In the second stage, an individual within the household was randomly selected. There was no disproportionate sampling of respondents by region. The target population was all Canadian residents, 18 years of age or older.
A total of 1,005 interviews were completed. However, for reasons of economy, the full questionnaire was administered only to a randomly selected sub-sample of 600 respondents. A description of the procedure used to select this sub-sample is provided below (see “Short and Long Versions of the Questionnaire”).
3. Response Rate
The following table records the final outcome for all telephone numbers in the original sample.
Code | Disposition                | N    | %
20   | COMPLETED INTERVIEW        | 1005 | 14.8
01   | NO ANSWER                  | 676  | 10.0
02   | BUSY                       | 93   | 1.4
03   | ANSWERING MACHINE          | 498  | 7.4
04   | NOT IN SERVICE             | 897  | 13.2
06   | GENERAL CALLBACK           | 488  | 7.2
07   | SPECIFIC CALLBACK          | 27   | 0.4
08   | NQ (MISC.)                 | 107  | 1.6
09   | NQ (BUSINESS #)            | 104  | 1.5
10   | FAX/MODEM                  | 148  | 2.2
12   | DUPLICATE RECORD           | 13   | 0.2
13   | REFUSAL                    | 2307 | 34.1
14   | TERMINATION                | 56   | 0.8
15   | QUOTA CELL FILLED          | 93   | 1.4
16   | WEIRD SAMPLE/WRONG NUMBER  | 51   | 0.8
18   | NQ OUT OF PROVINCE         | 1    | 0.0
19   | LANGUAGE BARRIER           | 209  | 3.1
     | TOTAL                      | 6773 | 100.0
Calculating the response rate simply as the number of completed interviews divided by the number of completed interviews plus the number of refusals produces a response rate of 30% (1005 / (1005+2307)). Other, more conservative methods of calculation would produce lower response rates.
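The arithmetic above can be checked with a short sketch. The "conservative" variant shown is only one illustrative alternative denominator (counting terminations and unresolved callbacks as non-respondents); the study itself reports only the simple rate:

```python
completes = 1005
refusals = 2307

# Simple rate reported in the text: completes / (completes + refusals)
simple_rate = completes / (completes + refusals)
print(f"{simple_rate:.1%}")  # ~30.3%

# An illustrative, more conservative variant that also counts
# terminations and unresolved callbacks as non-respondents.
terminations = 56
callbacks = 488 + 27  # general + specific callbacks
conservative_rate = completes / (completes + refusals + terminations + callbacks)
print(f"{conservative_rate:.1%}")
```

As the sketch shows, any broadening of the denominator pushes the rate below 30%.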
This is a relatively low response rate by academic survey standards. Two measures were undertaken to address this potential problem. First, an analysis of the data from the 1987 Charter Project was conducted by the co-investigators to determine whether there existed any systematic and substantial differences of opinion between easy-to-reach and hard-to-reach respondents. None were found, suggesting that our lower response rate does not bias results. Second, weights were applied to the 1999 data in order to bring the sample in line with population parameters. The most important of these weights were designed to compensate for the under-representation of respondents with low levels of education and the over-representation of respondents with high levels of education in the survey sample. Further details on this procedure are provided in the next section.
4. Weighting
The weighting procedure involved the following steps:

1) calculation of household weights (“hholdwt”)

2) application of the household weights to the 1,005 cases and production of two sets of cross-tabulations with these weights in place: sex by province of residence, and education level by birth year cohort

3) calculation of sex-province weights (“genwt”) and education weights (“edwt”) based on a comparison of the cross-tabulations with population data

4) application of the latter weights to the 1,005 cases in addition to the household weights, i.e. hholdwt * genwt * edwt (to produce “finwtx”)

5) a small adjustment (each value multiplied by 0.997) to ensure the total number of cases, when all weights are in place, remains at 1,005 (“finwt”)

6) a small adjustment (each value multiplied by 0.998) to ensure that the total number of cases for the sub-sample of 600 respondents remains at 600 when all weights are in place (“finwt600”)
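The composition of the final weights in Steps 4 through 6 can be sketched as follows (the function names are ours; 0.997 and 0.998 are the rescaling constants given above):

```python
def finwt(hholdwt, genwt, edwt):
    # Final weight for the full 1,005-case sample (Steps 4 and 5).
    return hholdwt * genwt * edwt * 0.997

def finwt600(hholdwt, genwt, edwt):
    # Final weight for the 600-case sub-sample (Step 6).
    return hholdwt * genwt * edwt * 0.998

# A respondent with neutral component weights keeps (almost) a weight of 1.
print(finwt(1.0, 1.0, 1.0))  # 0.997
```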
The weights that should be used are as follows.
When analyzing the full 1,005 cases from the 1999 survey or drawing comparisons
with the 1987 data, use “finwt”. When analyzing the sub-set of 600 respondents
asked the full set of questions, use “finwt600”.
Further details for Steps 1 and 3 in the weighting procedure are as follows.

4.1 Step 1: Calculation of household weights (“hholdwt”)
It has become standard practice in academic
surveys to apply weights to cases to take into account the number of people
resident in a given household. A modified version of the usual procedure was
used in this case. Respondents were split into two groups: those in households
of one or two and those in households of three or more. The average number of
people in each type of household was calculated (1.71 for small households,
4.02 for large households). These numbers were then used to calculate household
weights in the standard fashion, i.e. as though each person in the small
households represented 1.71 people and each person in the large households represented
4.02 people. Calculations are shown in the following table.
                 | Average no. of people | N   | Weighted N | Adjusted weighted N | Weight (“hholdwt”)
Small households | 1.71                  | 473 | 808.83     | 276.8184            | 0.58524
Large households | 4.02                  | 522 | 2098.44    | 718.1816            | 1.375827
Total            |                       | 995 | 2907.27    | 995                 |
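The table values can be reproduced with a short sketch of the modified household-weighting procedure described above:

```python
avg_size = {"small": 1.71, "large": 4.02}   # average household size per group
n = {"small": 473, "large": 522}            # respondents per group (995 with valid data)

weighted_n = {k: avg_size[k] * n[k] for k in n}   # 808.83 and 2098.44
total_weighted = sum(weighted_n.values())         # 2907.27
total_n = sum(n.values())                         # 995

# Rescale so the weighted case count matches the unweighted total,
# then divide by group size to get the per-respondent weight.
hholdwt = {k: weighted_n[k] * total_n / total_weighted / n[k] for k in n}
print(round(hholdwt["small"], 5), round(hholdwt["large"], 6))  # 0.58524 1.375827
```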
4.2 Step 3: Calculation of sex-province weights (“genwt”) and education weights (“edwt”)
Sex-province weights were calculated in the
standard way. The percentage in the sample falling in each cell, defined by sex
and province of residence, was compared to target percentages based on 1998
Statistics Canada data. Weights were calculated for each cell in the survey
sample to match the target percentages.
The relevant population data are shown in the
following table.
Province | No. of males | No. of females | Male, % | Female, %
NF       | 268332       | 274868         | 0.8857  | 0.9073
PE       | 66865        | 69335          | 0.2207  | 0.2289
NS       | 455397       | 480703         | 1.5031  | 1.5867
NB       | 369949       | 382451         | 1.2211  | 1.2624
PQ       | 3588943      | 3745157        | 11.8462 | 12.3618
ON       | 5576323      | 5828477        | 18.4060 | 19.2383
MB       | 560856       | 580144         | 1.8512  | 1.9149
SK       | 506904       | 518696         | 1.6732  | 1.7121
AB       | 1456584      | 1456816        | 4.8078  | 4.8086
BC       | 1983495      | 2030805        | 6.5470  | 6.7032
YK       | 15880        | 14885          | 0.0524  | 0.0491
NT       | 33380        | 31025          | 0.1102  | 0.1024

Source: Statistics Canada (1998)
The resulting weights (“genwt”) are as follows:
Province | Males  | Females
NF       | 0.9861 | 0.8264
PE       | 0.7372 | 0.7644
NS       | 0.7531 | 0.9937
NB       | 0.7647 | 1.0541
PQ       | 1.0233 | 1.0237
ON       | 1.0246 | 1.0420
MB       | 0.8065 | 0.9137
SK       | 0.9314 | 0.9029
AB       | 0.9831 | 0.9636
BC       | 1.0581 | 1.0495
YK       | 0.5252 | 0.4923
NT       | 1.1040 | 1.0261
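Each cell weight is the target (population) percentage divided by the weighted sample percentage for that cell. The sample percentages are not reproduced in this documentation, so the figures below are hypothetical, for illustration only:

```python
def cell_weight(target_pct, sample_pct):
    # Weight that brings a sample cell in line with its population share.
    return target_pct / sample_pct

# Hypothetical example: a cell holding 8.0% of the weighted sample
# but 10.0% of the population receives a weight of 1.25.
print(cell_weight(10.0, 8.0))  # 1.25
```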
A similar procedure was used for education
weights, though with some important variations. Cells were defined by education
level (finished high school or less, some post-secondary, university degree)
and birth cohort (1946 and earlier, 1947-1958, 1959-1969 and 1970-1981). The
objective was to ensure that the educational distribution within each birth cohort matched certain target percentages.
The reason for applying education weights to
the 1999 data is simply that the educational distribution within the survey
sample does not match population data: those with university education are
over-represented and those with high school or less are under-represented. The
reason why we calculate weights within birth cohorts is that the extent of the
problem varies across birth cohorts (it is greater in the older cohorts).
Therefore, a single set of education weights for all birth cohorts would not be
appropriate.
For the education weights, the target percentages for the younger two cohorts were based on 1996 census data. The target percentages for the older two cohorts were based on cross-tabulations from the 1987 Charter Project, which were quite close to the 1996 census values. The reason for the latter procedure is that it makes birth cohorts from the 1987 and 1999 surveys more directly comparable, by ensuring they are, effectively, of the same educational composition. This facilitates the 1987-1999 comparisons that are an important part of the analysis arising from the current survey. Using this procedure for the younger cohorts would not be appropriate, since the educational composition of those cohorts in the population will likely have changed considerably from 1987 to 1999.
The following tables show the calculations used to produce education weights.

Target percentages (two younger cohorts based on 1996 census data, two older based on 1987 Charter Project data):

Educational Attainment | 1946 and before | 1947-58 | 1959-69 | 1970-81
Elementary or HS       | 0.639           | 0.53    | 0.395   | 0.42
Some Post-Sec          | 0.202           | 0.298   | 0.419   | 0.458
Univ Degree            | 0.159           | 0.172   | 0.186   | 0.122

Actual percentages in 1999 data:

Educational Attainment | 1946 and before | 1947-58 | 1959-69 | 1970-81
Elementary or HS       | 0.444           | 0.387   | 0.32    | 0.442
Some Post-Sec          | 0.275           | 0.374   | 0.411   | 0.421
Univ Degree            | 0.28            | 0.238   | 0.269   | 0.137

Weights:

Educational Attainment | 1946 and before | 1947-58  | 1959-69  | 1970-81
Elementary or HS       | 1.439189        | 1.369509 | 1.234375 | 0.950226
Some Post-Sec          | 0.734545        | 0.796791 | 1.019465 | 1.087886
Univ Degree            | 0.567857        | 0.722689 | 0.69145  | 0.890511
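The weights in the third table are simply the target percentages divided by the actual 1999 sample percentages, cell by cell. A sketch for two cells of the oldest cohort (the cell labels are ours):

```python
# Target and actual shares for the pre-1947 birth cohort, from the tables above.
target = {("Elem/HS", "pre-1947"): 0.639, ("Univ", "pre-1947"): 0.159}
actual = {("Elem/HS", "pre-1947"): 0.444, ("Univ", "pre-1947"): 0.280}

# Education weight = target share / actual share, per cell.
edwt = {cell: target[cell] / actual[cell] for cell in target}
print(round(edwt[("Elem/HS", "pre-1947")], 6))  # 1.439189
```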
5. Short and Long Versions of the Questionnaire
As noted above, the full questionnaire was administered to a sub-sample of 600 respondents. For the other 405 respondents, a shorter version of the questionnaire was used. This was achieved by assigning a random number between 1 and 1000 to each respondent (RAN1). Respondents with values less than or equal to 650 were administered the full questionnaire, while those with values greater than 650 were administered the shorter version. (The exception to this rule is those respondents interviewed in the first two days of interviewing, March 1-2, 1999. On those days, only those respondents with values less than or equal to 500 were administered the long version of the questionnaire.)
A second decision rule was also used to determine which version of the questionnaire would be administered. The length of time required to administer the first 16 questions on the survey, common to both the short and long versions of the questionnaire, was recorded as variable DRTS1. If this time exceeded 480 seconds (8 minutes), respondents were automatically administered the short version of the questionnaire. Of the 1,005 respondents, 56 exceeded the 8-minute limit; of these 56, 31 would have been administered the long version of the questionnaire based on the value of RAN1. In short, the sub-sample of 600 administered the long version of the questionnaire was essentially randomly selected, except for a small group of 31 respondents excluded based on DRTS1.
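The two decision rules can be summarized in a short sketch (the function and argument names are ours; only RAN1 and DRTS1 are variables in the dataset):

```python
def version(ran1, drts1, first_two_days=False):
    # Rule 2: if the first 16 questions took over 480 seconds (8 minutes),
    # the short version is administered regardless of RAN1.
    if drts1 > 480:
        return "short"
    # Rule 1: cutoff of 650 on RAN1 (500 on the first two days of interviewing).
    cutoff = 500 if first_two_days else 650
    return "long" if ran1 <= cutoff else "short"

print(version(ran1=600, drts1=300))                       # long
print(version(ran1=600, drts1=500))                       # short
print(version(ran1=600, drts1=300, first_two_days=True))  # short
```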
6. Items Employing Random Numbers
Some items on the survey employ random numbers to determine question wording or order. Most of these replicate the procedures used on the 1987 Charter Project. The items using random numbers are as follows:
Question wording: Q8 (version determined by RAN2)
Q30 (version determined by RAN5 - only for those answering “don’t know” to Q29)
Question order: Q13-Q14 (order determined by RAN3)
Q23-Q24 (order determined by RAN4)
Q36-Q37 (order determined by RAN6)
7. Additional Variables
The dataset contains several variables, primarily used for project management purposes, that are not described in the questionnaires. They are:
Interv: the number used to identify interviewers
Lang: the language in which the interview was conducted (1=English, 2=French)
Date: the date on which the interview took place
Dayweek: the day of the week on which the interview took place
Durats: the duration of the interview in seconds
Duratm: the duration of the interview in minutes
8. Further Information
Further information on the study procedures can be obtained from Paul Howe, Research Director, Institute for Research on Public Policy, Montreal, Quebec, H3A 1T1. Phone: 514-985-2461. E-mail: phowe@irpp.org
[1] Some analysis carried out to date can be found in Joseph F. Fletcher and Paul Howe, “Canadian Attitudes toward the Charter and the Courts in Comparative Perspective” and “Supreme Court Cases and Court Support: The State of Canadian Public Opinion,” Choices, Vol. 6, no. 3 (May 2000).
[2] The Canada Survey Sampler (CSS) maintains a comprehensive list of all populated telephone exchanges across Canada, which is updated on a regular basis. In general, it works by randomly generating 4-digit suffixes for these exchanges. The suffixes are generated in proportion to the populated percentage of each exchange, i.e. a 90% populated exchange would receive twice as many ‘hits’ as a 45% populated exchange. As each suffix is generated, it is compared to the database of existing, known phone numbers. If it matches a listed phone number, it is placed in the ‘valid number’ file; if not, it is placed in the ‘orphan’ file. The valid number file is used as the primary calling list. This is then supplemented with numbers from the orphan list; as with the random generation above, numbers are chosen from the orphan list in proportion to the populated percentage of the exchanges.
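The generation logic described in this footnote can be sketched as follows (the exchange codes, populated percentages, and known-number set are hypothetical, for illustration only):

```python
import random

# Hypothetical exchanges with their populated percentages.
exchanges = [("613555", 90), ("613556", 45)]
known_numbers = {"6135550000", "6135561234"}  # hypothetical listed numbers

def generate(rng):
    # Pick an exchange in proportion to its populated percentage...
    prefix = rng.choices([e for e, _ in exchanges],
                         weights=[w for _, w in exchanges])[0]
    # ...then append a randomly generated 4-digit suffix.
    number = prefix + f"{rng.randint(0, 9999):04d}"
    # Route the result to the 'valid number' or 'orphan' file.
    bucket = "valid" if number in known_numbers else "orphan"
    return bucket, number

bucket, number = generate(random.Random(0))
print(bucket, number)
```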