THIS INFORMATION IS FOR INTERNAL USE ONLY. PLEASE CONTACT THE APPROPRIATE OFFICE FOR GENERAL SUPPORT.

Survey and Evaluation Design Tips

Evaluations and surveys are meant to be quick but meaningful. Most questions have response options already available for the participant to choose from. This makes it easier for the participant to complete the evaluation/survey, which increases your chances of getting good responses and decreases the amount of time required later for data analysis.

Survey/evaluation questions should be:
  • Connected to the purpose of the survey
  • Clear and succinct
  • Relevant to the participant
  • Useful for making change
  • Realistic for the participant to answer
They should not:
  • Request information that is already available (e.g., age, gender, race/ethnicity, major, satisfaction with overall college experience, willingness to recommend CWRU to others, etc.)
  • Be redundant of other questions
  • Be a combination of two questions in one (e.g., I am satisfied with my college degree and hopeful that I will graduate on time)
  • Ask for information you don't want to know or aren't willing to commit the time to analyze and report on
Keep it short and simple

There should be no more than 3-4 questions with comment boxes, and most of these questions should be located at the end of the assessment. If you have more than 3-4 comment boxes, you may need to consider other methodologies (e.g., interviews). Before doing that, try replacing the comment boxes with a list of responses the participants might provide and then add an "Other (please specify):" option at the end of the list, if needed.

Be specific

Some assessments are designed around themes (e.g. a set of programs, services, facilities, opinions, behaviors, attitudes, interests). These assessments typically start with a question related to the theme (e.g., list of programs they have participated in) and then use skip patterns to direct the participant to more specific questions about the areas they marked.

Evaluating a process

Some assessments are designed to evaluate a particular process (e.g., student conduct process, Title IX process, budget request process) or experience (e.g., an event). These assessments are typically easier for participants to complete if the questions follow the order of the person's experience from start to finish.

Designing your own questions

If you decide to design your own questions, try using the process below:

  1. Brainstorm all of the statements you would or would not want the participant to say about a program, service, facility, etc. For example, we'd never want a student to feel isolated on campus. We could measure the extent to which students feel isolated by using the statement, "I feel connected to others on campus" or "I feel isolated from others on campus" with either an agreement or frequency scale.
  2. Review the questions; keep the ones that are most important and discard the rest.
  3. Attempt to group the questions together.
  4. Work on finding the best response options.
  5. Create headings for the groups.
  6. Finalize the order of the sections. (If done well, the order of the sections can serve as the outline for presentations, handouts, and reports)
Reverse coding

Reverse coding can be useful in catching people who stop reading questions and mark the same response for every question. Reverse coding is simply changing the wording of a few questions from positive to negative or vice versa. For example, if all of the questions are written from a positive perspective, reverse coding means changing "I have friends on campus" to "I don't have any friends on campus." Doing this occasionally makes the reader pay more attention and shows you how seriously they are taking the questionnaire.
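One practical note: before analysis, responses to reverse-worded items need to be recoded back onto the positively worded direction so all items point the same way. A minimal sketch of that recode for a 5-point scale (the function and variable names here are illustrative, not a prescribed procedure):

```python
def reverse_code(response: int, scale_max: int = 5) -> int:
    """Map a raw response on a reverse-worded item back onto the
    positively worded direction (on a 1-5 scale: 5 -> 1, 4 -> 2, ...)."""
    return (scale_max + 1) - response

# Example: "Strongly Agree" (5) on "I don't have any friends on campus"
# recodes to 1, the same as "Strongly Disagree" on "I have friends on campus."
recoded = [reverse_code(r) for r in [5, 4, 3, 2, 1]]
print(recoded)  # [1, 2, 3, 4, 5]
```

The same formula works for any scale length: on a 4-point scale, pass `scale_max=4` and the recode becomes 5 minus the raw value.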

Sensitive information

Sensitive information includes, but is not limited to, Tier 2 and Tier 3 data (as defined by University Technology) as well as information about conduct, discrimination, illegal activities, personal health/illness, relationships, sexual activity, sexual orientation, stigmatized behaviors/opinions, and traumatic experiences. Careful thought should be given to whether or not these questions should be included in the study. If they are included, there needs to be additional consideration regarding how the participants' information will be protected, stored, and shared. It may be necessary and beneficial to request and obtain approval from the CWRU Institutional Review Board. Contact your assessment representative for more information.

Start with the end in mind

It's best to think about how you will analyze and use the data when you are choosing or designing the assessment instrument. It's never fun to finish collecting data only to find out that the information isn't useful or would be more useful had the question or scale been worded differently. One of the easiest ways to figure out whether the information will be useful is to assign high and low percentages to each question to see what it might look like in the end and determine whether or not the question still makes sense. For example:

  • 30% of students said they enjoy studying together
  • 89% of students said they enjoy studying together
Which response options?

Choosing the right response options can make or break your success in assessment. It's very important that the response options match the stem of the question and will provide the information you need to know. It can be disappointing to finish collecting data only to find out that the options were wrong and therefore the results don't mean much. An easy way to tell if the scale matches the question is to imagine what the result for that question might look like in a report. Here's an example of a stem and scale that match and ones that don't:

  • Match: 40% of students agreed that they want to know more about other cultures.
  • Don't match: 40% of students are satisfied that they want to know more about other cultures.

Tip: Response options are often referred to as scales.

Question Type | Suggested Stem | Scale Type | Suggested Response Options
Quality | How would you rate the following… | Likert | Very Poor, Poor, Fair, Good, Very Good*
Satisfaction | How would you rate your satisfaction with the following: | Likert | Very Dissatisfied, Moderately Dissatisfied, Neutral, Moderately Satisfied, Very Satisfied*
Agreement | To what extent do you agree or disagree with the following: | Likert | Strongly Disagree, Moderately Disagree, Neutral, Moderately Agree, Strongly Agree*
Importance | To what extent do you feel the following are important: | Incremental | Not Important, Slightly Important, Moderately Important, Very Important
Frequency | Over the past (e.g., few weeks), how often have you done the following: | Incremental | Never, Occasionally, Frequently or Never, Rarely, Sometimes, Often, Very Often
Frequency | How many times per (e.g., month) do you do the following: | Incremental | 0, 1-2 times, 3-4 times, 5 or more times

* Note: The Fair or Neutral option may be removed if it doesn't fit the question being asked or if the study coordinator wants more clarity on the participants' true feelings. Removing the Fair/Neutral category may also impact the percentages for the other response options.
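For analysis, each scale's labels are typically mapped to numeric codes once and reused across every question that shares the scale. A brief sketch of that mapping using the agreement scale from the table above (the dictionary and variable names are examples, not a required convention):

```python
# One shared mapping per scale keeps coding consistent across all
# questions that use that scale, which simplifies later comparisons.
AGREEMENT = {
    "Strongly Disagree": 1,
    "Moderately Disagree": 2,
    "Neutral": 3,
    "Moderately Agree": 4,
    "Strongly Agree": 5,
}

# Convert a question's label responses to numeric codes for analysis.
responses = ["Neutral", "Strongly Agree", "Moderately Disagree"]
codes = [AGREEMENT[r] for r in responses]
print(codes)  # [3, 5, 2]
```

Reusing one mapping per scale also makes it easy to spot mismatched labels: a response that isn't in the dictionary raises an error instead of silently coding wrong.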

Be consistent

Use the same scale as much as possible within the instrument (within reason). This makes it easier for the reader to move through the questions, which decreases the survey response time and increases the overall response rate. It also makes it easier to analyze the data and design charts and graphs later on.

Try to use the same scales each time data is collected. This makes it possible to compare results over time. If results will be compared to other institutions, try to use the same scales that the other institutions are using. Review your scales occasionally and make adjustments if they do not fit the question being asked or if the scale needs to be more balanced. If you have been using a bad scale for years, go ahead and make a change. This will also reduce participant frustration and increase the possibility for more responses and the quality of those responses.

When possible, organize the scales so all of the negative responses are on one side and all of the positive responses on the other, consistently throughout. This helps the participant move through the questions quickly and reduces the chances of the participant making a mistake. It will make data analysis easier as well. Some researchers like to switch the scale so participants have to pay more attention; however, this also increases the possibility of mistakes.
