Image by SIMON LEE
by antoniodellomo

Designing Experiments

Today, I would like to share an assignment from “Design Principles,” the second course of the Interaction Design specialization on Coursera. But before diving into the project, I want to emphasize the importance of MOOCs (Massive Open Online Courses). With the rapid pace of technological change and today's job market, continuous learning is more important than ever, and MOOCs are well equipped to serve that need.


That said, here is the project overview: in this assignment, you will be giving advice to a team that is creating a new website. The design team has created the first prototype of the site. Before implementing it fully, they want usability feedback on this early prototype, so they decide to bring several participants into the lab.



1. The team asks you whether they should videotape these sessions. What is your recommendation and why? “Yes, I think it is really helpful to capture all the feedback during the session without the distraction of taking notes. Videotaping is also a great way to go back and review the session as many times as needed. Good start.”

2. The first participant arrives. The facilitator briefs them by explaining, “I’d like to show you a new design that I’ve created. I’d like to see how well you perform with this design.” What are two problems with this introduction? “That doesn’t sound like a great introduction, because the facilitator’s wording puts the participant under pressure in two ways:

  • Since the facilitator presents the design as their own creation, the participant may feel obliged to say positive things about it; people generally don’t want to criticize a design to its creator’s face.

  • By saying “I’d like to see how well you perform,” the facilitator frames the session as a test of the participant: if the product performs poorly, participants will feel it is their fault rather than a design or technical issue.”

3. Rewrite this introduction to avoid those two problems. “Thank you for participating! Today I am pleased to show you this design. We really hope we can make our product better for people like you. There are no right or wrong answers to any of the questions I’ll ask in this study; we’re simply interested in understanding the way people interact with the product. I will ask you to show me how you do things using the product, and I’ll ask questions to better understand what you do. We will record a short video of the session so that I can go back and review things later and make sure we get everything right. We won’t use your name in connection with the recordings or the results, and the videotapes will only be used internally, never shared with anyone. We look forward to hearing your stories and experiences. Thanks for your time.”

4. The experimenter continues, “I’d like to get a sense of what you’re thinking about as you go through this site. As you go through the following tasks, please think aloud. Whatever’s on your mind, share it vocally.” If the experimenter uses this think-aloud protocol, what should they not do? “The experimenter should not interrupt the participant or prompt them with leading questions while they work. Thinking aloud already feels unnatural and distracting to many participants, since under normal circumstances people rarely comment on what they do, so the experimenter should let them speak freely and at most give a neutral reminder to keep talking.”

5. Usability feedback in hand, the development team creates two alternative home pages for the course. They want to see which one encourages more users to sign up. If they compare these two alternatives in a controlled experiment, what is the null hypothesis? “The null hypothesis is that there is no difference in sign-up rate between the two alternative home pages.”

6. One team member suggests that all participants first see one design, and then all participants see the other design. What is the problem with this approach? “In my opinion, fatigue is a potential drawback of this within-subjects design: participants may become exhausted, bored, or simply uninterested after taking part in multiple treatments or tests. There is also a risk of order effects, since experience with the first design can influence how participants react to the second.”

7. The team agrees with you. The developers propose a between-subjects design. Participants who sign up in the AM will be assigned the first condition, those that sign up in the PM will be assigned to the second. What is the problem with this approach? “The problem is that this is not random assignment: participants who sign up in the morning may differ systematically from those who sign up in the afternoon (for example, in schedule, occupation, or motivation), so any difference between conditions could be due to these participant differences rather than to the designs themselves. Between-subjects designs also tend to require a large number of participants to generate useful, analyzable data.”

8. What would you propose that the development team do instead? “The issue with the above approaches is that assignment depends on participant habits and actions, so a good way to avoid that is random assignment: each participant is assigned to a condition by chance. Randomized controlled trials are often the most effective and successful approach.”
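As a small sketch of what random assignment could look like in code (the function name and condition labels are illustrative, not from the assignment):

```python
import random

def assign_condition() -> str:
    # Assign each arriving participant to condition "A" or "B" with a
    # fair coin flip, independent of when they sign up, so time-of-day
    # habits cannot confound the comparison.
    return random.choice(["A", "B"])

# Example: assign ten participants as they arrive.
assignments = [assign_condition() for _ in range(10)]
print(assignments)
```

Because assignment depends only on chance, any systematic difference between morning and afternoon sign-ups is spread evenly across both conditions.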

9. One hundred participants are exposed to each condition. In Design A, 36 participants sign up; in Design B, 24 participants sign up. To help you get started, the expected sign-up rate is 30% ((36+24)/200). You can also refer to critical values for the chi-squared variable here. “df = 1, as there are only two outcomes: sign up or don’t sign up. The expected sign-up rate is 30%, so the expected non-sign-up rate is 70%. The χ² value = (36-30)²/30 + (24-30)²/30 + (64-70)²/70 + (76-70)²/70 ≈ 3.43. Based on the chi-square table, the p-value ≈ 0.06. Since the probability of this result arising by chance is about 6%, which is above the 5% significance level, we cannot reject the null hypothesis that there is no difference in sign-up rate between the two home pages.”
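The arithmetic above can be checked with a few lines of Python. For df = 1, the chi-square p-value has a closed form via the complementary error function, so no external library is needed (variable names here are mine, not the course's):

```python
from math import erfc, sqrt

# Observed counts: Design A 36 sign up / 64 don't; Design B 24 / 76.
observed = [36, 64, 24, 76]
# Pooled 30% sign-up rate applied to both groups of 100.
expected = [30, 70, 30, 70]

# Pearson chi-squared statistic.
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# For df = 1, the survival function of chi-square reduces to erfc.
p = erfc(sqrt(chi2 / 2))

print(f"chi2 = {chi2:.4f}, p = {p:.4f}")  # chi2 ≈ 3.4286, p ≈ 0.064
```

Since p ≈ 0.064 exceeds 0.05, and χ² ≈ 3.43 falls short of the df = 1 critical value of 3.841, both routes lead to the same conclusion: the null hypothesis cannot be rejected.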

10. Imagine that instead, 50 participants were exposed to each condition. In Design A, 18 sign up; in Design B, 12 sign up. The sign-up ratios are the same as in the previous question. Would the p-value increase, decrease, or stay the same, and why? “The p-value would increase. When the proportions stay the same, the χ² statistic scales with the sample size: halving every count halves χ² from about 3.43 to about 1.71, which corresponds to a p-value of roughly 0.19. With a smaller sample, the same observed difference provides weaker evidence against the null hypothesis.”
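Re-running the same calculation with the halved counts confirms the claim (again just a sketch, reusing the pooled-expected-count approach from the previous question; the helper name is mine):

```python
from math import erfc, sqrt

def chi2_p(observed, expected):
    # Pearson chi-squared statistic and its p-value for df = 1.
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return chi2, erfc(sqrt(chi2 / 2))

# 100 participants per group (question 9).
chi2_100, p_100 = chi2_p([36, 64, 24, 76], [30, 70, 30, 70])
# 50 participants per group, same sign-up ratios (question 10).
chi2_50, p_50 = chi2_p([18, 32, 12, 38], [15, 35, 15, 35])

print(f"n=100 per group: chi2 = {chi2_100:.3f}, p = {p_100:.3f}")
print(f"n=50 per group:  chi2 = {chi2_50:.3f}, p = {p_50:.3f}")
```

With the smaller sample, χ² drops to exactly half its previous value and the p-value rises from about 0.06 to about 0.19, well above the 0.05 threshold.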




Photo by David Travis on Unsplash