HomeAway

Using design to influence trust in vacation rental peer-to-peer marketplaces

HomeAway_Screen

OVERVIEW

Peer-to-peer marketplaces (also known as the sharing economy) involve individuals making private resources available to other individuals. Renting out underutilized assets is not a new concept, but new companies and innovative technology have made it far more accessible.

HomeAway is a vacation rental peer-to-peer marketplace with more than 2,000,000 listings in 190 countries. It allows hosts to rent out their available spaces while providing travelers with unique accommodation options.

The User Experience team at HomeAway approached our class with the goal of exploring trust in the context of peer-to-peer marketplaces. This project was completed during the last semester of my graduate degree at the University of Texas at Austin School of Information.

TEAM

Erin Finley - UX Researcher

Monica Cho - UX Researcher

Diana Mendoza - UX Researcher

Sara Merrifield - UX Researcher

COURSE

Advanced Usability

PARTNER

HomeAway

DURATION

January 2019 - May 2019

Problem

What content and design elements influence trust in vacation rental peer-to-peer marketplaces?

The sharing economy has seen rapid growth in recent years, and it is not slowing down. Forbes projects that its global revenues will grow from $15 billion in 2014 to $335 billion in 2025. Peer-to-peer marketplaces are cost-effective, efficient, and even sustainable. However, they also involve a large amount of risk: consumers must rely on brands and peers in the face of uncertainty. Trust is the currency of peer-to-peer marketplaces, and it is important to understand what influences it in the growing sharing economy.

Background

To scope our research to a semester-long project, we chose to investigate trust in vacation rentals specifically, rather than in all peer-to-peer marketplaces. Our goal was to focus on the research strategy and develop a method that could eventually be applied to the sharing economy more broadly.

In peer-to-peer marketplaces, trust has 2 different targets: 1) trust in peers and 2) trust in brand. While we differentiated between these targets, we also recognized that they are not independent. We focused our research on trust in peers, but took into account how trust in brand influences trust in peers. Trust can also be considered from 2 different perspectives: 1) the consumer perspective and 2) the supplier perspective. We chose to focus entirely on the point of view of the consumer because we felt this would give us access to more research participants.

Process

Determining the most effective way to approach this project was challenging. We ultimately decided it was best to let each step inform the next, allowing us to pivot as needed. We divided the semester into two phases, generative research and summative research, and listed potential research strategies for each phase. Rather than having a set research plan, we focused on the idea of diverging and converging throughout the semester. We thought of the generative phase as finding the right problem and the summative phase as finding the right solution.

Timeline

Generative Phase

We began the generative research phase by conducting a literature review, heuristic evaluation, and competitive analysis in order to familiarize ourselves with the topic and with the market. The findings helped to inform the rest of the generative research. We created a screener, recruited participants, and conducted user interviews. After analyzing the data using an affinity diagram, we created a survey to generate quantitative data from a larger sample size to back up our qualitative data.

Literature Review

We selected 14 articles to review that explore online trust in general, trust in the sharing economy, and trust in specific peer-to-peer marketplaces (e.g. Airbnb, Uber, etc.). This generated a long list of content and design elements to research during the generative interviews. Some key findings that influenced our project approach are below.

Literature_Review_1
Literature_Review_2
Literature Review

Heuristic Evaluation

In order to assess the current HomeAway website, we performed a heuristic evaluation of the primary user flows using Nielsen’s 10 usability heuristics. We identified strengths and weaknesses with varying levels of severity. Examples are below.

Competitive Analysis

We evaluated 4 direct competitors and 8 indirect competitors, all peer-to-peer marketplaces but differing in the products and services offered. We examined the target market, cost, design, and perceived trustworthiness, and evaluated the strengths and weaknesses of each.

Competitive_Analysis_1

Generative User Interviews

Our main finding from the first half of the generative research was that trust is tricky to define and measure: everyone has a different perception of it. We concluded that we should be careful about using the word trust during the user interviews, since it can be hard to pinpoint.

Instead, our primary goal was to understand which content and design elements users consider while browsing vacation rentals. We wanted to get participants talking about the sharing economy in general and vacation rentals in particular. We did not mention trust (except during the retrospective section), but asked follow-up questions when participants brought it up organically. We would uncover the why later.

We conducted one-hour, in-person, semi-structured user interviews with 8 participants. Each team member was the moderator during 2 interviews and the notetaker during 2 interviews.

Screening & Recruiting

In order to recruit the right participants, we created a Google Form screener and distributed it through the School of Information listserv. We included questions about location, age, gender, and vacation rental experience, with the goal of recruiting a diverse sample. We received 19 responses, 16 of which met the screening criteria, and 8 of which we were ultimately able to schedule.

Methodology

We created an interview script with 5 sections: 1) Warm Up, 2) Sharing Economy, 3) Vacation Rentals, 4) Website Exploration, and 5) Retrospective. The first half of the interview was devoted to sections 1-3. The second half was dedicated to section 4, the website exploration, which we further divided into 3 levels: 1) Branding, 2) Listings, and 3) Feature Prioritization. The interview wrapped up with section 5, a short retrospective.

Interviewing

The first half of the interview (sections 1-3) was helpful in getting the participants to start thinking and talking about the sharing economy in general and vacation rentals in particular.

The second half of the interview (section 4) revealed the most insights, and the feature prioritization exercise (level 3) was especially helpful. Before the interviews, we referred to the literature review to create a list of content and design elements (e.g. host name, photos, etc.) that may influence trust, and recorded them onto individual note cards. During the first half of each interview, the notetaker added to the note cards if the participant mentioned any additional elements. For example, the photos card was divided into default photo, quality of photos, and variety of photos during the first interview, a distinction we had not considered.

We asked the participants to talk aloud as they narrowed our list of 30 elements down to the most important ones and sorted them from most important to least important. Across participants, the highest-ranked elements were 1) rating, 2) variety of photos, 3) appropriate price, 4) host status, 5) content of reviews, 6) host response rate, 7) number of reviews, 8) neighborhood information, 9) host profile picture, 10) host references, 11) host membership duration, 12) quality of photos, and 13) host responses. It started to become clear that ratings and reviews, photos, and host characteristics mattered most to participants.

User_Interviews_Photo_1
User_Interviews_Photo_2

Sara conducting the feature prioritization exercise during the website exploration section (I was the notetaker)

Analysis

To begin analyzing the interview data, we first transferred the qualitative interview notes into a comprehensive spreadsheet, amounting to 365 observations. For each, we recorded an observation ID, the participant number, the interview segment, the notetaker, the observation, and an insight (if applicable).

Data Coding
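To make the structure of the log concrete, here is a minimal Python sketch of one row; the field names are hypothetical stand-ins for the actual spreadsheet columns:

```python
from dataclasses import dataclass
from typing import Optional

# One row of the observation log. Field names are illustrative;
# the real artifact was a shared spreadsheet with plain column headers.
@dataclass
class Observation:
    observation_id: str            # e.g. "P3-042" (hypothetical ID scheme)
    participant: int               # participant number (1-8)
    segment: str                   # interview section, e.g. "Website Exploration"
    notetaker: str                 # team member who recorded the note
    text: str                      # the raw observation itself
    insight: Optional[str] = None  # higher-level takeaway, if applicable
```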

Affinity Diagram

After finishing the spreadsheet, we completed an affinity diagram exercise as a team. We recorded the observations onto individual sticky notes and iteratively grouped and re-grouped them until clear themes emerged, then added these themes to the spreadsheet.

Affinity_Diagram_1
Affinity_Diagram_2
Data_Coding_2

Key Findings

The affinity diagram allowed us to see clear patterns across participants. Specifically, 6 key themes were mentioned repeatedly, and their various implications are shown below.

Key Finding
Key Finding
Key Finding
Key Finding
Key Finding
Key Finding

Survey

The interviews gave us an idea of which content and design elements users look at when browsing and choosing vacation rentals. Now we wanted to zero in on the idea of trust and validate our findings with a larger sample size, so we decided to conduct a survey. Since the key findings from the feature prioritization exercise and the affinity diagram varied slightly, we took both into account and created a revised list of the 16 elements that were most important to the interview participants. Then, we designed a feature prioritization activity in Qualtrics.

We included demographic questions at the beginning of the survey. Participants who had never booked a vacation rental were screened out. Our goal was to see if there was a correlation between age, gender, location, and/or vacation rental experience (e.g. less than 5 stays, 5-10 stays, and more than 10 stays) and the ranking of content and design elements.

We first asked participants to choose the top 5 elements that influence their trust when browsing and choosing a vacation rental. Then, we asked them to rank their chosen elements in order of importance. We instructed them to set aside personal preferences (e.g. price, location, and home style) and focus on features that contribute to trust.

Survey

Screenshot of the feature prioritization exercise
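One common way to score this kind of top-5 ranked-choice data is a Borda-style count, where a 1st-place pick earns the most points. The Python sketch below is illustrative only (it is not the analysis we ran, and the responses are made up):

```python
from collections import defaultdict

def borda_scores(responses, max_rank=5):
    """Score top-5 rankings: a 1st-place pick earns 5 points,
    2nd earns 4, ..., 5th earns 1. `responses` is a list of
    ordered lists of element names, one per participant."""
    scores = defaultdict(int)
    for ranking in responses:
        for position, element in enumerate(ranking[:max_rank]):
            scores[element] += max_rank - position
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# Two made-up responses for illustration:
print(borda_scores([
    ["content of reviews", "rating", "number of reviews",
     "quality of photos", "variety of photos"],
    ["rating", "content of reviews", "variety of photos",
     "quality of photos", "number of reviews"],
]))
# [('content of reviews', 9), ('rating', 9), ('number of reviews', 4), ...]
```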

We received 87 valid responses (out of 110 total); 59 respondents were female and 28 were male. The top 5 features were 1) content of reviews, 2) rating, 3) number of reviews, 4) quality of photos, and 5) variety of photos. Some important findings from comparing the interviews to the survey are listed below.

  • Quality of photos ranked 12th during the qualitative interviews but 4th in the trust-focused survey, suggesting that it is closely tied to trust in particular.
  • Host characteristics were important to participants during the qualitative interviews but were less important to the larger sample of survey participants.
  • We thought gender would play a more significant role in trust (e.g. women would tend to care more about safety), but the survey showed no major difference between genders.

Summative Phase

During the summative research phase, we recruited participants and conducted a second round of user interviews, this time focused on a specific problem we found during the generative phase. We performed data analysis, created low-fidelity and high-fidelity mockups, determined next steps, and outlined future recommendations.

Summative User Interviews

The generative survey results indicated that ratings and reviews (including content of reviews, length of reviews, recency of reviews, etc.) influence trust most when browsing and booking vacation rentals. Because ratings and reviews are closely intertwined, we chose to do an in-depth exploration of them together. Our goal was to uncover why users trust some ratings and reviews more than others, both in terms of content and design. We conducted short, focused summative interviews with 17 participants.

Recruiting

Our goal was to recruit a mix of genders in HomeAway’s target age group (20-39) with a range of vacation rental experience. This resulted in an interview population of 8 females and 9 males. 12% were ages 20-24, 70% were ages 25-29, and 18% were ages 30-34. 24% had stayed in a vacation rental less than 5 times, 41% had stayed in a vacation rental between 5-10 times, and 35% had stayed in a vacation rental more than 10 times.

Methodology

We first conducted a brief analysis of how other websites display their ratings and reviews. We chose 3 websites to compare with the HomeAway experience: Amazon, Glossier, and TripAdvisor. We felt these websites offered a good mix of products and services and varied in how they organized ratings and reviews.

The HomeAway team suggested that the majority of their users are on mobile, so we chose to conduct the interviews using the participants’ own mobile phones. At the end of the interview, we asked each participant whether they preferred browsing and booking a vacation rental on desktop or mobile. 70% preferred desktop, 12% preferred mobile, and 18% had no preference.

During the interviews, we told participants that we were investigating how ratings and reviews influence their trust in a product or service. We showed them the 4 websites and asked them to find and explore the ratings and reviews while talking aloud. Finally, we asked a series of follow-up questions to better understand which features they liked and did not like, and which website they trusted the most. 41% chose Amazon, 29% chose TripAdvisor, 24% chose Glossier, and 6% chose HomeAway. We recorded our findings in a comprehensive spreadsheet.

Summative Interview Notes

Partial screenshot of the raw interview data

Analysis

We analyzed the interview data similarly to the generative interviews. First, we recorded the interview notes into a comprehensive spreadsheet, amounting to 510 observations. For each, we recorded an observation ID; the participant number, name, gender, and age; the interviewer; the interview segment; and the observation. Then, Diana (our Google Sheets queen!) categorized and coded the observations by theme.

Summative Interview Analysis
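The coding itself happened in Google Sheets, but the same tallying is easy to express in code. A minimal pandas sketch (the file and column names are hypothetical) might look like this:

```python
import pandas as pd

# Illustrative only: "summative_observations.csv" and its column names
# are hypothetical stand-ins for our spreadsheet.
df = pd.read_csv("summative_observations.csv")  # 510 coded observations

# How often was each theme mentioned overall?
theme_counts = df["theme"].value_counts()

# How many distinct participants raised each theme? A theme mentioned
# by 15 of 17 people is stronger evidence than 30 notes from one person.
theme_reach = (df.groupby("theme")["participant"]
                 .nunique()
                 .sort_values(ascending=False))

print(theme_counts.head(3))
print(theme_reach.head(3))
```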

Key Findings

The interview participants liked different aspects of how all four websites organized their ratings and reviews. The top 3 elements below were mentioned the most.

Key Finding
Key Finding
Key Finding

Low-Fidelity Mockups

After analyzing the interview data, we each designed quick low-fidelity mockups to compare and contrast as a team. I incorporated many of the elements that participants liked during the summative interviews, as well as some elements participants mentioned during the generative interviews.

Low Fidelity Mockup
Low Fidelity Mockup

My low-fidelity work

Low Fidelity Mockup

Diana's low-fidelity work

Low Fidelity Mockup

Monica's low-fidelity work

High-Fidelity Mockups

We combined our ideas to create a high-fidelity mockup. We focused on 3 areas: 1) adding key metrics, 2) designing search, sort, and filter capabilities, and 3) implementing a visual hierarchy in the actual reviews.

High_Fidelity_Mockup_2

Redesigned Website

High_Fidelity_Mockup_0

Current Website

Key Metrics

  • Most participants wanted to know what the “Excellent!” tag meant, so we added an information popup.
  • We added a breakdown of the rating distribution, each of which can be used as a filter.
  • Most participants said that the “rating by feature” within the reviews was hard to read, so we redesigned it as a star system and added an overall view at the top.
  • We highlighted the most recent positive and negative reviews at the top. We decided to base this on recency so it was clear how the reviews were chosen.
High_Fidelity_Mockup_5

Search, Filter, and Sort

  • About half the participants preferred a search bar to find reviews that mention a keyword (like “walkable”), while the other half preferred predefined characteristic tags. We included both as options; this would need to be studied further in future usability tests.
  • Three types of filtering that participants liked were rating, date, and reviews with user-generated photos.
  • Participants wanted the option to sort by recency and by low-high and high-low rating.
High_Fidelity_Mockup_2

Visual Hierarchy

  • Participants liked seeing reviewer demographic information because they value the opinions of reviewers who are similar to them, so we added age range as an option.
  • Most participants thought that reviews should have a view more/view less option, so they could easily scroll past reviews they did not want to read. We showed the most pertinent information, including rating, demographics, and a bottom-line summary, in the collapsed review.
  • In the expanded review, we showed the “rating details” breakdown, the full review, and the owner responses, and we added user-generated photos.
High_Fidelity_Mockup_6

Next Steps

We did not get the chance to gauge the feasibility of our design recommendations. For example, is there a reason HomeAway does not allow user-generated photos? We also did not get the chance to conduct usability tests on our design recommendations. Both of these tasks would be good next steps in this project.

Future Recommendations

We chose to do a deep-dive study of ratings and reviews in order to fit the project into the semester. There are a number of other findings from the generative phase that could be studied as well. Examples are listed below.

  • What is a Premier Partner and how do hosts become one? There is currently help text provided, but exploring alternate copy is recommended.
  • Should HomeAway encourage hosts to take high-quality photos of every room?
  • How can fees be presented transparently? They should not be hidden in modals.
  • Users equate host responsiveness with quality experiences. Should HomeAway provide host response rates?
  • How can HomeAway encourage hosts to add a default photo that is representative of the home? This will aid users as they browse listings and determine which ones to delve into further. Users did not like default photos of amenities or of the neighborhood.

Presentation

Our class had the opportunity to present our projects at HomeAway at the end of the semester. Sara and Alyssa facilitated the presentations, which were live-streamed to a conference room of other product team members.

Conclusion

This was one of the most challenging and fun projects I worked on during my graduate degree. We were lucky to have had the opportunity to contribute to a real-world project and present our work to HomeAway.

Lessons Learned

  • We scheduled the generative interviews too close together, without enough time to debrief in between. Next time, we would leave at least 30 to 60 minutes between sessions.

  • We analyzed browsing and booking habits on desktop during the generative interviews and on mobile during the summative interviews. We should have been more consistent.

  • We should have randomized the feature prioritization in the Qualtrics survey to eliminate bias. We noticed the most popular features selected were also near the top of the list.
