Gig Economy
Secondary Research
Secondary research was conducted by reviewing user feedback on TaskRabbit in the app stores and by analyzing discussions in public forums. Features of popular freelancing platforms were also examined to identify common issues and effective practices in freelancer evaluation and support.
Primary Research
Primary research was conducted through six semi-structured interviews, three with freelancers and three with clients, to gather a broad range of perspectives. The key goals were to explore perceptions of fairness, effectiveness, and transparency in current freelancer evaluation systems and to uncover challenges and expectations on both sides.
Problem Overview
TaskRabbit lists taskers by price, rating, and number of completed tasks.
Many taskers are similar, making selection time-consuming.
Customers prioritize different qualities, such as:
Cleanliness
Speed
Communication and politeness
Etc.
Design Solution:
The new design introduces categorized qualities for taskers, derived from the qualities customers selected most frequently in previous ratings (a sketch of this aggregation follows the before/after comparison).
Before
After
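To make the categorization concrete, here is a minimal TypeScript sketch of how the most frequently selected qualities could be aggregated from a tasker's past ratings. All names (Quality, Rating, topQualities) are hypothetical; TaskRabbit's actual data model is not public.

```typescript
// Hypothetical quality tags a customer can select when rating a tasker.
type Quality = "cleanliness" | "speed" | "communication" | "politeness";

interface Rating {
  stars: number;        // 1-5
  qualities: Quality[]; // qualities this customer selected
}

// Rank a tasker's qualities by how often previous customers selected them,
// returning the top few for display as category badges on the listing.
function topQualities(ratings: Rating[], limit = 3): Quality[] {
  const counts = new Map<Quality, number>();
  for (const r of ratings) {
    for (const q of r.qualities) {
      counts.set(q, (counts.get(q) ?? 0) + 1);
    }
  }
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1]) // most frequently selected first
    .slice(0, limit)
    .map(([q]) => q);
}
```

For example, topQualities(taskerRatings, 2) might return ["cleanliness", "speed"], letting a customer who prioritizes those qualities filter and compare taskers quickly instead of reading every profile.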
Problem Overview
While TaskRabbit allows taskers to find customers, photos alone may not fully showcase their skills.
Some reviews show only a star rating, with no context linking them to the images or the specific service. This can lead to misinterpretation, especially when a tasker performs well but receives a lower rating because of high customer expectations.
Design Solution:
The new design introduces videos alongside photos to better demonstrate a tasker's abilities.
Each image and video is now linked to a specific comment and rating (if applicable); a sketch of this linking follows the before/after comparison.
Before
After
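A minimal sketch of the media-to-review linking, again in TypeScript with hypothetical types (Review, MediaItem, reviewFor): each media item carries an optional reference back to the review it came from, so the UI can show the comment and star rating next to the photo or video.

```typescript
interface Review {
  id: string;
  stars: number;    // 1-5
  comment?: string; // may be absent in the current system
}

interface MediaItem {
  url: string;
  kind: "photo" | "video";
  reviewId?: string; // links the media back to a review, if any
}

// Resolve the review attached to a media item (if applicable),
// so the gallery can render the comment and rating alongside it.
function reviewFor(item: MediaItem, reviews: Review[]): Review | undefined {
  return item.reviewId
    ? reviews.find((r) => r.id === item.reviewId)
    : undefined;
}
```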
Problem Overview
The 5-star rating system lacks detail: many users leave the comment section empty, especially on lower ratings, which leaves other customers guessing at the reason.
Some customers also give low ratings over personal disputes with the tasker rather than the quality of the work.
Design Solution:
The new rating system requires raters to select qualities that match the stars they give (positive qualities for 5 stars; positive and negative qualities for 4 stars and below), ensuring every rating carries qualitative detail.
Low ratings are published only after review of a video the tasker uploaded during the service, adding verification and preventing unsubstantiated negative ratings (see the sketch below).
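A minimal TypeScript sketch of the proposed validation flow. The names (RatingSubmission, validateRating) and the cutoff for a "low" rating are assumptions made for illustration; the design itself does not specify the threshold.

```typescript
interface RatingSubmission {
  stars: number;                // 0-5, per the proposed design
  positiveQualities: string[];
  negativeQualities: string[];
  taskerVideoApproved: boolean; // set once the tasker's service video is reviewed
}

const LOW_RATING_CUTOFF = 2; // assumed threshold for a "low" rating

// Return a list of validation errors; an empty list means the rating may be submitted.
function validateRating(r: RatingSubmission): string[] {
  const errors: string[] = [];
  if (r.stars === 5 && r.positiveQualities.length === 0) {
    errors.push("Select at least one positive quality for a 5-star rating.");
  }
  if (r.stars < 5 && r.positiveQualities.length + r.negativeQualities.length === 0) {
    errors.push("Select the positive and/or negative qualities that apply.");
  }
  if (r.stars <= LOW_RATING_CUTOFF && !r.taskerVideoApproved) {
    errors.push("Low ratings are held until the tasker's service video has been reviewed.");
  }
  return errors;
}
```

Gating submission on selected qualities (rather than a free-text comment) keeps the friction low while still guaranteeing that every rating explains itself.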
What I learned
In this group project, I strengthened both UX craft and collaboration: I redesigned the ratings and reviews experience to be fair and scannable, set up a small Figma component library, and turned research into clear design choices the team could align on. I gathered secondary data from Google Play/App Store reviews, public forums, and academic papers, then synthesized it in a Miro affinity map alongside interview notes to surface patterns and priorities. I explored four concepts, wireframed each, ran quick early concept checks, and iterated through three rounds to arrive at a simple, accessible solution. A constraint to avoid reusing the same UI pushed me to think beyond familiar patterns while keeping navigation straightforward. Along the way I sharpened information architecture, interaction design, wireframing and hi-fi prototyping (Figma), design systems (components/variants), UX writing, research synthesis, prioritization, documentation, and cross-functional handoff.