Case Study

AI Generated Activities @ Seesaw

Tagline

Reducing teacher prep time by turning static curriculum or topics into AI-generated, interactive activities.

Project Overview

Role

Product Designer

Timeline

3 months

Responsibilities

  • Research
  • Wireframing
  • Visual design
  • AI prototyping
  • Analysis of qualitative and quantitative feedback
  • Success metrics evaluation

Tools

  • Figma
  • Figma Make
  • ChatGPT
  • Lyssna
  • User interviews
  • Amplitude
  • Datadog

Project Goals

This project focused on reducing the time and friction required for teachers to use Seesaw alongside mandated curriculum resources, while preserving the platform’s core value: multimedia reflection, auto-graded assessments, and voice and translation scaffolding for pre-readers and multilingual learners.

User goals

  • Reduce the time required to create activities from either existing curriculum or a simple prompt.
  • Enable teachers to feel confident and in control of AI-generated content.
  • Support both required curriculum use and ad-hoc activity creation without introducing new mental models.

Business goals

  • Increase adoption of Seesaw for daily lesson preparation.
  • Reduce drop-off caused by manual or off-platform workflows.
  • Strengthen Seesaw’s position in practical, classroom-ready AI as new US education budgets increase investment in AI EdTech.

Project Design Principles

  1. Teachers stay in control. Always.

    • Build trust by allowing teachers to always opt out, skip a step, go back, or start from scratch.
    • Output is editable, optional, and transparent.
  2. Make the complex clear.

    • Break big tasks into small, manageable steps.
    • Use plain language, real examples, and helpful defaults.

Process

1) Research

I began by reviewing generative AI patterns and conducting a competitive analysis across more than 10 AI and EdTech tools to understand prevailing interaction models, levels of user control, and trust-building approaches in educational contexts.

Research conducted by our UX research partner confirmed two closely related, already-established needs: teachers were required to use school-provided curriculum, typically locked in static formats such as PDFs, and they also struggled to create activities when starting from a blank canvas. Given teachers' constant time pressure, the findings showed that prompt-based generation was essential for quick starts, while upload-based generation needed to preserve the structure and instructional intent of existing materials. These insights directly informed how we supported both workflows without increasing cognitive load.

User Quotes

Common patterns from the competitive analysis:

  • A large wall of different AI generation tools
  • Specific inputs with presets to help teachers get started
  • Text-only outputs
  • Generic text-based quizzes
  • A chat interface for the student experience

Competitive Analysis

2) Wireframe & Prototype

I focused ideation on the ingestion experience (uploading a document or entering a prompt), because the structure of the AI-generated output was largely constrained by engineering feasibility and the project timeline. This allowed design effort to be concentrated where it could most meaningfully influence usability, trust, and task success.

I began with low to mid-fidelity wireframes and progressed to mid-fidelity designs to support Figma Make prototypes. This approach enabled rapid sharing, usability testing, and iteration on interaction patterns and flow logic before committing to high-fidelity visuals, ensuring design decisions were validated early and efficiently.

User Quotes

Updates based on usability test iterations.

Figma Make prototype

3) Usability Testing

I tested the upload document flow using Figma Make and Lyssna before engineering investment. Because this was a new and unfamiliar experience for teachers, I intentionally spent most of my time validating and refining this part of the flow.

Usability Testing Insights

  • Teachers hesitated at a step labeled “Crop,” interpreting it as destructive editing.
  • Renaming the action to “Select” and later “Highlight” improved comprehension.
  • I added first-time user guidance, including a short highlight animation, a “Text Added” badge, and an image counter, to reinforce that image selection was the primary action.
  • Post-beta engagement data showed low interaction with this step, validating the decision to fully automate image selection from PDFs, which has since shipped.
User Quotes

Make Prototype

Updates based on usability test iterations

4) Final UX for Beta Release

For the beta release, we explicitly prioritized learning over optimization. While we were confident this experience would reduce time spent creating activities, the wide variability of curriculum meant that understanding how teachers would prompt and adapt AI-generated activities was a critical learning goal.

  • Two distinct creation flows, each producing a Seesaw-native, interactive, auto-graded activity generated by AI:

    • A lightweight flow allowing teachers to start from a topic or prompt.
    • A structured flow allowing teachers to upload a document and replicate it as faithfully as possible as a Seesaw activity.
  • Prominent, persistent buttons across the top allowed teachers to switch between flows, supporting discoverability and enabling us to observe preference and behavior during beta.
Handoff File

Impact

Quantitative

  • ✅ Up to 35 minutes saved per activity. Teachers estimated activity creation now takes ~10 minutes instead of 30–45 minutes manually.
  • ✅ Strengthened Seesaw’s AI positioning in EdTech, aligning with new government funding.
  • 🆕 Introduced new usability testing methods and documentation for rapid validation through AI prototyping and unmoderated usability testing.

Customer Feedback

  • "Saves time, saves paper! I didn’t have to print out 8 pages for each module assessment, which is great."
    - Rebecca, 1st Grade Teacher
  • "I uploaded word problem, it made the question and the answer box. This took 2 minutes instead of 20-30 minutes."
    - Cheryl, Instructional Technology Specialist
  • "In 2-3 minutes I had a great activity for repeating patterns."
    - Christy, Kindergarten Teacher

What's Next

We collected ongoing feedback through in-product microsurveys and teacher interviews. With nearly 3,000 survey responses, two clear themes emerged:

  • The top theme was images: teachers missed having images in the AI-generated activities.
  • The runner-up was that prompt-based users were more sensitive to output length and structure than upload-based users, and that long, multi-page outputs were overwhelming when teachers explicitly requested short activities.

These are now our next two projects and are in progress.

What I Learned

1) To manage scope and complexity, I created a feature exceptions log to track tradeoffs, constraints, and intentional omissions. When we started reacting to feedback, this helped the team align quickly on what was anticipated and what wasn't.

2) Framing expectations around AI generation is critically important in education apps. Most users who experienced a mismatch between expectations and outcome have yet to try the tool again.

Next Case Study

Redesigning the Student Activity Experience