This document traces how we designed an events discovery page for almaghrib.org using Claude Code. Each step records what went in, what came out, and what it fed into next. Every artifact is linked.

The workflow: you make the creative and UX decisions; Claude handles the implementation. Not "AI built this" but a collaboration in which human expertise drives the design.

Step 1

Extracting 757 Lines of Brand DNA from the Live Site

Sunday, March 16, 2026

Input

Live almaghrib.org website

Output

757-line design system audit — exact CSS values for typography, colors, components, spacing, footer specification

Design system audit →

Before building anything, we needed the exact design language of almaghrib.org — not an approximation, but the actual computed values that define the brand.

Claude navigated the live site, read computed styles from every element, and produced a 757-line reference document: 4 font families, 40+ color values organized by role, 11 button variants with exact sizing, 4 card types, the complete footer specification, and animation timing curves.
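
The style-reading step above can be sketched as a small helper: given a computed-style-like object, keep only the brand-relevant properties. The property list and function name here are illustrative, not taken from the audit; in a real browser session you would feed it `getComputedStyle(element)`:

```javascript
// Illustrative list of brand-defining properties -- the real audit
// covered far more (spacing, animation curves, button variants).
const BRAND_PROPS = [
  'font-family', 'font-size', 'font-weight',
  'color', 'background-color',
  'border-radius', 'padding', 'transition',
];

// Accepts either a real CSSStyleDeclaration (which exposes values via
// getPropertyValue) or a plain object keyed by property name.
function extractBrandValues(styleLike) {
  const out = {};
  for (const prop of BRAND_PROPS) {
    const value = typeof styleLike.getPropertyValue === 'function'
      ? styleLike.getPropertyValue(prop)
      : styleLike[prop];
    if (value) out[prop] = value;
  }
  return out;
}

// Browser usage (not runnable outside a page):
//   extractBrandValues(getComputedStyle(document.querySelector('.btn-primary')))
```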

This became the single source of truth. Every design decision that followed referenced it, and the automated brand-compliance review in Step 7 audited the prototypes line-by-line against it.

Designer takeaway: Minutes to extract what takes days of manual inspection. You start with a complete brand spec, not a mood board.

12 Sections
4 Font Families
40+ Color Values
11 Button Variants
Step 2

Understanding Who Uses This Page

Sunday, March 16, 2026

Input

Existing site analysis + event discovery use case

Output

5 personas with journey maps, conversion funnel, mobile-first strategy, accessibility plan

User research → UX strategy →

A first-time visitor browsing casually needs something different from a returning student checking dates in her city. Before choosing a visual direction, we mapped who uses this page and what they need.

Claude launched a research sub-agent that produced five personas specific to AlMaghrib's audience:

  • Omar (Newcomer) — First visit. Needs clear value proposition and social proof.
  • Fatima (Parent) — Registering kids. Needs schedule clarity and logistics.
  • Amira (Regular) — Returning student. Wants to filter by city and find new courses fast.
  • Yusuf (Traveler) — Comparing events across cities, weighing dates against travel cost.
  • Sarah (Browser) — Found AlMaghrib on social media. Exploring, not committed yet.

Each persona came with a journey map. The UX strategy extended this into a conversion funnel, mobile-first approach, and accessibility plan. These directly shaped the design briefs: Direction A for Omar and Fatima, B for Amira and Yusuf, C for Sarah.

Designer takeaway: Research artifacts that would take days of workshops, generated in minutes. You review and refine — not create from scratch.

5 Personas
19 Cities
21 Events
Mobile First Strategy
Step 3

Real Data, Not Placeholder Content

Sunday, March 16, 2026

Input

AlMaghrib API (api2.almaghrib.org/v1/wp/mega-menu-events)

Output

21 onsite events with real pricing, instructor names, city/date data, CDN images

Build spec → Data schema →

Placeholder content creates a gap between what stakeholders see and what the product will feel like. We eliminated that by pulling live data from AlMaghrib's public API.

Claude fetched the API response, parsed it, and extracted 21 events across 19 cities and 2 continents: titles, instructor names and photos, dates, pricing in three currencies (USD, CAD, GBP), poster images, background images, and registration URLs.
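
A minimal sketch of the normalization step, assuming hypothetical raw field names (the actual field list and display mapping live in the data-schema artifact):

```javascript
// Hypothetical normalizer for one event from the mega-menu-events
// endpoint. The raw field names (instructor_name, price_usd, ...) are
// assumptions for illustration, not the documented schema.
function normalizeEvent(raw) {
  return {
    title: raw.title,
    instructor: raw.instructor_name,
    city: raw.city,
    continent: raw.continent,
    date: raw.start_date,
    // Keep all three currencies so the UI can choose which to display.
    price: { usd: raw.price_usd, cad: raw.price_cad, gbp: raw.price_gbp },
    posterImage: raw.poster_url,
    registerUrl: raw.registration_url,
  };
}
```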

This data was embedded directly into each prototype. Stakeholders see their own events with real prices and real photos — so feedback becomes specific ("Journey Through Jannah looks cropped") instead of hypothetical.

A build spec defined the shared page structure for all three directions, and a data schema documented every API field and its display mapping.
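
One way the display mapping for the three currencies could look, using the standard `Intl.NumberFormat` API. The currency-to-locale pairing is an assumption for illustration, not taken from the schema document:

```javascript
// Assumed pairing of currency code to display locale.
const CURRENCY_LOCALES = { USD: 'en-US', CAD: 'en-CA', GBP: 'en-GB' };

// Format a numeric amount as a localized currency string,
// e.g. formatPrice(79, 'USD') -> "$79.00".
function formatPrice(amount, currency) {
  return new Intl.NumberFormat(CURRENCY_LOCALES[currency] ?? 'en-US', {
    style: 'currency',
    currency,
  }).format(amount);
}
```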

Designer takeaway: Real data in prototypes from day one. Stakeholder feedback is concrete and actionable when they see their own content.

21 Events
2 Continents
19 Cities
3 Currencies
Step 4

Three Visual Directions from One Brief

Sunday, March 16, 2026

Input

Design system + user research + build spec

Output

3 design briefs — A (Community Hub), B (Clean Directory), C (Immersive Showcase)

Design briefs →

Rather than commit to one direction upfront, we defined three distinct visual approaches — each with a different personality, layout, and target persona.

  • Direction A: Community Hub. Personality: warm, people-first. Layout: horizontal cards, prominent instructor avatars, 2-column grid. Target personas: Omar, Fatima (newcomers, parents).
  • Direction B: Clean Directory. Personality: information-dense, scannable. Layout: list/table layout, small thumbnails, sort controls. Target personas: Amira, Yusuf (regulars, planners).
  • Direction C: Immersive Showcase. Personality: visual, cinematic. Layout: large image-dominant cards, dark theme, hover reveals. Target persona: Sarah (browsers, explorers).

Each brief mapped the design system values from Step 1 to concrete decisions: typography, color emphasis, card anatomy, interaction patterns, and hover states. Specific enough to build from without ambiguity.

Designer takeaway: Describe the mood, get a full design brief. Three directions explored in the time it takes to set up one Figma artboard.

3 Directions
5 Personas Mapped
4 Font Families
11 Button Variants
Step 5

From Brief to Working Prototype in Minutes

Sunday, March 16, 2026

Input

Design briefs + event data + design system

Output

3 fully interactive HTML prototypes with real data, filtering, responsive layout

Each design brief became a self-contained HTML file. No build tools, no dependencies. Open it in a browser and it works — filtering, responsive layout, scroll animations, real data.

In a traditional workflow, briefs become static Figma frames. Here, the brief became a working page. Click "North America" and see 17 events filter. Resize the window and watch the layout adapt. Tap a card for instructor details and pricing.

All three prototypes replicate the live almaghrib.org footer — correct logo, verified URLs, matching typography. The constraint was deliberate: single-file HTML. Send it as an email attachment, open in Chrome, done.
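
The client-side filtering described above can be sketched in a few lines. The event shape and the 'All' sentinel are illustrative, not lifted from the prototype source:

```javascript
// Return the events matching the active continent/city filters.
// 'All' acts as a wildcard for either dimension.
function filterEvents(events, { continent = 'All', city = 'All' } = {}) {
  return events.filter(e =>
    (continent === 'All' || e.continent === continent) &&
    (city === 'All' || e.city === city));
}
```

In a single-file prototype, a function like this would be wired to the filter buttons and the result re-rendered into the card grid.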

Designer takeaway: Working prototypes, not static frames. The review conversation shifts from "imagine this would scroll" to "scroll it and tell me what you think."

3 HTML Files
~65KB Each File
21 Real Events
0 Build Steps
View Direction A
Direction B (Dropped)
View Direction C

All three prototypes are interactive. Filter by continent, click cards, resize your browser. Direction B is included for reference — it was dropped in Step 6.
Step 6

Human Judgment — Selecting the Right Direction

Sunday, March 16, 2026

Input

3 built prototypes reviewed in browser

Output

Direction B dropped. Directions A and C proceed with specific feedback notes.

Decisions →

Claude built three options. Kamran made the judgment calls — clicking through filters, scrolling on desktop and mobile, comparing visual quality:

  • Direction B dropped: Too dense, poor visual quality. The information-heavy approach sacrificed the warmth an events page needs.
  • Direction C strongest: Cards answer "who, what, where, when" at a glance. Dark theme with large images creates impact. Hover reveals add depth without clutter.
  • Direction A needs work: Card sizing inconsistent on desktop, and the "Journey Through Jannah" card renders broken there. Price too prominent; it should move to the expanded view only.

The key decision: date, instructor, topic, and location are primary. Price is secondary. This shaped every iteration in Step 7.

This step took minutes — but without a human eye for quality, Claude would iterate equally on all three directions, polishing one that should have been dropped.

Designer takeaway: Your eye for quality drives the direction. Claude executes. Without that judgment call, 20 iteration cycles would have been wasted.

3 Reviewed
1 Dropped
2 Advancing
4 Feedback Items
Step 7

20 Automated Iteration Cycles + Brand Compliance Review

Sunday, March 16, 2026

Input

Direction A + Direction C mockups + specific design feedback

Output

10 iteration cycles per direction + 2 parallel design reviews + 21 brand-compliance fixes applied

Three phases of autonomous refinement, each doing work that would take a designer hours.

Iteration (10 cycles per direction). A design-iterator agent ran a screenshot–analyze–improve loop: take a screenshot, identify the single most impactful visual improvement, implement it, verify with another screenshot. Direction A: standardized card images, removed visual clutter, improved text hierarchy, fixed a scroll animation bug. Direction C: refined card gradients for legibility, added purple glow to active filters, strengthened the hero overlay.
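
The screenshot–analyze–improve loop has a simple driver shape. This sketch injects `capture`, `analyze`, and `apply` as stand-ins for the agent's real browser-automation and code-editing tools; none of these names are actual Claude Code APIs:

```javascript
// Hypothetical iteration driver. Each cycle: screenshot the page,
// pick the single most impactful fix, implement it; the next
// capture() doubles as verification of the previous change.
async function iterate({ capture, analyze, apply }, cycles = 10) {
  const applied = [];
  for (let i = 0; i < cycles; i++) {
    const screenshot = await capture();     // take a screenshot
    const fix = await analyze(screenshot);  // choose one improvement
    if (!fix) break;                        // nothing left to improve
    await apply(fix);                       // implement it
    applied.push(fix);
  }
  return applied;
}
```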

Brand compliance (2 agents, parallel). Two review agents compared each prototype's code against the 757-line design system from Step 1. Direction A received 9 fixes, Direction C received 12 — correcting typography, spacing, and color values to match the live site exactly.
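
A compliance pass like this boils down to diffing the values found in a prototype against the extracted spec. This is a hypothetical shape with made-up token names; the real review compared prototype code against the full 757-line document:

```javascript
// Compare prototype values against the design-system spec and report
// each deviation with its exact fix. Token names are illustrative.
function auditAgainstSpec(spec, found) {
  const deviations = [];
  for (const [token, expected] of Object.entries(spec)) {
    const actual = found[token];
    if (actual !== undefined && actual !== expected) {
      deviations.push({ token, actual, fix: `set to ${expected}` });
    }
  }
  return deviations;
}
```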

Footer verification. Claude navigated to the live site, extracted the correct logo URL, all navigation links, social media URLs, and copyright text. Both prototypes updated with verified data.

Designer takeaway: 20 screenshot-and-fix cycles plus automated brand auditing — quality assurance that takes hours, compressed into minutes.

20 Iteration Cycles
2 Parallel Reviews
21 Fixes Applied
757 Lines Audited Against
View Final Direction A
View Final Direction C

Both prototypes are interactive. Filter by continent and city, click cards for details, and resize your browser to see responsive behavior.

Where This Page Fits in the User Journey

The events page sits in the middle of a larger flow. A visitor clicks "In Person" in the almaghrib.org mega menu and lands here. From here, they select an event and continue to registration.

Home Page
Mega Menu
Events Page
Registration

These prototypes explore what that destination could become — from a basic list to an immersive discovery experience with filtering, instructor profiles, and responsive cards. Visit almaghrib.org to see the current experience.

How Claude Code Works

Claude Code is an AI coding assistant that runs in your terminal. It reads files, writes code, browses the web, and takes screenshots. Its key capability: it can launch autonomous agents that work independently and in parallel — like a lead developer delegating to specialists.

Four tools were used in this project:

Sub-agent
An autonomous Claude instance given a specific task. It works independently and returns a summary. The main Claude orchestrates multiple agents in parallel.
Design-iterator agent
Screenshots a page, identifies the most impactful visual fix, implements it, and verifies. Repeats for a set number of cycles — autonomous tweaking and checking.
Design-review agent
Compares code against a design spec and produces a report: what matches, what deviates, and the exact fix for each deviation. Automated QA for brand compliance.
Browser automation
Lets Claude control Chrome — navigate pages, take screenshots, read content. Used to extract the design system (Step 1), verify footer links (Step 7), and screenshot during iteration.
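
The parallel delegation described above is, structurally, ordinary async fan-out: launch every agent at once, then collect their summaries. The agent names and tasks in this sketch are stand-ins, not Claude Code APIs:

```javascript
// Run a set of agent-like async tasks concurrently and gather
// each one's name and returned summary.
async function runInParallel(agents) {
  return Promise.all(agents.map(async ({ name, task }) => ({
    name,
    summary: await task(),
  })));
}
```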

The Full Pipeline

Design System Extraction 757 lines — 12 sections, 4 font families, 40+ colors, 11 button variants — Sun, Mar 16
User Research 5 personas with journey maps, conversion funnel, mobile-first strategy — Sun, Mar 16
API Data Pull 21 events, 19 cities, 2 continents, 3 currencies — Sun, Mar 16
Design Briefs 3 visual directions — Community Hub, Clean Directory, Immersive Showcase — Sun, Mar 16
Prototype Build 3 interactive HTML files, ~65KB each, 21 real events, zero build steps — Sun, Mar 16
Direction Selection B dropped, A and C advance with specific feedback — Sun, Mar 16
Iteration + Review 20 iteration cycles, 2 parallel reviews, 21 brand-compliance fixes — Sun, Mar 16

One working session. Each step's output fed the next. Human review directed the iteration, automated review enforced brand compliance, and the result is two polished interactive prototypes ready for stakeholder review.