I designed and shipped a guided AI sourcing workflow that helps brands turn early ideas into supplier-ready briefs and outreach. The goal was simple: get suppliers to respond faster and help more sourcing conversations turn into real orders.
230+ product briefs created
140 invoices generated; ~50% conversion to paid

Product brief iterations
Design to support uncertainty, not punish it. The intake experience was designed to meet users where they were in their thinking, with simple examples and optional guidance.
30% of users who saw the input screen progressed to generating a product brief. This was a strong signal given that sourcing is not a frequent, everyday task and many users were exploring the feature out of curiosity at launch.

Step 1 - Input screen
Product brief as the core artifact. User input becomes a structured product brief that captures specs plus commercial constraints. Because it directly impacts response rates and sourcing outcomes, it was designed as the primary workspace: always visible, editable, and supported by AI guidance.
55% of users who created a product brief moved on to factory recs. This indicated that the experience successfully carried momentum forward.

Step 2 - AI Product brief canvas
Factory matches grounded in the brief. This is what separates it from a generic chat experience. The system protects the user's effort by immediately translating the brief into an actionable factory shortlist.
~80% of users who saw factory recs went on to contact suppliers. This validated that the final step removed hesitation at the point of outreach and translated preparation into action.

Step 3 - Factory recommendations
Context. Pietra offers a large network of vetted factories, which attracts many long-tail users. Many of these brands reach out to suppliers while they are still in the ideation phase and figuring out what to make.
Problem. Those first messages often miss the specs and constraints suppliers need to act on. Suppliers either do not respond, or the conversation turns into endless follow ups to gather basic info. Either way, sourcing stalls and fewer conversations move to quotes, samples, and invoices.
Who this affects. Long-tail users in the ideation stage. Suppliers who decide whether to respond, quote, and move to sampling. Pietra, because marketplace conversion drives revenue.



Without clear requirements to act on, the supplier had to keep asking basics and eventually dropped the conversation.
Process. I reviewed past brand-supplier conversations and looked for patterns in what led to quick quotes versus stalled threads. I also ran reviews with the sourcing team to validate what suppliers actually need.
Insight. Suppliers need clear specs to quote confidently. Long-tail users do not know the right sourcing vocabulary or what details matter. When requirements were captured in a product brief, the conversation had momentum and suppliers could quote sooner.
Example input: kith inspired hats, pre-wash treatment, cotton twill, fashionable urban design, 500 MOQ

Example output, product brief for Urban comfort twill cap:
General Information: Category: Caps, Hats, Headwear; Target Market: Urban fashion enthusiasts
Material Specifications: Fabric Material: 100% Combed-Cotton Twill; Panel Setup: six evenly shaped sections; Pre-wash: softness
Production & Quality: Timeline: 30-45 days; Sampling Costs: $75 per sample; MOQ: 500 units
A product brief is the bridge between ideation and manufacturing. It acts as the source of truth for the request: what it is, how it should be made, and what the brand needs on MOQ, sample cost, and timing.
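A minimal sketch of how such a brief could be represented as structured data is below. The field names mirror the example above and are illustrative assumptions, not Pietra's actual data model.

```typescript
// Hypothetical shape of a structured product brief. Field names mirror the
// example above and are illustrative, not Pietra's actual data model.
interface ProductBrief {
  title: string;
  generalInformation: {
    category: string;       // e.g. "Caps, Hats, Headwear"
    targetMarket: string;   // e.g. "Urban fashion enthusiasts"
  };
  materialSpecifications: {
    fabricMaterial: string; // e.g. "100% Combed-Cotton Twill"
    panelSetup?: string;    // optional: not every category needs it
    preWash?: string;
  };
  productionAndQuality: {
    timelineDays: { min: number; max: number }; // e.g. 30-45 days
    samplingCostUsd: number;                    // e.g. 75
    moqUnits: number;                           // e.g. 500
  };
}
```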
Leveraging AI. AI has become increasingly good at turning messy, unstructured input into clear, structured information. We used it as a guide that helps brands flesh out early ideas, learn what suppliers need, and turn that input into artifacts suppliers can act on. The goal was not automation for its own sake. It was education and speed, with transparency and user control.
Grounded in Pietra’s sourcing knowledge. Pietra sits at the center of a trusted supplier network and has deep knowledge of what factories need to evaluate a project. Over time, this created thousands of data points across categories, materials, pricing patterns, minimum order quantities, and production constraints. We used that knowledge to guide what the agent asks, how it fills gaps, and what it recommends.
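As a rough illustration of that grounding, the sketch below uses a category-defaults table to fill gaps and decide which clarifying questions are still worth asking. The values, names, and function are assumptions for illustration, not the production system.

```typescript
// Hypothetical sketch: category-level sourcing knowledge supplies defaults and
// decides which clarifying questions the agent still needs to ask. The values,
// names, and function below are illustrative, not Pietra's production system.
type DraftBrief = {
  category: string;
  fabricMaterial?: string;
  moqUnits?: number;
  samplingCostUsd?: number;
};

const categoryDefaults: Record<string, { moqUnits: number; samplingCostUsd: number }> = {
  "Caps, Hats, Headwear": { moqUnits: 500, samplingCostUsd: 75 },
  "T-Shirts & Tops": { moqUnits: 300, samplingCostUsd: 50 },
};

// Fill gaps with category defaults; return the questions still worth asking.
function groundBrief(draft: DraftBrief): { brief: DraftBrief; questions: string[] } {
  const defaults = categoryDefaults[draft.category];
  const brief: DraftBrief = {
    ...draft,
    moqUnits: draft.moqUnits ?? defaults?.moqUnits,
    samplingCostUsd: draft.samplingCostUsd ?? defaults?.samplingCostUsd,
  };
  const questions: string[] = [];
  if (!brief.fabricMaterial) questions.push("What fabric or material do you have in mind?");
  if (!draft.moqUnits && brief.moqUnits) {
    questions.push(`We assumed an MOQ around ${brief.moqUnits} units for this category. Does that work?`);
  }
  return { brief, questions };
}
```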
Sourcing agent framework
Proving the agent could do the job. This was our first true agentic product at Pietra, so before committing engineering and design cycles to a full end-to-end experience, we needed confidence in the engine itself. We first built the workflow inside a sandbox environment.
Pressure-testing the system. I reviewed generated briefs with the sourcing team to pressure-test them against real supplier expectations. In parallel, I partnered with the PM to refine prompts. With engineering, I mapped model constraints around latency, regeneration behavior, and output structuring.
Key insights that shaped the experience. Three findings from this testing directly shaped the product architecture.
Latency is a product constraint, not just a technical detail
The system was new, and model latency could not be eliminated overnight. Design would need to protect time-to-value while still improving clarity.
Input quality directly drives output quality
Better inputs produced stronger briefs and more relevant supplier matches. Design would need to encourage stronger input so the brief is closer to complete.
Structured output and seamless refinement drive progress
Large, dense blocks of text were difficult to navigate and refine inside a chat loop. Editing required excessive scrolling, re-prompting entire sections, and waiting for the model to regenerate content.
Observation. Input quality directly shaped brief completeness, and brief completeness directly shaped factory matching quality. Better input ➝ More complete brief ➝ Accurate supplier recs
Behavioral constraint. Users controlled the input. Some provided rich detail. Others typed a single word like "t-shirts."
Vague input ➝ More AI guesswork ➝ More cleanup ➝ Delayed time-to-value ➝ Higher drop‑off risk
System constraint. Generating a brief already required ~1–2 minutes. Weak input compounded that delay.
Goal. Encourage stronger input so the brief is closer to complete and users don't abandon the flow.
Iteration 1
Auto enhance on send. Leadership pushed for a lightweight way to strengthen input before brief generation. After a user hits send, the system takes their input and generates 2-3 improved prompt options that try to fill gaps quickly.
The double latency loop. Auto enhance added a second wait before users saw value. Users waited ~30 secs for prompt options, then another 1-2 minutes for the brief. The added delay introduced friction at the very start of the flow, and made the tool feel slow.
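A minimal sketch of that loop, assuming two sequential model calls; modelCall is a stand-in rather than a real API, and the timings are the rough numbers observed at the time.

```typescript
// Hypothetical sketch of the double latency loop: two sequential model round
// trips before the user sees anything useful. modelCall is a stand-in, not a
// real API, and the timings are the rough numbers observed at the time.
const modelCall = async (prompt: string): Promise<string> => prompt; // stub

async function autoEnhanceFlow(userInput: string): Promise<string> {
  // Round trip 1 (~30 seconds): rewrite the input into stronger prompt options.
  const enhanced = await modelCall(`Suggest 2-3 stronger sourcing requests for: ${userInput}`);
  // The user still has to read and pick an option before any value appears.
  // Round trip 2 (~1-2 minutes): only now does brief generation start.
  return modelCall(`Generate a product brief for: ${enhanced}`);
}
```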
Decision. Do not force enhancement. Input strengthening must not block speed.
Iteration 2
Better inputs through guidance, not a forced detour. We still needed stronger inputs, but without adding more waiting before the brief.
Prompt templates. Short, copyable examples that show what a strong request looks like. This helped users move past the one-word inputs we saw early on, like "cotton," "t-shirt," or "hats."
Help me refine, optional. We kept enhancement, but made it user controlled. New founders could pull suggestions when they wanted them; experienced users could move forward without friction.
Why it worked. Users who needed support received it. Fast users were not slowed down.
Iteration 3
User behavior remains unpredictable. Even with templates and optional refinement, some users still submitted minimal or vague input.
Guided review as a fail safe. We designed recovery directly into the workflow. The agent guides the user section‑by‑section. It asks focused questions, suggests missing constraints, and makes it easy to confirm or edit, so users never feel stuck staring at a long document.
The brief needed a canvas. The team aligned early that long, supplier-ready briefs could not live as chat responses. They needed structure, hierarchy, and persistent visibility. I designed a side-by-side canvas where the brief stayed visible next to the conversation, while the agent guided users through each field step by step.
Iteration 1
Editing was still slow. Users either waited for the agent to ask the next question or manually typed instructions and waited for regeneration. Every change became a loop of type ➝ wait ➝ reread ➝ repeat.
Proposal & Pushback. I proposed making the brief interactive so users could act directly on the document instead of routing every micro decision through chat. The pushback was valid. This meant new interaction patterns and complex state handling, so we needed proof of usage before investing in heavier engineering.
So how do we improve refinement with minimal build? If we could not remove chat dependency, we could at least optimize for speed.
Brief canvas. Introduced "Review this" cues on fields where the agent made assumptions or needed user confirmation, so users could focus on the highest-risk gaps first.
Lock and skip. Locking keeps confirmed fields fixed so the agent skips them, which removes repeat questions and keeps momentum.
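A minimal sketch of the field states this implies, with illustrative names rather than the production data model:

```typescript
// Hypothetical model of the canvas field states behind the "Review this" cue
// and lock-and-skip. Names are illustrative, not the production data model.
type FieldStatus =
  | "aiAssumed"   // the agent filled a gap; show a "Review this" cue
  | "userEdited"  // the user touched it but has not locked it
  | "locked";     // confirmed by the user; the agent skips it entirely

interface BriefField {
  name: string;
  value: string;
  status: FieldStatus;
}

// Surface the highest-risk gaps first: fields the agent had to assume.
const needsReview = (fields: BriefField[]) =>
  fields.filter((f) => f.status === "aiAssumed");

// Locked fields are never re-asked and never overwritten on regeneration.
const openToRegeneration = (fields: BriefField[]) =>
  fields.filter((f) => f.status !== "locked");
```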
Iteration 2, 3, …
Sequencing interactivity. Interactivity was intentionally phased based on impact and effort. Clarity first, targeted refinement later.
Visualize for ambiguous details. The sourcing team pointed out that some details are hard to validate in text. We used our existing image generation capabilities to visualize options using the full brief context, helping users confirm details and make decisions faster.
Inline actions. We added a field level menu with common questions we shaped with the sourcing team. Users could pick an option, ask from a preset list, or ask their own question, so decisions stayed fast and in context.
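As an illustration, the preset menu could be as simple as a per-field map of questions; the presets below are hypothetical examples, not the actual list shaped with the sourcing team.

```typescript
// Hypothetical field-level quick actions: preset questions plus room for a
// free-form question, so refinement stays fast and in context.
const fieldQuickQuestions: Record<string, string[]> = {
  fabricMaterial: [
    "What lower-cost alternatives would suppliers suggest?",
    "How does this material affect sample cost and timeline?",
  ],
  moqUnits: [
    "Is a smaller first run realistic for this category?",
    "How does MOQ typically change unit price?",
  ],
};
```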
A new challenge post launch. As AI started helping brands identify more relevant factories, users were often reaching out to a larger number of suppliers than before. That was a win for reach, but it also multiplied the work of managing conversations.
Goal. Help users scale outreach while staying focused on decisions instead of coordination.
Extending automation into messaging. Brands could keep conversations manual, or deploy an agent to handle outreach and follow ups.
~90% of users who turned on automated messaging kept it enabled. This indicated that goal-based automation reduced overhead without compromising trust or control.
Goal driven automation. Users set what information to collect and what to report back, keeping the agent focused on outcomes.
Guardrails and recovery. Before deployment, we validated the brief and goal, surfaced missing info clearly, and offered quick fixes. This ensured the agent entered conversations with enough context to be effective.
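A hedged sketch of what a goal plus pre-deployment check could look like; the field names and checks are assumptions for illustration, not the shipped implementation.

```typescript
// Hypothetical sketch of a goal-driven outreach configuration and the
// pre-deployment check described above. Field names are illustrative.
interface OutreachGoal {
  collect: string[];      // e.g. ["unit price at 500 MOQ", "sample lead time"]
  reportBackWhen: string; // e.g. "supplier has answered all questions"
}

interface DeploymentCheck {
  ready: boolean;
  missing: string[];      // surfaced to the user with quick fixes
}

// Before the agent enters any conversation, validate that the brief and goal
// give it enough context to be effective.
function validateDeployment(briefComplete: boolean, goal: OutreachGoal): DeploymentCheck {
  const missing: string[] = [];
  if (!briefComplete) missing.push("Finish the product brief before outreach");
  if (goal.collect.length === 0) missing.push("Add at least one piece of information to collect");
  return { ready: missing.length === 0, missing };
}
```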
Scannable progress and clear control. We summarized threads, tagged them by outcome, and gave users simple controls to edit goals, stop, or take over. Notifications only fired when human input was needed.
Sourcing AI was Pietra’s first successful AI-native workflow, proving that guided automation could meaningfully improve a complex, high-stakes process like sourcing. It marked the transition from an experimental AI feature to a foundational capability embedded in the product’s day-to-day workflows.
Building a sandbox made everything faster later. Before I could design the right experience, I needed to understand how the agent behaves in the real world. A lightweight sandbox made it easy to test edge cases, see failure patterns, and make faster decisions once we moved into product design.
Users bring intent, not prompting skill. Most users are not thinking in prompts. They are thinking in goals. The system has to help translate messy intent into something structured, and make it easy to recover when the input is unclear.
Trust comes from guardrails and control. Trust is earned when the system protects confirmed decisions, does not overwrite user intent, and makes it obvious what will happen next.

















