May 2026 · public post-mortem
Agents Day feedback summary
A sanitized public version of the feedback loop after Agents Day: what landed, what broke, and what should change before the next edition.
TL;DR
Agents Day worked. The concept, audience, venue, food, sponsor interest, and grassroots energy all landed. The main gap was not the thesis but the operating system around the event.
The next edition should preserve the energy and curation, while making the event loop much clearer: what to build, how challenges work, how to submit, how demos are selected, how mentoring and judging work, and why people should stay until the end.
What clearly worked
- The framing had energy. The event felt native to the local AI/building community, not like a generic conference or lead-generation activation.
- The room quality was strong: builder-heavy, technically credible, but still mixed enough to feel open and culturally interesting.
- Sponsor interest was strong despite short notice. Sponsors who arrived with concrete challenges mingled with participants and started useful conversations.
- The venue, food, coffee, visual identity, and photo coverage all helped the event feel more serious and memorable than the timeline should have allowed.
- The event created real community momentum. The group chat, after-event energy, and post-event sharing made it feel like something people wanted to continue.
- The product layer showed promise: the app was used, the workshop had a real audience, and the event created a useful go-to-market surface without feeling overly commercial.
What did not work
- The core event loop was not legible enough. Participants needed clearer guidance on tracks, sponsor challenges, demo selection, submissions, prizes, mentoring, judging, and what counted as winning.
- The distinction between sponsor challenges and the demo challenge was not explicit enough. These need to be separated in the rules, schedule, app flow, and announcements.
- The app flow had too much friction at submission time. Gating, check-in logic, and permissions should feel trustworthy and lightweight, not like extra hoops.
- Comms were too noisy. The next edition needs separate channels for participant announcements, open discussion, and organizer-only coordination.
- Submissions were weaker than the projects. Sponsors needed more structured signal: problem, demo, repo or evidence, sponsor relevance, and what the team would build next.
- The awards arc was underbuilt. Some people left before the end, which means the closing did not create enough visible stakes or reason to stay.
- Mentoring and judging were too entangled. Helpful mentoring should not feel like the path to stage time or prize probability.
- Sponsor and partner activations need to sit inside the participant flow. Anything physically or narratively off to the side risks feeling isolated.
- The content layer competed with building. Talks need better timing, a separate area, or a sharper format so they do not interrupt mentoring and work sessions.
- Operations need more redundancy: backup internet, visible Wi-Fi details, water and drinks ownership, sponsor assets ready earlier, and a clearer media/photo delivery plan.
Potential paths and main takeaway
- Agents Day does not need reinvention. It needs a cleaner operating system.
- Do not scale the format until the core loop is clear end-to-end. The next edition should feel less improvised without losing the open, experimental energy.
- Run smaller demo-day style events between flagship editions to keep momentum warm and test format improvements with lower risk.
- Consider a hybrid model: async or virtual building for reach, followed by an IRL final with stronger demos, clearer judging, and sponsor-backed finalist support.
- A lighter summer school, followed by a stronger September kickoff or a multi-day agents week, could be a useful next format if it aids distribution without diluting the identity.
This is a public summary. Names, company-specific feedback, private incidents, and internal operating details were deliberately removed.