
Senior Program Manager, Microsoft DevRel — internal program

AI Hack & Tell

62 submissions across 10 of 15 DevRel teams in a zero-to-one internal AI dogfooding program.

Timeframe: Phase 1 (program launch), July 1, 2025 – March 30, 2026

Context

In the months after generative AI hit mainstream developer tooling, every team inside Microsoft's Cloud Advocacy organization was experimenting — some with copilots, some with RAG, some with agentic workflows, some embedding LLMs into the dev surfaces they already owned. The work was real, but it was invisible. Engineers and advocates were shipping interesting prototypes, then quietly moving on without ever showing the rest of the org what they had learned.

DevRel's job is to translate product into developer-usable narrative. We could not do that for AI internally if we did not even know what our peer teams were building.

Starting state

  • No internal forum for surfacing AI experimentation across the 15 DevRel teams.
  • Knowledge was siloed by team, region, and product surface area.
  • Strong individual builders, but no shared rubric for what "good" looked like with AI.
  • No recurring cadence for showcasing or recognizing AI work in front of org leadership.

Goals & success metrics

I designed AI Hack & Tell to serve three goals at once:

  • Surface the work. Get AI experiments out of one-off Slack threads and into a single, browsable archive.
  • Set the bar. Use prize tiers and judging criteria to signal what excellent AI dogfooding actually looks like inside DevRel.
  • Build the muscle. Establish a recurring cadence so AI experimentation became a normal part of how teams operated, not a one-time push.

Concrete metrics I tracked: number of submissions, number of teams represented (target: majority of the 15 DevRel teams), engagement with monthly evangelism touchpoints, and qualitative leadership recognition.

Scope & constraints

Scope: Org-wide, all 15 DevRel teams across Microsoft Cloud Advocacy.

Resourcing: I was the lead PM, with no dedicated build team. I designed the program structure, submission workflow, prize tiers, and communications cadence, and I ran the recurring evangelism in town halls and standing meetings.

Time pressure: AI was moving weekly. The program had to ship fast enough to stay relevant and remain flexible enough to absorb new product surface area as it landed.

Non-negotiables: It had to feel like a celebration, not an audit. Submissions had to be low-friction. Recognition had to feel earned, not like a participation trophy.

Approach

I treated AI Hack & Tell as a product, not an event. The deliverables were:

  • Submission workflow: A structured intake so each project had a consistent shape (problem, approach, demo, learnings); see the sketch after this list.
  • Prize tiers: Signaled which axes mattered most — creativity, applicability, polish, and dogfooding.
  • Monthly cadence: A recurring rhythm of submissions, judging, and recognition paired with org-wide town hall callouts.
  • Communications engine: Repeating evangelism in leadership forums and team meetings to keep the program visible and submissions flowing.
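
To make the "consistent shape" concrete, here is a minimal sketch of the intake record as a Python dataclass. It is illustrative only: the field names and the completeness check are assumptions based on the shape described above, not the program's actual GitHub intake form.

```python
from dataclasses import dataclass


@dataclass
class Submission:
    """One AI Hack & Tell entry, mirroring the intake shape above (illustrative)."""
    team: str       # submitting DevRel team
    title: str      # short name for the project
    problem: str    # what the experiment set out to solve
    approach: str   # how it was built (copilot, RAG, agentic workflow, ...)
    demo_url: str   # link to a demo or repo
    learnings: str  # what the rest of the org should take away

    def missing_fields(self) -> list[str]:
        """Names of required fields left empty. An empty result means the
        submission is complete enough to enter judging."""
        return [name for name, value in vars(self).items() if not value.strip()]
```

The point of the structure was never the tooling; it was that every entry answered the same questions, which is what made the archive browsable and the judging comparable.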

The hardest part was not designing it on paper — it was sustaining the recurring drumbeat of evangelism that turned a "cool idea" into a program teams actually submitted to.

Decisions & tradeoffs

Decision 1: Platform choice — GitHub vs. Azure DevOps.

The hardest call was where to host the program. GitHub was the developer-native choice. That is where Microsoft's external developers live, where DevRel teams already work, and where the social mechanics (issues, discussions, stars) map naturally to a hack showcase. Azure DevOps was the safer Microsoft-internal option with tighter access control and existing organizational workflows.

I chose GitHub because the program was for DevRel — the team whose entire job is to meet developers on the platform they actually use. Picking ADO would have been internally convenient and externally tone-deaf.

The tradeoff was real: it took more setup work to get the access model right, and there were governance conversations that would not have been necessary in ADO.

Decision 2: Voting model — monthly category winners → quarterly overall winner.

The original design had monthly category winners with peer voting. Two months in, the data was clear: voting participation was low, and the cognitive load of "rank these submissions across multiple categories every month" was burning out the audience faster than it was driving engagement.

I shifted the model to a single quarterly overall winner, recognized live in a town hall by leadership. Submissions stayed monthly, so the cadence and visibility persisted, but the voting and evaluation moved to a quarterly, higher-stakes moment. Engagement on both sides — submitters and voters — went up because each touchpoint carried more weight.

The shift also unlocked a budget reallocation that made the prize itself meaningfully larger without asking for any new funding. The original monthly model spread the same total budget across three category winners every month: small prizes, distributed widely. Consolidating into one quarterly winner pooled three months of three-winner budget into a single award: nine prizes' worth of budget behind one win, roughly a 9x increase in per-prize value at the same total spend. The motivator inside Cloud Advocacy was always going to be leadership recognition (more on that in the reflection below), but a bigger, more legible prize amplified the signal that the win meant something without costing more.

The tradeoff: monthly recognition felt smaller. But the program got more honest signal about what the org valued, and the town hall moment plus the larger prize turned the win into something a team could actually point to in their performance narrative.

Outcome

  • 62 submissions across the program.
  • 10 of 15 DevRel teams represented — majority coverage across the org.
  • A repeatable program structure, submission workflow, and judging rubric that other teams adopted as a template.
  • Established AI dogfooding as a normal, recognized activity inside Cloud Advocacy.
  • Cross-pollination across DevRel teams — advocates began picking up submissions from other advocates and integrating them into their own work, which was exactly the multiplier effect the program was designed to create.
  • The forward plan was to scale the model beyond Cloud Advocacy into other Microsoft orgs using the same submission, judging, and recognition structure.

Roadmap / What was next

Phase 2 was scoped to start the week of April 1, with a research pass before any program changes shipped. Several questions had been queued up by changes in the operating environment:

Reorg-driven audience question. A January reorg moved roughly half of DevRel's tech-focused teams into a different org, and those teams had been the majority of submitters in Phase 1. The Phase 2 research was scoped to answer two related questions: would those teams keep submitting now that they were no longer in DevRel, and was this the moment to evangelize the program beyond DevRel into the broader org, growing participation rather than shrinking it?

Voting model. Was community voting still the right model, or would moving evaluation into a tighter judging panel (just the AI Hack & Tell team) be a better fit for an expanded audience? This question fed directly into platform choice: not every candidate tool supported community voting, so the voting decision had to be made first, and the platform decision could then be scoped against it.

Platform reconsideration. Decision 1 in Phase 1 was a deliberate "GitHub because the audience is GitHub-native." If Phase 2 expanded the audience beyond DevRel, that premise no longer held, and the wider org included people who didn't live on GitHub the way advocates did. The research scoped two alternatives: an internal tool that the larger org used to post AI wins (which carried the side benefit of higher leadership visibility), and Azure DevOps as the org-default developer surface. Sequencing mattered: audience → voting model → platform.

Prize structure. With the budget reallocation from Phase 1 already in place (one quarterly winner, larger prize), Phase 2 was going to re-examine whether that was still the right shape, or whether other recognition surfaces — especially ones with broader-org leadership visibility — would carry more weight than dollar value once the audience extended beyond DevRel's performance framework.

Submission-as-asset. The other open thread was how to convert the submissions themselves into broader, reusable internal projects and workflows — turning the program from a recognition surface into a feeder for shared internal AI tooling. Phase 1 had produced 62 submissions worth investigating. The research was scoped to find which ones had reach beyond their team of origin and what the institutionalization path looked like.

The research was scoped but not started before I left.

Reflection / What I'd do differently

The biggest lesson was that the program design matters less than the recurring evangelism around it. The submission workflow and prize tiers were the artifact people pointed to, but the actual engine was the monthly town-hall reminders, the standing-meeting callouts, and the steady drip of "here's what your peers shipped this month." Without that, the program would have been a beautifully designed empty box.

If I were doing it again, I'd start with the quarterly recognition cadence from day one rather than learning it through a failed monthly voting model. The data would have led me there sooner if I'd pressure-tested the audience's attention budget before launch instead of after.

I'd also build in a path from internal submission → external content earlier. The submissions were rich raw material for blog posts, conference talks, and customer demos, but the handoff from "we showed it internally" to "we shipped it externally" was a manual lift. A built-in publishing track would have multiplied the program's value.

The most useful thing I learned about motivation: the participants weren't motivated by money — they were motivated by recognition from leadership. A larger budget might have moved the needle if the program had grown into something org-wide, but inside Cloud Advocacy the strongest pull was knowing that leadership was watching, would call out the win in a town hall, and would point to it as the example of what good looked like. That changed how I'd design the incentive structure for any future internal program — start with the recognition surface and the leadership endorsement, not the prize budget.