
Scrum Master Toolbox Podcast: Agile storytelling from the trenches

Vasco Duarte, Agile Coach, Certified Scrum Master, Certified Product Owner

Available Episodes

5 of 345
  • Why Great Scrum Masters Create Space for Breaks | Scott Smith
    Scott Smith: Why Great Scrum Masters Create Space for Breaks

    Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.

    "Think of the people involved. Put yourself in the shoes of the other." - Scott Smith

    Scott found himself in the middle of rising tension as voices escalated between the Product Owner and the development team. The PO was harsh, emotions were running high, and the conflict was intensifying with each exchange. In that moment, Scott knew he had to act. He stepped in with a simple but powerful reminder: "We're on the same team." That pause—that momentary break—allowed everyone to step back and reset. Both the PO and the team members later thanked Scott for his intervention, acknowledging they needed that space to cool down and refocus on their shared outcome.

    Scott's approach centers on empathy and perspective-taking. He emphasizes thinking about the people involved and putting yourself in their shoes. When tensions rise, sometimes the most valuable contribution a Scrum Master can make is creating space for a break, reminding everyone of the shared goal, and helping the team focus on the outcome rather than the conflict. It's not about taking sides—it's about serving the team by being the calm presence that brings everyone back to what matters most.

    Self-reflection Question: When you witness conflict between team members or between the team and Product Owner, do you tend to jump in immediately or create space for the parties to find common ground themselves?

    Featured Book of the Week: An Ex-Manager Who Believed

    "It was about having someone who believed in me." - Scott Smith

    Scott's most influential "book" isn't printed on pages—it's a person. After spending 10 years as a Business Analyst, Scott decided to take the Professional Scrum Master I (PSM I) course and look for a Scrum Master position. That transition wasn't just about skills or certification; it was about having an ex-manager who inspired him to chase his goals and truly believed in him. This person gave Scott the confidence to make a significant career pivot, demonstrating that sometimes the most powerful catalyst for growth is someone who sees your potential before you fully recognize it yourself. Scott's story reminds us that great leadership isn't just about managing tasks—it's about inspiring people to reach for goals they might not have pursued alone. The belief and encouragement of a single person can change the trajectory of someone's entire career.

    [The Scrum Master Toolbox Podcast Recommends]

    🔥In the ruthless world of fintech, success isn't just about innovation—it's about coaching!🔥 Angela thought she was just there to coach a team. But now, she's caught in the middle of a corporate espionage drama that could make or break the future of digital banking. Can she help the team regain their mojo and outwit their rivals, or will the competition crush their ambitions? As alliances shift and the pressure builds, one thing becomes clear: this isn't just about the product—it's about the people.

    🚨 Will Angela's coaching be enough? Find out in Shift: From Product to People—the gripping story of high-stakes innovation and corporate intrigue.

    Buy Now on Amazon

    [The Scrum Master Toolbox Podcast Recommends]

    About Scott Smith

    Scott Smith is a 53-year-old professional based in Perth, Australia. He balances a successful career with a strong focus on health and fitness, currently preparing for bodybuilding competitions in 2026. With a background in leadership and coaching, Scott values growth, discipline, and staying relevant in a rapidly changing world.

    You can link with Scott Smith on LinkedIn.
    --------  
    14:24
  • The Spotlight Failure That Taught a Silent Lesson About Recognition | Scott Smith
    Scott Smith: The Spotlight Failure That Taught a Silent Lesson About Recognition

    Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.

    "Not everybody enjoys the limelight and being called out, even for great work." - Scott Smith

    Scott was facilitating a multi-squad showcase with over 100 participants, and everything seemed to be going perfectly. Each squad had their five-minute slot to share achievements from the sprint, and Scott was coordinating the entire event. When one particular team member delivered what Scott considered fantastic work, he couldn't help but publicly recognize them during the introduction. It seemed like the perfect moment to celebrate excellence in front of the entire organization. But then his phone rang. The individual he had praised was unhappy—really unhappy.

    What Scott learned in that moment transformed his approach to recognition forever. The person was quiet, introverted, and conservative by nature. Being called out without prior notice or permission in front of 100+ people wasn't a reward—it was uncomfortable and unwelcome. Scott discovered that even positive recognition requires consent and awareness of individual preferences. Some people thrive in the spotlight, while others prefer their contributions to be acknowledged privately. The relationship continued well afterward, but the lesson stuck: check in with individuals before publicly recognizing them, understanding that great coaching means respecting how people want to be celebrated, not just that they should be celebrated.

    Self-reflection Question: How do you currently recognize team members' achievements, and have you asked each person how they prefer to be acknowledged for their contributions?

    [The Scrum Master Toolbox Podcast Recommends]

    🔥In the ruthless world of fintech, success isn't just about innovation—it's about coaching!🔥 Angela thought she was just there to coach a team. But now, she's caught in the middle of a corporate espionage drama that could make or break the future of digital banking. Can she help the team regain their mojo and outwit their rivals, or will the competition crush their ambitions? As alliances shift and the pressure builds, one thing becomes clear: this isn't just about the product—it's about the people.

    🚨 Will Angela's coaching be enough? Find out in Shift: From Product to People—the gripping story of high-stakes innovation and corporate intrigue.

    Buy Now on Amazon

    [The Scrum Master Toolbox Podcast Recommends]

    About Scott Smith

    Scott Smith is a 53-year-old professional based in Perth, Australia. He balances a successful career with a strong focus on health and fitness, currently preparing for bodybuilding competitions in 2026. With a background in leadership and coaching, Scott values growth, discipline, and staying relevant in a rapidly changing world.

    You can link with Scott Smith on LinkedIn.
    --------  
    12:20
  • BONUS: When AI Knows Your Emotional Triggers Better Than You Do — Navigating Mindfulness in the AI Age | Mo Edjlali
    BONUS: When AI Knows Your Emotional Triggers Better Than You Do — Navigating Mindfulness in the AI Age

    In this thought-provoking conversation, former computer engineer and mindfulness leader Mo Edjlali explores how AI is reshaping human meaning, attention, and decision-making. We examine the critical question: what happens when AI knows your emotional triggers better than you know yourself? Mo shares insights on remaining sovereign over our attention, avoiding dependency in both mindfulness and technology, and preparing for a world where AI may outperform us in nearly every domain.

    From Technology Pioneer to Mindfulness Leader

    "I've been very heavily influenced by technology, computer engineering, software development. I introduced DevOps to the federal government. But I have never seen anything change the way in which human beings work together like Agile." — Mo Edjlali

    Mo's journey began in the tech world — graduating in 1998, he was on the front line of the internet explosion. He remembers the days before the internet, watched online multiplayer games emerge in 1994, and worked on some of the most complicated tech projects in the federal government. Technology felt almost like magic, advancing at an exponential rate, faster than anything else. But when Mo discovered mindfulness practices 12-15 years ago, he found something equally transformative: actual exercises to develop the emotional intelligence and soft skills that the tech world talked about but never taught. Mindfulness provided logical, practical methods that didn't require "woo-woo" beliefs — just practice that fundamentally changed his relationship with his mind. This dual perspective — tech innovator and mindfulness teacher — gives Mo a unique lens for understanding where we're headed.

    The Shift from Liberation to Dependency

    "I was fortunate enough, the teachers I was exposed to, the mentality was very much: you're gonna learn how to meditate on your own, in silence. There is no guru. There is no cult of personality." — Mo Edjlali

    Mo identifies a dangerous drift in the mindfulness movement: from teaching independence to creating dependency. His early training, particularly a Vipassana retreat led by S.N. Goenka, modeled true liberation — you show up for 10 days, pay nothing, receive food and lodging, learn to meditate, then donate what you can at the end. Critically, you leave being able to meditate on your own, without worshiping a teacher or subscribing to guided meditations. But today's commercialized mindfulness often creates the opposite: powerful figures leading fiefdoms, and consumers taught to listen to guided meditations rather than meditate independently. This dependency model mirrors exactly what's happening with AI — systems designed to make us rely on them rather than empower our own capabilities. Recognizing this parallel is essential for navigating both fields wisely.

    AI as a New Human Age, Not Just Another Tool

    "With AI, this is different. This isn't like mobile computing, this isn't like the internet. We're entering a new age. We had the Bronze Age, the Iron Age, the Industrial Age. When you enter a new age, it's almost like knocking the chess board over, flipping the pieces upside down. We're playing a new game." — Mo Edjlali

    Mo frames AI not as another technology upgrade but as the beginning of an entirely new human age. In a new age, everything shifts: currency, economies, government, technology, even religions. A documentary about the Bronze Age collapse taught him that when ages turn over, the old rules no longer apply. This perspective explains why AI feels fundamentally different from previous innovations. GPT-2 was interesting; GPT-3 blew Mo's mind and made him realize we're witnessing something unprecedented. While he's optimistic about the potential for sustainable abundance and extraordinary breakthroughs, he's also aware we're entering both the most exciting and most frightening time to be alive. Everything we learned in high school might be proven wrong as AI rewrites human knowledge, translates animal languages, extends longevity, and achieves things we can't even imagine.

    The Mental Health Tsunami and Loss of Purpose

    "If we do enter the age of abundance, where AI could do anything that human beings could do and do it better, suddenly the system we have set up — where our purpose is often tied to our income and our job — suddenly, we don't need to work. So what is our purpose?" — Mo Edjlali

    Mo offers a provocative vision of the future: a world where people might pay for jobs rather than get paid to work. It sounds crazy until you realize it's already happening — people pay $100,000-$200,000 for college just to get a job, and politicians spend millions to get elected. If AI handles most work and we enter an age of abundance, jobs won't be about survival or income — they'll be about meaning, identity, and social connection. This creates three major crises Mo sees accelerating: attacks on our focus and attention (technology hijacking our awareness), polarization (forcing black-and-white thinking), and isolation (pushing us toward solo experiences). The mental health tsunami is coming as people struggle to find purpose in a world where AI outperforms them in domain after domain. The jobs will change, the value systems will shift, and those without tools for navigating this transformation will suffer most.

    When AI Reads Your Mind

    "Researchers at Duke University had hooked up fMRI brain scanning technology and took that data and fed it into GPT 2. They were able to translate brain signals into written narrative. So the implications are that we could read people's minds using AI." — Mo Edjlali

    The future Mo describes isn't science fiction — it's already beginning. Three years ago, researchers used early GPT models to translate brain signals into written text by scanning people's minds with fMRI and training AI on the patterns. Today, AI knows a lot about heavy users like Mo through chat conversations. Tomorrow, AI will have video input of everything we see, sensory input from our biometrics (pulse, heart rate, health indicators), and potentially a direct connection to our minds. This symbiotic relationship is coming whether we're ready or not. Mo demonstrates this with a personal experiment: he asked his AI to tell him about himself, describe his personality, identify his strengths, and most powerfully — reveal his blind spots. The AI's response was outstanding, better than what any human (even his therapist or himself) could have articulated. This is the reality we're moving toward: AI that knows our emotional triggers, blind spots, and patterns better than we do ourselves.

    Using AI as a Mirror for Self-Discovery

    "I asked my AI, 'What are my blind spots?' Human beings usually won't always tell you what your blind spots are, they might not see them. A therapist might not exactly see them. But the AI has... I've had the most intimate kind of conversations about everything. And the response was outstanding." — Mo Edjlali

    Mo's approach to AI is both pragmatic and experimental. He uses it extensively — at the level of teenagers and early college students who are on it all the time. But rather than just using AI as a tool, he treats it as a mirror for understanding himself. Asking AI to identify your blind spots is a powerful exercise, because AI has observed all your conversations, patterns, and tendencies without the human limitations of forgetfulness or social politeness. Vasco shares a similar experience using AI as a therapy companion — not replacing his human therapist, but preparing for sessions and processing afterward. This reveals an essential truth: most of us don't understand ourselves that well. We're blind navigators using an increasingly powerful tool. The question isn't whether AI will know us better than we know ourselves — that's already happening. The question is how we use that knowledge wisely.

    The Danger of AI Hijacking Our Agency

    "There's this real danger. I saw that South Park episode about ChatGPT where his wife is like, 'Come on, put the AI down, talk to me,' and he's got this crazy business idea, and the AI keeps encouraging him along. It's a point where he's relying way too heavily on the AI and making really poor decisions." — Mo Edjlali

    Not all AI use is beneficial. Mo candidly admits his own mistakes — sometimes leaning into AI feedback over his actual users' feedback for his Meditate Together app, because "I like what the AI is saying." This mirrors the South Park episode's warning about AI dependency, where the character's AI encourages increasingly poor decisions while his relationships suffer. Social media demonstrates this danger at scale: AI algorithms tuned to steal our attention and hijack our agency, preventing us from thinking about what truly matters — relationships and human connection. Mo shares a disturbing story about Zoom bombers disrupting Meditate Together sessions, filming it, and posting it on YouTube, where it got 90,000 views, with comments thanking the disruptors for "making my day better." Technology created a cannibalistic dynamic where teenagers watched videos of their mothers, aunts, and grandmothers being harassed during meditation. When Mo tried to contact Google, the company's incentive structure prioritized views and revenue over human decency. Technology combined with capitalism creates a dangerous momentum toward monetizing attention at any cost.

    Remaining Sovereign Over Your Attention

    "Traditionally, mindfulness does an extraordinary job, if you practice right, to help you regain your agency of your focus and concentration. It takes practice. But reading is now becoming a concentration practice. It's an actual practice." — Mo Edjlali

    Mo identifies three major symptoms affecting us: attacks on focus and attention, polarization into black-and-white thinking, and isolation. Mindfulness practices directly counter all three — but only if practiced correctly. Training attention, focus, and concentration requires actual practice, not just listening to guided meditations. Mo offers practical strategies: reading as concentration practice (asking "does anyone read anymore?" and recognizing that sustained reading now requires deliberate effort), turning off AirPods while jogging or driving to find silence, spending time alone with your thoughts, and recognizing that we were given extraordinary power (smartphones) with zero training on how to be aware of it. Older generations remember having to rewind VHS tapes — forced moments of patience and stillness that no longer exist. We need to deliberately recreate those spaces where we're not constantly consuming entertainment and input.

    Dialectic Thinking: Beyond Polarization

    "I saw someone the other day wear a shirt that said, 'I'm perfect the way I am.' That's one-dimensional thinking. Two-dimensional thinking is: you're perfect the way that you are, and you could be a little better." — Mo Edjlali

    Mo's book OpenMBSR specifically addresses polarization by introducing dialectic thinking — the ability to hold paradoxes and seeming contradictions simultaneously. Social media and algorithms push us toward one-dimensional, black-and-white thinking: good/bad, right/wrong, with me/against me. But reality is far more nuanced. The ability to think "I'm perfect as I am AND I can improve," or "AI is extraordinary AND dangerous," is essential for navigating complexity. This mirrors the tech world's embrace of continuous improvement in Agile — accepting where you are while always pushing for better. Chess players learned this years ago when AI defeated humans — they didn't freak out, they accepted it and adapted. Now AI in chess doesn't just give answers; it helps humans understand how it arrived at those answers. This partnership model, where AI coaches us through complexity rather than simply replacing us, represents the healthiest path forward.

    Building Community, Not Dependency

    "When people think to meditate, unfortunately, they think, I have to do this by myself and listen to guided meditation. I'm saying no. Do it in silence. If you listen to guided meditation, listen to guided meditation that teaches you how to meditate in silence. And do it with other people, with intentional community." — Mo Edjlali

    Mo's OpenMBSR initiative explicitly borrows from the Agile movement's success: grassroots, community-centric, open source, transparent. Rather than creating fiefdoms around cult personalities, he wants mindfulness to spread organically through communities helping communities. This directly counters the isolation trend that technology accelerates. Meditate Together exists specifically to create spaces where people meditate with other human beings around the world, with volunteer hosts holding sessions. The model isn't about dependency on a teacher or platform — it's about building connection and shared practice. This aligns perfectly with how the tech world revolutionized collaborative work through Agile and Scrum: transparent, iterative, valuing individuals and interactions. The question for both mindfulness and AI adoption is whether we'll create systems that empower independence and community, or ones that foster dependency and isolation.

    Preparing for a World Where AI Outperforms Humans

    "AI is going to need to kind of coach us and ease us into it, right? There's some really dark, ugly things about ourselves that could be jarring without it being properly shared, exposed, and explained." — Mo Edjlali

    Looking at his children, Mo wonders what tools they'll need in a world where AI may outperform humans in nearly every domain. The answer isn't trying to compete with AI in calculation, memory, or analysis — that battle is already lost. Instead, the essential human skills become self-awareness, emotional intelligence, dialectic thinking, community building, and maintaining agency over attention and decision-making. AI will need to become a coach, helping humans understand not just answers but how it arrived at those answers. This requires AI development that prioritizes human growth over profit maximization. It also requires humans willing to do the hard work of understanding themselves — confronting blind spots, managing emotional triggers, practicing concentration, and building genuine relationships. The mental health tsunami Mo predicts isn't inevitable if we prepare now by teaching these skills widely, building community-centric systems, and designing AI that empowers rather than replaces human wisdom and connection.

    About Mo Edjlali

    Mo Edjlali is a former computer engineer and the founder and CEO of Mindful Leader, the world's largest provider of Mindfulness-Based Stress Reduction training. Mo's new book Open MBSR: Reimagining the Future of Mindfulness explores how ancient practices can help us navigate the AI revolution with awareness and resilience.

    You can learn more about Mo and his work at MindfulLeader.org, check out Meditate Together, and read his articles on AI's Mind-Reading Breakthrough and AI: Not Another Tool, but a New Human Age.
    --------  
    40:21
  • AI Assisted Coding: Building Reliable Software with Unreliable AI Tools With Lada Kesseler
    AI Assisted Coding: Building Reliable Software with Unreliable AI Tools

    In this special episode, Lada Kesseler shares her journey from AI skeptic to pioneer in AI-assisted development. She explores the spectrum from careful, test-driven development to quick AI-driven experimentation, revealing practical patterns, anti-patterns, and the critical role of judgment in modern software engineering.

    From Skeptic to Pioneer: Lada's AI Coding Journey

    "I got a new skill for free!"

    Lada's transformation began when she discovered Anthropic's Claude Projects. Despite being skeptical about AI tools throughout 2023, she found herself learning Angular frontend development with AI—a technology she had no prior experience with. This breakthrough moment revealed something profound: AI could serve as an extension of her existing development skills, enabling her to acquire new capabilities without the traditional learning curve. The journey evolved through WindSurf and Claude Code, each tool expanding her understanding of what's possible when developers collaborate with AI.

    Understanding Vibecoding vs. AI-Assisted Development

    "AI assisted coding requires judgment, and it's never been as important to exercise judgment as now."

    Lada introduces the concept of "vibecoding" as one extreme on a new dimension in software development—the spectrum from careful, test-driven development to quick, AI-driven experimentation. The key insight isn't that one approach is superior, but that developers must exercise judgment about which approach fits their context. She warns against careless AI coding for production systems: "You just talk to a computer, you say, do this, do that. You don't really care about code... For some systems, that's fine. When the problem arises is when you put the stuff to production and you really care about your customers. Please, please don't do that." This wisdom highlights that with great power comes great responsibility—AI accelerates both good and bad practices.

    The Answer Injection Anti-Pattern When Working With AI

    "You're limiting yourself without knowing, you're limiting yourself just by how you formulate your questions. And it's so hard to detect."

    One of Lada's most important discoveries is the "answer injection" anti-pattern—when developers unconsciously constrain AI's responses by how they frame their questions. She experienced this firsthand when she asked an AI about implementing a feature using a specific approach, only to realize later that she had prevented the AI from suggesting better alternatives. The solution? Learning to ask questions more openly and reformulating problems to avoid self-imposed limitations. As she puts it, "Learn to ask the right way. This is one of the powers this year that's been kind of super cool." This skill of question formulation has become as critical as any technical capability.

    Answer injection is when we—sometimes unknowingly—ask a leading question that also injects a possible answer. It's an anti-pattern because LLMs have access to far more information than we do. Lada's advice: "just ask for anything you need"—the LLM might have a possible answer for you.
    Never Trust a Single LLM: Multi-Agent Collaboration

    "Never trust the output of a single LLM. When you ask it to develop a feature, and then you ask the same thing to look at that feature, understand the code, find the issues with it—it suddenly finds improvements."

    Lada shares her experiments with swarm programming—using multiple AI instances that collaborate and cross-check each other's work. She created specialized agents (architect, developer, tester) and even built systems using AppleScript and Tmux to make different AI instances communicate with each other. This approach revealed a powerful pattern: AI reviewing AI often catches issues that a single instance would miss. The practical takeaway is simple but profound—always have one AI instance review another's work, treating AI output with the same healthy skepticism you'd apply to any code review.
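    A minimal sketch of this cross-checking idea follows, assuming a generic call_llm wrapper standing in for whatever model API you use; the prompts, function names, and review loop are illustrative assumptions, not Lada's actual AppleScript/Tmux setup.

        # One AI instance authors the code, a second instance reviews it,
        # and the findings are fed back. call_llm is a placeholder wrapper.

        def call_llm(prompt: str) -> str:
            """Placeholder: send a prompt to your LLM and return its reply."""
            raise NotImplementedError("wire this to your model API")

        def develop_feature(spec: str) -> str:
            # Author instance: implement the feature from the spec.
            return call_llm(f"Implement this feature:\n{spec}\nReturn only code.")

        def review_code(spec: str, code: str) -> str:
            # Independent reviewer instance: critique code it did not write.
            return call_llm(
                "You are reviewing code you did not write.\n"
                f"Spec:\n{spec}\n\nCode:\n{code}\n\n"
                "List bugs, missing cases, and concrete improvements."
            )

        def develop_with_review(spec: str, rounds: int = 2) -> str:
            code = develop_feature(spec)
            for _ in range(rounds):
                findings = review_code(spec, code)
                # Feed the review back to the author instance for a revision.
                code = call_llm(
                    f"Revise this code to address the review.\n"
                    f"Review:\n{findings}\n\nCode:\n{code}"
                )
            return code  # still subject to human review and tests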
    Code Quality Matters MORE with AI

    "This thing is a monkey, and if you put it in a good codebase, like any developer, it's gonna replicate what it sees. So it behaves much better in the better codebase, so refactor!"

    Lada emphasizes that code quality becomes even more critical when working with AI. Her systems "work silently" and "don't make a lot of noise, because they don't break"—a result of maintaining high standards even when AI makes rapid development tempting. She uses a memorable metaphor: AI is like a monkey that replicates what it sees. Put it in a clean, well-structured codebase, and it produces clean code. Put it in a mess, and it amplifies that mess. This insight transforms refactoring from a nice-to-have into a strategic necessity—good architecture and clean code directly improve AI's ability to contribute effectively.

    Managing Complexity: The Open Question

    "If I just let it do things, it'll just run itself to the wall at crazy speeds, because it's really good at running. So I have to be there managing complexity for it."

    One of the most honest insights Lada shares is the current limitation of AI: complexity management. While AI excels at implementing features quickly, it struggles to manage the growing complexity of systems over time. Lada finds herself acting as the complexity manager, making architectural decisions and keeping the system maintainable while AI handles implementation details. She poses a critical question for the future: "Can it manage complexity? Can we teach it to manage complexity? I don't know the answer to that." This honest assessment reminds us that fundamental software engineering skills—architecture, refactoring, testing—remain as vital as ever.

    Context is Everything: Highway vs. Parking Lot

    "You need to be attuned to the environment. You can go faster or slow, and sometimes going slow is bad, because if you're on a highway, you're gonna get hurt."

    Lada introduces a powerful metaphor for choosing development speed: highway versus parking lot. When learning or experimenting with non-critical systems, you can go fast, not worry about perfection, and leverage AI's speed fully. But when building production systems where reliability matters, different rules apply. The key is matching your development approach to the risk level and context. She emphasizes safety nets: "In one project, we used AI, and we didn't pay attention to the code, as it wasn't important, because at any point, we could actually step back and refactor. We were not unsafe." This perspective helps developers make better judgment calls about when to accelerate and when to slow down.

    The Era of Discovery: We've Only Just Begun

    "We haven't even touched the possibilities of what is there out there right now. We're in the era of gentleman scientists—newbies can make big discoveries right now, because nobody knows what AI really is capable of."

    Perhaps most exciting is Lada's perspective on where we stand in the AI-assisted development journey: we're at the very beginning. Even the creators of these tools are figuring things out as they go. This creates unprecedented opportunities for practitioners at all levels to experiment, discover patterns, and share learnings with the community. Lada has documented her discoveries in an interactive patterns and anti-patterns website, a Calgary Software Crafters presentation, and her Substack blog—contributing to the collective knowledge base that's being built in real-time.

    Resources For Further Study

      • Video of Lada's talk: https://www.youtube.com/watch?v=_LSK2bVf0Lc&t=8654s
      • Lada's Patterns and Anti-patterns website: https://lexler.github.io/augmented-coding-patterns/
      • Lada's Substack: https://lexler.substack.com/
      • AI Assisted Coding episode with Dawid Dahl
      • AI Assisted Coding episode with Llewellyn Falco
      • Claude Flow - orchestration platform

    About Lada Kesseler

    Lada Kesseler is a passionate software developer specializing in the design of scalable, robust software systems. With a focus on best development practices, she builds applications that are easy to maintain, adapt, and support. Lada combines technical expertise with a keen eye for clean architecture and sustainable code, driving innovation in modern software engineering. She is currently exploring how these values translate to AI-assisted development and figuring out what it takes to build reliable software with unreliable tools.

    You can link with Lada Kesseler on LinkedIn.
    --------  
    39:08
  • AI Assisted Coding: Transactional AI Development - Commit, Validate, and Rollback With Sergey Sergyenko
    AI Assisted Coding: Treating AI Like a Junior Engineer - Onboarding Practices for AI Collaboration

    In this special episode, Sergey Sergyenko, CEO of Cybergizer, shares his practical framework for AI-assisted development built on transactional models, Git workflows, and architectural conventions. He explains why treating AI like a junior engineer, keeping commits atomic, and maintaining rollback strategies creates production-ready code rather than just prototypes.

    Vibecoding: An Automation Design Instrument

    "I would define Vibecoding as an automation design instrument. It's not a tool that can deliver an end-to-end solution, but it's like a perfect set of helping hands for a person who knows what they need to do."

    Sergey positions vibecoding clearly: it's not magic, it's an automation design tool. The person using it must know what they need to accomplish—AI provides the helping hands to execute that vision faster. This framing sets expectations appropriately: AI speeds up development significantly, but it's not a silver bullet that works without guidance. The more you practice vibecoding, the better you understand its boundaries. Sergey's definition places vibecoding in the evolution of development tools: from scaffolding to co-pilots to agentic coding to vibecoding. Each step increases automation, but the human architect remains essential for providing direction, context, and validation.

    Pair Programming with the Machine

    "If you treat AI as a junior engineer, it's very easy to adopt it. Ah, okay, maybe we just use the old traditions, how we onboard juniors to the team, and let AI follow this step."

    One of Sergey's most practical insights is treating AI like a junior engineer joining your team. This mental model immediately clarifies roles and expectations. You wouldn't let a junior architect your system or write all your tests—so why let AI? Instead, apply existing onboarding practices: pair programming, code reviews, test-driven development, architectural guidance. This approach leverages Extreme Programming practices that have worked for decades. The junior engineer analogy helps teams understand that AI needs mentorship, clear requirements, and frequent validation. Just as you'd provide a junior with frameworks and conventions to follow, you constrain AI with established architectural patterns and framework conventions like Ruby on Rails.

    The Transactional Model: Atomic Commits and Rollback

    "When you're working with AI, the more atomic commits it delivers, the easier it is for you to guide and navigate it through the process of development."

    Sergey's transactional approach transforms how developers work with AI. Instead of iterating endlessly when something goes wrong, commit frequently with atomic changes, then roll back and restart if validation fails. Each commit should be small, independent, and complete—like a feature flag you can toggle. The commit message includes the prompt sequence used to generate the code and rollback instructions. This approach makes the Git repository the context manager, not just the AI's memory. When you need to guide AI, you can reference specific commits and their context. This mirrors trunk-based development practices where teams commit directly to master with small, verified changes. The cost of rollback stays minimal because changes are atomic, making this strategy far more efficient than trying to fix broken implementations through iteration.
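    As a rough sketch, the transactional loop could be automated like this. The git commands (add, commit, reset) are standard; the helper names, the commit-message layout, and the pytest validation gate are assumptions for illustration, not Sergey's actual tooling.

        # Transactional AI development: atomic commit -> validate -> rollback.
        import subprocess

        def run(*args: str) -> None:
            subprocess.run(args, check=True)

        def commit_ai_change(summary: str, prompts: list[str]) -> None:
            """Commit one atomic AI-generated change, recording the prompts used."""
            message = (
                summary
                + "\n\nPrompts used:\n"
                + "\n".join(f"  {i + 1}. {p}" for i, p in enumerate(prompts))
                + "\n\nRollback: revert this commit; the change is atomic."
            )
            run("git", "add", "-A")
            run("git", "commit", "-m", message)

        def validate() -> bool:
            # Placeholder validation gate, e.g. the project's test suite.
            return subprocess.run(["python", "-m", "pytest", "-q"]).returncode == 0

        def transactional_step(summary: str, prompts: list[str]) -> None:
            commit_ai_change(summary, prompts)
            if not validate():
                # Validation failed: discard the atomic commit and restart,
                # instead of iterating on a broken implementation.
                run("git", "reset", "--hard", "HEAD~1")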
    Context Management: The Weak Point and the Solution

    "Managing context and keeping context is one of the weak points of today's coding agents, therefore we need to be very mindful in how we manage that context for the agent."

    Context management challenges current AI coding tools—they forget, lose the thread, or misinterpret requirements over long sessions. Sergey's solution is embedding context within the commit history itself. Each commit links back to the specific reasoning behind that code: why it was accepted, what iterations it took, and how to undo it if needed. This creates a persistent context trail that survives beyond individual AI sessions. When starting new features, developers can reference previous commits and their context to guide the AI. The transactional model doesn't just provide rollback capability—it creates institutional memory that makes AI progressively more effective as the codebase grows.

    TDD 2.0: Humans Write Tests, AI Writes Code

    "I would never allow AI to write the test. I would do it by myself. Still, it can write the code."

    Sergey is adamant about roles: humans write tests, AI writes implementation code. This inverts traditional TDD slightly—instead of developers writing tests then code, they write tests and AI writes the code to pass them. Tests become executable requirements and prompts. This provides essential guardrails: AI can iterate on implementation until tests pass, but it can't redefine what "passing" means. The tests represent domain knowledge, business requirements, and validation criteria that only humans should control. Sergey envisions multi-agent systems where one agent writes code while another validates with tests, but critically, humans author the original test suite. This TDD 2.0 framework (the subject of a talk Sergey gave at the Global Agile Summit) creates a verification mechanism that prevents the biggest anti-pattern: coding without proper validation.
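    In this setup, the human-authored test is the executable requirement the AI must satisfy. A minimal sketch: the pricing module, the apply_discount function, and the discount rules below are invented for illustration; only the division of labor (human writes the test, AI writes the implementation) is Sergey's.

        # TDD 2.0: a human writes this test first; the AI then writes
        # pricing.apply_discount until it passes. The AI may iterate on the
        # implementation, but it may not redefine what "passing" means.
        import pytest

        from pricing import apply_discount  # to be implemented by the AI

        def test_discount_applies_only_at_or_above_threshold():
            # Business rule (human-owned): 10% off orders of 100.00 or more.
            assert apply_discount(100.00) == 90.00
            assert apply_discount(99.99) == 99.99

        def test_negative_prices_are_rejected():
            with pytest.raises(ValueError):
                apply_discount(-5.00)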
    The Two Cardinal Rules: Architecture and Verification

    "I would never allow AI to invent architecture. Writing AI agentic coding, Vibecoding, whatever coding—without proper verification and properly setting expectations of what you want to get as a result—that's the main mistake."

    Sergey identifies two non-negotiables. First, never let AI invent architecture. Use framework conventions (Rails, etc.) to constrain AI's choices. Leverage existing code generators and scaffolding. Provide explicit architectural guidelines in planning steps. Store iteration-specific instructions where AI can reference them. The framework becomes the guardrail that prevents AI from making structural decisions it's not equipped to make. Second, always verify AI output. Even if you don't want to look at the code, you must validate that it meets requirements. This might be through tests, manual review, or automated checks—but skipping verification is the fundamental mistake. These two rules—human-defined architecture and mandatory verification—separate successful AI-assisted development from technical debt generation.

    Prototype vs. Production: Two Different Workflows

    "When you pair as an architect or a really senior engineer who can implement it by himself, but just wants to save time, you do the pair programming with AI, and the AI kind of ships a draft, a rapid prototype."

    Sergey distinguishes clearly between prototype and production development. For MVPs and rapid prototypes, a senior architect pairs with AI to create drafts quickly—this is where speed matters most. For production code, teams add more iterative testing and polishing after AI generates the initial implementation. The key is being explicit about which mode you're in. The biggest anti-pattern is treating prototype code as production-ready without the necessary validation and hardening steps. When building production systems, Sergey applies the full transactional model: atomic commits, comprehensive tests, architectural constraints, and rollback strategies. For prototypes, speed takes priority, but the architectural knowledge still comes from humans, not AI.

    The Future: AI Literacy as Mandatory

    "Being a software engineer and trying to get a new job, it's gonna be a mandatory requirement for you to understand how to use AI for coding. So it's not enough to just be a good engineer."

    Sergey sees AI-assisted coding literacy becoming as fundamental as Git proficiency. Future engineering jobs will require demonstrating effective AI collaboration, not just traditional coding skills. We're reaching good performance levels with AI models—now the challenge is learning to use them efficiently. This means frameworks and standardized patterns for AI-assisted development will emerge and consolidate. Approaches like AAID, SpecKit, and others represent early attempts to create these patterns. Sergey expects architectural patterns for AI-assisted development to standardize, similar to how design patterns emerged in object-oriented programming. The human remains the bottleneck—for domain knowledge, business requirements, and architectural guidance—but the implementation mechanics shift heavily toward AI collaboration.

    Resources for Practitioners

    "We are reaching a good performance level of AI models, and now we need to guide it to make it impactful. It's a great tool, now we need to understand how to make it impactful."

    Sergey recommends Obie Fernandez's work on "Patterns of Application Development Using AI," particularly valuable for Ruby and Rails developers but applicable broadly. He references Andrej Karpathy's original vibecoding post and emphasizes Extreme Programming practices as foundational. The tools he uses—Cursor and Claude Code—support custom planning steps and context management. But more important than tools is the mindset: we have powerful AI capabilities now, and the focus must shift to efficient usage patterns. This means experimenting with workflows, documenting what works, and sharing patterns with the community. Sergey himself shares case studies on LinkedIn and travels extensively speaking about these approaches, contributing to the collective learning happening in real-time.

    About Sergey Sergyenko

    Sergey is the CEO of Cybergizer, a dynamic software development agency with offices in Vilnius, Lithuania. Specializing in MVPs with zero cash requirements, Cybergizer offers top-tier CTO services and startup teams. Their tech stack includes Ruby, Rails, Elixir, and ReactJS. Sergey was also a featured speaker at the Global Agile Summit, and you can find his talk in your membership area. If you are not a member, don't worry: you can get the 1-month trial and watch the whole conference. You can cancel at any time.

    You can link with Sergey Sergyenko on LinkedIn.
    --------  
    41:03

About Scrum Master Toolbox Podcast: Agile storytelling from the trenches

Every weekday, Certified Scrum Master, Agile Coach, and business consultant Vasco Duarte interviews Scrum Masters and Agile Coaches from all over the world to bring you actionable advice, new tips and tricks, and daily doses of inspiring conversations that help you improve your craft as a Scrum Master. Stay tuned for BONUS episodes, where we interview Agile gurus and other thought leaders in the business space to bring you the Agile Business perspective you need to succeed as a Scrum Master. Some of the topics we discuss include: Agile Business, Agile Strategy, Retrospectives, Team motivation, Sprint Planning, Daily Scrum, Sprint Review, Backlog Refinement, Scaling Scrum, Lean Startup, Test Driven Development (TDD), Behavior Driven Development (BDD), Paper Prototyping, QA in Scrum, the role of agile managers, servant leadership, agile coaching, and more!