Ep159: Why Agentic AI Projects Fail (and How To Avoid It)
Industry leaders from Coder, Scale AI, and Suger reveal why 95% of AI pilots fail, and share the frameworks that actually work to get agents into production.

Topics Include:
- Panel features leaders from Coder, Scale AI, and Suger discussing agentic AI
- MIT report reveals 95% of AI pilots fail to reach production
- Challenges are rarely technical; they're organizational, mindset, and people-driven
- Companies lack the documented tribal knowledge needed to train agents effectively
- Many organizations attempt AI where deterministic, rules-based automation would work better
- "Freestyle agents" concept: some problems shouldn't be solved by agents at all
- Regulated industries struggle when asking agents to handle highly differentiated, complex tasks
- Common mistakes: building one universal agent, or separate agents for every use case
- Post-billing workflows and business-critical operations aren't ready for AI's black box
- VCs pressure companies to define "AI-native," but nobody has clear answers yet
- Scale AI uses five maturity levels; Coder uses three tiers for adoption
- Success metrics span operational readiness, business impact, and technology performance indicators
- Production requires data governance, context, A/B testing, and robust fallback mechanisms
- Even Anthropic uses agents conservatively: research tasks and log triage, with no write access
- Path to 50% success requires agile frameworks, people change, and proper AI talent

Participants:
- Ben Potter - VP of Product, Coder
- Raviteja Yelamanchili - Head of Solutions Engineering, Scale AI
- Jon Yoo - CEO, Suger
- Adam Ross - US Partner Sales Sr. Leader, Amazon Web Services

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon.com/isv/