
MLOps.community

Hosted by Demetrios

Available Episodes

5 of 441
  • Tricks to Fine Tuning // Prithviraj Ammanabrolu // #318
    Tricks to Fine Tuning // MLOps Podcast #318 with Prithviraj Ammanabrolu, Research Scientist at Databricks.
    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    // Abstract
    Prithviraj Ammanabrolu drops by to break down Tao fine-tuning—a clever way to train models without labeled data. Using reinforcement learning and synthetic data, Tao teaches models to evaluate and improve themselves. Raj explains how this works, where it shines (think small models punching above their weight), and why it could be a game-changer for efficient deployment.
    // Bio
    Raj is an Assistant Professor of Computer Science at the University of California, San Diego, leading the PEARLS Lab in the Department of Computer Science and Engineering (CSE). He is also a Research Scientist at Mosaic AI, Databricks, where his team is actively recruiting research scientists and engineers with expertise in reinforcement learning and distributed systems. Previously, he was part of the Mosaic team at the Allen Institute for AI. He earned his PhD in Computer Science from the School of Interactive Computing at Georgia Tech, advised by Professor Mark Riedl in the Entertainment Intelligence Lab.
    // Related Links
    Website: https://www.databricks.com/
    ~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community: https://go.mlops.community/slack
    Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
    Sign up for the next meetup: https://go.mlops.community/register
    MLOps Swag/Merch: https://shop.mlops.community/
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Raj on LinkedIn: /rajammanabrolu
    Timestamps:
    [00:00] Raj's preferred coffee
    [00:36] Takeaways
    [01:02] Tao Naming Decision
    [04:19] No Labels Machine Learning
    [08:09] Tao and TAO breakdown
    [13:20] Reward Model Fine-Tuning
    [18:15] Training vs Inference Compute
    [22:32] Retraining and Model Drift
    [29:06] Prompt Tuning vs Fine-Tuning
    [34:32] Small Model Optimization Strategies
    [37:10] Small Model Potential
    [43:08] Fine-tuning Model Differences
    [46:02] Mistral Model Freedom
    [53:46] Wrap up
    --------  
    54:01
  • Packaging MLOps Tech Neatly for Engineers and Non-engineers // Jukka Remes // #322
    Packaging MLOps Tech Neatly for Engineers and Non-engineers // MLOps Podcast #322 with Jukka Remes, Senior Lecturer (SW dev & AI) and AI Architect at Haaga-Helia UAS, Founder & CTO at 8wave AI.
    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    // Abstract
    AI is already complex—adding the need for deep engineering expertise to use MLOps tools only makes it harder, especially for SMEs and research teams with limited resources. Yet good MLOps is essential for managing experiments, sharing GPU compute, tracking models, and meeting AI regulations. While cloud providers offer MLOps tools, many organizations need flexible, open-source setups that work anywhere, from laptops to supercomputers. Shared setups can boost collaboration, productivity, and compute efficiency. In this session, Jukka introduces an open-source MLOps platform from Silo AI, now packaged for easy deployment across environments. With Git-based workflows and CI/CD automation, users can focus on building models while the platform handles the MLOps.
    // Bio
    Founder & CTO, 8wave AI | Senior Lecturer, Haaga-Helia University of Applied Sciences
    Jukka Remes has 28+ years of experience in software, machine learning, and infrastructure. Starting with SW development in the late 1990s and analytics pipelines for fMRI research in the early 2000s, he has worked across deep learning (Nokia Technologies), GPU and cloud infrastructure (IBM), and AI consulting (Silo AI), where he also led MLOps platform development. Now a senior lecturer at Haaga-Helia, Jukka continues evolving that open-source MLOps platform with partners like the University of Helsinki. He leads R&D on GenAI and AI-enabled software, and is the founder of 8wave AI, which develops AI Business Operations software for next-gen AI enablement, including regulatory compliance of AI.
    // Related Links
    Open-source MLOps Kubernetes platform setup originally developed by Jukka's team at Silo AI, free for any use and installable in any environment from laptops to supercomputing: https://github.com/OSS-MLOPS-PLATFORM/oss-mlops-platform
    Jukka's new company: https://8wave.ai
    ~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community: https://go.mlops.community/slack
    Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
    Sign up for the next meetup: https://go.mlops.community/register
    MLOps Swag/Merch: https://shop.mlops.community/
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Jukka on LinkedIn: /jukka-remes
    Timestamps:
    [00:00] Jukka's preferred coffee
    [00:39] Open-Source Platform Benefits
    [01:56] Silo MLOps Platform Explanation
    [05:18] AI Model Production Processes
    [10:42] AI Platform Use Cases
    [16:54] Reproducibility in Research Models
    [26:51] Pipeline Setup Automation
    [33:26] MLOps Adoption Journey
    [38:31] EU AI Act and Open Source
    [41:38] MLOps and 8wave AI
    [45:46] Optimizing Cross-Stakeholder Collaboration
    [52:15] Open Source ML Platform
    [55:06] Wrap up
    --------  
    55:30
  • Hard Learned Lessons from Over a Decade in AI
    Tecton Founder and CEO Mike Del Balso talks about which ML/AI use cases have become core components generating millions in revenue. Demetrios and Mike go through the maturity curve that predictive machine learning use cases have followed over the past 5 years, and why a feature store is a primary component of an ML stack.
    // Bio
    Mike Del Balso is the CEO and co-founder of Tecton, where he's building the industry's first feature platform for real-time ML. Before Tecton, Mike co-created the Uber Michelangelo ML platform. He was also a product manager at Google, where he managed the core ML systems that power Google's Search Ads business. He studied Applied Science, Electrical & Computer Engineering at the University of Toronto.
    // Related Links
    Website: www.tecton.ai
    ~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    MLOps Swag/Merch: https://shop.mlops.community/
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Mike on LinkedIn: /michaeldelbalso
    Timestamps:
    [00:00] Smarter decisions, less manual work
    [03:52] Data pipelines: pain and fixes
    [08:45] Why Tecton was born
    [11:30] ML use cases shift
    [14:14] Models for big bets
    [18:39] Build or buy drama
    [20:20] Fintech's data playbook
    [23:52] What really needs real-time
    [28:07] Speeding up ML delivery
    [32:09] Valuing ML is tricky
    [35:29] Simplifying ML toolkits
    [37:18] AI copilots in action
    [42:13] AI that fights fraud
    [45:07] Teaming up across coasts
    [46:43] Tecton + Generative AI?
    --------  
    48:42
  • Product Metrics are LLM Evals // Raza Habib CEO of Humanloop // #320
    Raza Habib, CEO of the LLM eval platform Humanloop, talks to us about how to make your AI products more accurate and reliable by shortening the feedback loop of your evals: quickly iterating on prompts and testing what works. He also shares some of his favorite quotes from Dario Amodei of Anthropic.
    // Bio
    Raza is the CEO and Co-founder at Humanloop. He has a PhD in Machine Learning from UCL, was the founding engineer of Monolith AI, and has built speech systems at Google. For the last 4 years, he has led Humanloop and supported leading technology companies such as Duolingo, Vanta, and Gusto to build products with large language models. Raza was featured in the Forbes 30 Under 30 technology list in 2022, and Sifted recently named him one of the most influential Gen AI founders in Europe.
    // Related Links
    Website: https://humanloop.com
    ~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    MLOps Swag/Merch: https://shop.mlops.community/
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Raza on LinkedIn: /humanloop-raza
    Timestamps:
    [00:00] Cracking Open System Failures and How We Fix Them
    [05:44] LLMs in the Wild — First Steps and Growing Pains
    [08:28] Building the Backbone of Tracing and Observability
    [13:02] Tuning the Dials for Peak Model Performance
    [13:51] From Growing Pains to Glowing Gains in AI Systems
    [17:26] Where Prompts Meet Psychology and Code
    [22:40] Why Data Experts Deserve a Seat at the Table
    [24:59] Humanloop and the Art of Configuration Taming
    [28:23] What Actually Matters in Customer-Facing AI
    [33:43] Starting Fresh with Private Models That Deliver
    [34:58] How LLM Agents Are Changing the Way We Talk
    [39:23] The Secret Lives of Prompts Inside Frameworks
    [42:58] Streaming Showdowns — Creativity vs. Convenience
    [46:26] Meet Our Auto-Tuning AI Prototype
    [49:25] Building the Blueprint for Smarter AI
    [51:24] Feedback Isn't Optional — It's Everything
    --------  
    53:06


About MLOps.community

Relaxed conversations around getting AI into production, whatever shape that may come in (agentic, traditional ML, LLMs, vibes, etc.).

