Max Gärber
The promise of agentic AI is being realized in systems like the Service Copilot that Zeiss Microscopy provides for its field service engineers.
The system integrates technical documentation, subject matter expertise, and user-generated insights, orchestrating that knowledge across a suite of AI agents.
While it relies heavily on modern LLM technology, it's the system's solid knowledge graph and metadata foundation that makes it a success.
We talked about:
Max's work "turning information into value" at PANTOPIX, a technical documentation and information processes consultancy based in Germany
a recent client project working with Zeiss to help their field service engineers operate more efficiently
how their prior knowledge management and machine learning work helped them not only cope but thrive when ChatGPT and LLMs arrived
the immediate positive stakeholder feedback they received as they incorporated LLMs into their knowledge architecture
how they extended the iiRDS standard with a custom ontology and taxonomies and integrated topic mappings into their system and workflows
an overview of the system architecture and tooling, which includes both a graph database and a vector store, an ontology and taxonomy management tool, and documentation of best practices
their evolution from a simple prompt engineering and RAG approach to an agentic orchestration architecture
a few of the agents in their architecture:
a planning agent that organizes and orchestrates
a content agent that replaces the original RAG system
a troubleshooting agent that surfaces past solutions
the good problem they experienced of managing enthusiastic user adoption of the new system
the unexpected benefits of the system to the Zeiss sales team
how subject matter expertise, user-generated content, and other insights are captured and used
the crucial role of knowledge management practices, structured content, and semantic technology in building the foundation for an organization's AI capabilities
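The agent lineup listed above, a planner routing work to a content agent and a troubleshooting agent, can be sketched as a minimal orchestration loop. Everything below (function names, routing keywords, the stand-in agents) is an illustrative assumption, not the actual Service Copilot implementation:

```python
def plan(query):
    """Planning agent (illustrative): decide which specialist
    agents should handle this query, in what order."""
    steps = []
    if "error" in query.lower() or "fail" in query.lower():
        steps.append("troubleshooting")  # surface past solutions first
    steps.append("content")  # always consult the documentation
    return steps

# Stand-ins for the real agents: the content agent would run RAG over
# the documentation, the troubleshooting agent would search past tickets.
AGENTS = {
    "content": lambda q: f"docs for: {q}",
    "troubleshooting": lambda q: f"past fixes for: {q}",
}

def run_copilot(query):
    """Orchestrate: the planner picks agents, each contributes a
    partial result, and the results are combined into one answer."""
    return [AGENTS[step](query) for step in plan(query)]
```

A troubleshooting question fans out to both agents, while a plain how-to question goes straight to the content agent.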
Max's bio
Maximilian Gärber is Partner and Principal Technical Consultant at PANTOPIX. Max has been working in the field of technical communication for over 15 years.
As a Partner and Technical Consultant at PANTOPIX, he is responsible for technical consultation and the implementation of projects. In addition to project management, Max handles data modelling and process optimization for product information (migration, publication, translation) and product catalogues. He also leads product development, ensuring that innovative solutions for PANTOPIX's customers are continuously developed and optimized.
Connect with Max online
LinkedIn
PANTOPIX
Resources mentioned in this episode
Industrial Knowledge Graph meets Agentic AI: Service Copilot at ZEISS RMS slide deck
Service Copilot from ZEISS article
Video
Here’s the video version of our conversation:
https://www.youtube.com/embed/ttQOHvvxPyw
Podcast intro transcript
This is the Knowledge Graph Insights podcast, episode number 45. When you're a field service engineer dealing with both the typical challenges of information overload and the need to maintain complex machinery like a high-end Zeiss microscope, you'd really benefit from an intelligent knowledge management system, one that integrates technical documentation, subject matter expertise, and user-generated insights. That's exactly what Max Gärber has built: an agentic AI system grounded in a solid knowledge graph foundation.
Interview transcript
Larry:
Hi everyone. Welcome to episode number 45 of the Knowledge Graph Insights podcast. I am really excited today to welcome to the show Max Gärber. Max did a really interesting presentation at the SEMANTiCS conference in Vienna last fall, and I've been trying to get him on the show ever since. So here he is. I'm excited to have him here. Max is a partner and a technical consultant at PANTOPIX, a consultancy based in Germany. Welcome, Max. Tell the folks a little bit more about what you're doing these days.
Max:
Yeah, thanks Larry. Thanks for having me. Great show. We are mainly concerned with helping our industrial customers structure their content and integrate it from various sources into their delivery systems, wherever it is needed. So it's mainly consultancy on data modeling, on information processes, and on how to get the best out of your data, so to say. Our mission here is literally turning information into value.
Larry:
Oh, I love that. That's a great tagline for a consultancy. Well, the case study you talked about in Vienna was really interesting to me: Zeiss microscopes, in particular their Research Microscopy Solutions arm, which makes these big, expensive, complex machines that require a lot of service. Can you talk a little bit about how you got involved with Zeiss and what you do to help them? In particular, the thing you talked about in Vienna was the system to help their field service engineers. Can you talk a little bit about that project?
Max:
Yeah, exactly. The main objective there was helping the field service engineers get the information in the situation when they need it and in the format they need it. That is essentially the bottom line of it. And it started essentially as a knowledge management project. Zeiss RMS has been really into structured content, adding proper metadata to it so it can be used in various cases. The idea has been to integrate content from various sources, the spare parts system, for example, or the manuals from the technical documentation or ticket information, and get it into one system so there's a single point of access for the service technicians. That way they don't need to spend a lot of time in all of the different systems to get the information about the case they are currently working on, because there's a lot they need to consider when servicing or troubleshooting a microscope.
Max:
And yeah, that project evolved into what is now the Service Copilot. I think it was in early '22 when we started the project. And one part of it was to not only integrate all of that information in one place, but also recommend content to the service technician. So if you were working on a specific case, so the ticket was known, the product was known, you should get a recommendation of articles: "Hey, this is how you install this and that component," for example. So we actually worked a lot on labeling tickets. We had a custom labeling interface and used, let's say, classical machine learning approaches to get those recommendations done.
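The label-based recommendation step Max describes, matching a labeled ticket against labeled documentation articles, might look roughly like this. The label-overlap scoring here is a simplified stand-in for the classical machine learning pipeline, and all names and labels are hypothetical:

```python
def recommend_articles(ticket_labels, articles, top_n=3):
    """Score each article by label overlap with the ticket (Jaccard
    similarity) and return the best matches, a simplified stand-in
    for the classical ML recommender described above."""
    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0

    scored = [(jaccard(ticket_labels, labels), title)
              for title, labels in articles.items()]
    scored.sort(reverse=True)  # highest overlap first
    return [title for score, title in scored[:top_n] if score > 0]
```

Articles sharing the ticket's product and task labels float to the top; unrelated articles score zero and are dropped.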
Max:
And it worked not so well. But that was also the time when GPT, I think it was 3.0 or 3.5, came out. And we were faced with the situation that there was a new technology available that looked like it could do everything we were currently doing, and much more, without much effort. So we really faced the choice there to either stop the project or reinvent ourselves, I would say.
Larry:
I love that juncture. We were talking a little bit before we went on the air about how concerned you were at that point. But then it turns out that the prior work you had done, the knowledge management work and the machine learning skills and workflows you developed, put you, to my mind, from that demo I saw in Vienna, at the leading edge of hybrid AI architectures and agentic AI.
Max:
Yeah, I mean, totally. It evolved really quickly. At the point where we looked into GPT and what language models could do, we asked, "Hey, can we do some quick prototyping research on this and see if we can replace, let's say, the machine learning pipeline that we had with language models?" And it worked really well from the start. So in the beginning, we had 15 service technicians as pilot users who were constantly evaluating the system and giving us feedback: "Hey, that's good, that's not good." And they said immediately, "Well, this is working really well." They tried, of course, at the very beginning to trick the system and ask the hard questions. And if you look at the content they are provided, a service manual has hundreds of pages, and the products they are servicing look quite similar, but they are quite different.
Max:
So there are a lot of variants in what components you can use, how you configure the system, how you buy it. So it's really important that if you have a certain product variant, you don't mix that up. And if you look at the content, it is very similar. The manuals have the same structure, or a very similar structure, and certain chapters or topics are always very similar. How you install electron microscope A is very similar to how you install electron microscope B, but it's the little differences that are really important when you are doing that installation procedure. If you forget one of those steps, you will fail, or you could even do some harm to the system. So it's really important that you not only have similar content, or similarity in, let's say, the retrieval of the content, but that you actually know, "This is content for product A and this is content for product B."
Max:
So with all of the work that went into structuring the content, adding metadata to each of the topics, and connecting the metadata based on what entities are linkable, the RAG system that we implemented then could actually filter out all of the content that was not relevant to the specific question or use case. So the answers were quite good from the beginning.
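The metadata-filtered retrieval Max describes, narrowing topics to the exact product variant before ranking them by similarity, can be sketched like this. The data model and function names are illustrative assumptions, not the actual Zeiss system:

```python
from dataclasses import dataclass, field

@dataclass
class Topic:
    """A documentation topic with product metadata attached."""
    text: str
    product_ids: set            # product variants this topic is valid for
    embedding: list = field(default_factory=list)

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_embedding, product_id, topics, k=3):
    """Filter by product metadata first, then rank by vector similarity.

    Filtering before the similarity search is what keeps near-identical
    topics for product A out of answers about product B."""
    candidates = [t for t in topics if product_id in t.product_ids]
    candidates.sort(key=lambda t: cosine(query_embedding, t.embedding),
                    reverse=True)
    return candidates[:k]
```

Because the product filter runs before ranking, a topic for the wrong variant can never appear in the answer context, no matter how similar its wording is.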
Larry:
Yeah. I want to elaborate a bit on the evolution of your RAG architecture, and for folks who don't...