Building on the work of the open source community, we've integrated powerful inference engines like llama.cpp directly into Docker tooling, including Docker Desktop and, soon, Docker CE. To further streamline AI workflows, we're introducing the OCI Model Specification: a new subset of the OCI Artifact format, purpose-built for distributing LLM artifacts. And of course, we're doing it all as open source. In this session, we'll explore how to use LangChain4j (the open-source Java counterpart to the popular LangChain framework) together with Docker Model Runner to create a local development experience for building GenAI applications. We'll walk through packaging, publishing, and using custom models in the OCI Model format, making them easily accessible via any container registry that supports OCI Artifacts.
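To give a flavor of the developer experience, here is a minimal sketch of the LangChain4j side, not the exact code from the session. It assumes a recent LangChain4j release (1.x, where OpenAiChatModel exposes chat(String)), the langchain4j-open-ai dependency on the classpath, Docker Model Runner's OpenAI-compatible endpoint at http://localhost:12434/engines/v1 (the usual default when host TCP access is enabled), and an ai/llama3.2 model that has already been pulled; adjust the endpoint and model name to your environment.

```java
import dev.langchain4j.model.openai.OpenAiChatModel;

public class ModelRunnerDemo {

    public static void main(String[] args) {
        // Docker Model Runner exposes an OpenAI-compatible API, so the stock
        // LangChain4j OpenAI integration can point at the local endpoint.
        OpenAiChatModel model = OpenAiChatModel.builder()
                // Assumed default host endpoint; adjust to your setup.
                .baseUrl("http://localhost:12434/engines/v1")
                // No real key is needed for local inference, but the
                // client requires a non-empty value.
                .apiKey("not-needed")
                // Example model reference, assumed pulled beforehand
                // with `docker model pull ai/llama3.2`.
                .modelName("ai/llama3.2")
                .build();

        System.out.println(model.chat("Say hello from a local model."));
    }
}
```

Because Model Runner speaks the OpenAI wire protocol, switching between a local model and a hosted one is just a change of baseUrl and modelName, while the models themselves move through standard registry commands such as docker model pull and docker model push.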
Talk Level:
INTERMEDIATE
Bio:
Dorin is a Software Engineer at Docker, where he has worked closely on systems components such as Linux kernel modules, FUSE (virtio-fs), and eBPF programs. He's now diving deep into AI Engineering to help deliver a better development experience for Docker users. Dorin isn't afraid of tackling new challenges; he enjoys going both deep and wide across the stack.