Recent advances in both the techniques and accessibility of large language models (LLMs) have opened up unprecedented opportunities for businesses to streamline operations, cut costs, and boost productivity at scale. Enterprises can also use LLM-powered applications to offer new or improved services and to strengthen customer relationships, for example by providing customer support through AI companions or extracting valuable customer insights with sentiment analysis. In this course you will gain a strong conceptual understanding and practical knowledge of LLM application development by exploring the open-source ecosystem, including pretrained LLMs, so you can start building LLM-based applications quickly.
Learning Objectives
By participating in this workshop, you will:
- Find, pull in, and experiment with models from the Hugging Face model repository using the Transformers API.
- Use encoder models for tasks like sentiment analysis, embedding, question-answering, and zero-shot classification.
- Work with conditioned decoder models to ingest and generate diverse data formats, styles, and modalities.
- Kickstart and guide generative AI solutions that handle natural-language tasks safely, effectively, and at scale.
- Explore the use of LangChain and LangGraph for orchestrating data pipelines and environment-enabled agents.
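As a small preview of the encoder-model objectives above, zero-shot classification takes only a few lines with the Transformers pipeline API. The model choice below (`facebook/bart-large-mnli`) and the example text are illustrative, not mandated by the course:

```python
# Zero-shot classification with an encoder-style NLI model via the
# Transformers pipeline API. The model and inputs are illustrative.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

candidate_labels = ["billing", "technical support", "sales"]
result = classifier(
    "My invoice shows a charge I don't recognize.",
    candidate_labels=candidate_labels,
)

# The pipeline returns the candidate labels ranked by score, highest first.
print(result["labels"][0], result["scores"][0])
```

Because the model reframes classification as natural-language inference, the candidate labels can be swapped freely at inference time with no retraining.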
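The decoder-model objective can be previewed the same way with a small causal language model; `gpt2` here is an illustrative stand-in for whichever models the course actually uses:

```python
# Text generation with a small decoder-only model via the Transformers
# pipeline API. gpt2 is an illustrative stand-in.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "A customer-support assistant should always"
outputs = generator(prompt, max_new_tokens=30, num_return_sequences=1)

# Each output dict contains the prompt followed by the generated continuation.
print(outputs[0]["generated_text"])
```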
Prerequisites & Technologies Used
Topics Covered
We start with basic LLM usage and agent fundamentals, covering structured outputs, retrieval, and knowledge graphs. We then move on to multi-agent concurrency, data flywheels, real-time constraints, and scaling considerations, finishing with a final assessment in which you interface with a scalable multi-tenant agent API.
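"Structured outputs" here typically means constraining the model to emit machine-parseable JSON and validating it before it drives downstream logic. A minimal, library-free sketch of that validation step (the raw response string and field names are made-up examples, not part of the course material):

```python
import json

# Sketch of the structured-output pattern: the model is prompted to answer
# in JSON, and the application parses and type-checks the result before
# acting on it. The raw_response below is a made-up example.
raw_response = '{"intent": "refund_request", "confidence": 0.92}'

REQUIRED_FIELDS = {"intent": str, "confidence": float}

def parse_structured_output(text: str) -> dict:
    """Parse a model response as JSON and check required fields and types."""
    data = json.loads(text)  # raises an error on malformed JSON
    for field, expected_type in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), expected_type):
            raise ValueError(f"missing or mistyped field: {field}")
    return data

parsed = parse_structured_output(raw_response)
print(parsed["intent"])  # → refund_request
```

In practice this validation layer is what lets an agent retry or fall back gracefully when the model's output drifts from the expected schema.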
