Deep Dive LLM Techniques for Engineers and Developers

About This Course

This 3-day, hands-on course, Deep Dive LLM Techniques for Engineers & Developers, equips participants with Large Language Model (LLM) fundamentals and the applied engineering skills to work with LLMs directly, using both cloud-based APIs and local deployment tools. From grasping the core concepts behind the transformer architecture, Generative Pre-trained Transformers (GPTs), and open-source model ecosystems, to exploring cutting-edge techniques such as Prompt Engineering, Retrieval-Augmented Generation (RAG), and lightweight fine-tuning (LoRA/PEFT), you'll gain both theoretical understanding and practical skills. Participants will work hands-on with tools such as Google Colab, LM Studio, Ollama, and Hugging Face, experimenting with both local and cloud-based LLM deployments.

Learning Objectives

Upon completing this course, you will be able to:

  • Understand the foundational concepts of LLMs, including their architecture and training mechanisms.
  • Master prompt engineering to effectively communicate with and utilize LLMs.
  • Use Google Colab to test and evaluate LLMs via APIs and build prototype workflows (a minimal sketch follows this list).
  • Compare and choose cloud platforms (Hugging Face, Together.ai, NVIDIA NIM) for LLM hosting.
  • Analyze trade-offs in model selection: open-weight vs closed, latency vs cost, scale vs quality.
  • Install and operate quantized local models using LM Studio and Ollama.
  • Understand and apply PEFT/LoRA for cost-efficient fine-tuning, and RAG for enhancing factuality.
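
A minimal sketch of the API workflow referenced in the Colab objective above: it calls a hosted model through an OpenAI-compatible chat endpoint from Python, as you would in a Colab notebook. The base URL, model identifier, and key shown here are illustrative placeholders rather than the course's prescribed settings; provider-specific setup (Hugging Face, Together.ai, NVIDIA NIM) is covered step by step in class.

    # Query a hosted LLM through an OpenAI-compatible chat endpoint.
    # Base URL, model id, and key are placeholders; substitute your provider's values.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.together.xyz/v1",  # assumed OpenAI-compatible endpoint
        api_key="YOUR_API_KEY",                  # never hard-code real keys in notebooks
    )

    response = client.chat.completions.create(
        model="meta-llama/Llama-3-8b-chat-hf",   # illustrative model id
        messages=[
            {"role": "system", "content": "You are a concise technical assistant."},
            {"role": "user", "content": "Explain what an attention head does."},
        ],
        temperature=0.2,
    )
    print(response.choices[0].message.content)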

Prerequisites

  • Attendees must have a laptop with a Wi-Fi connection.
  • Basic Python Programming: Understanding of Python is essential, though the course includes a primer for those with less experience.
  • Fundamental AI Concepts: A general grasp of AI and machine learning concepts is helpful, though not mandatory as foundational lessons are included.
  • Familiarity with APIs and Web Services: Helpful, though not required.
  • Interest in AI Development: A keen interest in learning and applying AI technologies, particularly LLMs.

Target Audience

  • Developers and Engineers looking to specialize in AI, particularly with LLMs.
  • Practitioners aiming to expand their toolkit with LLM development skills for enhanced data analysis and AI product development.
  • Entrepreneurs and innovators seeking to integrate AI into their business models or products.

Training Outline

  1. Introduction to Machine Learning, Generative AI and LLMs
  2. Exploring LLM Use Cases: Chatbot Strengths & Limitations
  3. Mastering Prompt Engineering
  4. Introduction to Google Colab and API Access
  5. Hosting Platforms and Deployment Criteria
  6. Hosting Your Own LLMs with LM Studio and Ollama (see the first sketch after this outline)
  7. RAG Fundamentals with LangChain and LlamaIndex (see the RAG sketch after this outline)
  8. Embeddings & Vector Databases for RAG
  9. Fine-Tuning and Advanced LLM Techniques (PEFT, LoRA)
  10. Lab: Implementing RAG with NVIDIA Endpoints
  11. Ethics & Future of AI
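
As a preview of outline topic 6, the sketch below chats with a locally hosted model through Ollama's REST API. It assumes Ollama is already running on its default local port and that a model (named "llama3" here purely for illustration) has been pulled beforehand.

    # Chat with a locally hosted model via Ollama's REST API (default port 11434).
    # The model name is illustrative; pull a model first with `ollama pull <name>`.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "llama3",
            "messages": [{"role": "user", "content": "What does quantization do to an LLM?"}],
            "stream": False,  # ask for one complete reply instead of a token stream
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["message"]["content"])

And as a preview of topics 7, 8 and 10, the next sketch strips the RAG pattern down to its core: embed documents, retrieve the chunk most similar to the question, and prepend it to the prompt. The course builds this properly with LangChain/LlamaIndex and a vector database; the embedding model and documents here are illustrative assumptions chosen to keep the mechanics visible.

    # Minimal retrieve-then-generate (RAG) loop using sentence-transformers embeddings.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    docs = [
        "LoRA adds small trainable low-rank matrices to a frozen base model.",
        "Ollama serves quantized open-weight models on local hardware.",
        "RAG retrieves relevant text and adds it to the prompt before generation.",
    ]

    embedder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative embedding model
    doc_vecs = embedder.encode(docs, normalize_embeddings=True)

    question = "How does retrieval-augmented generation improve factuality?"
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]

    # With normalized vectors, cosine similarity is just a dot product.
    scores = doc_vecs @ q_vec
    best = docs[int(np.argmax(scores))]

    prompt = f"Answer using the context below.\n\nContext: {best}\n\nQuestion: {question}"
    print(prompt)  # send this augmented prompt to any chat endpoint, e.g. the ones above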