LLM Engineering: Master AI, Large Language Models & Agents

Categories: Artificial Intelligence, Data Science & AI
Course preview
$9.99 (was $49.99, save 80%)
What's included
Certificate of completion

Create an account to start learning

Day 1 - Cold Open Jumping Right into LLM Engineering (Free Preview)
00:37
Day 1 - Setting Up Ollama for Local LLM Deployment on Windows and Mac
4:15
Day 1 - Unleashing the Power of Local LLMs Build Spanish Tutor Using Ollama
4:07
Day 1 - LLM Engineering Roadmap From Beginner to Master in 8 Weeks
5:45
Day 1 - Building LLM Applications Chatbots, RAG, and Agentic AI Projects
1:49
Day 1 - From Wall Street to AI Ed Donner's Path to Becoming an LLM Engineer
2:07
Day 1 - Setting Up Your LLM Development Environment Tools and Best Practices
6:11
Day 1 - Mac Setup Guide Jupyter Lab and Conda for LLM Projects
6:54
Day 1 - Setting Up Anaconda for LLM Engineering Windows Installation Guide
11:38
Day 1 - Alternative Python Setup for LLM Projects Virtualenv vs. Anaconda Guide
6:32
Day 1 - Setting Up OpenAI API for LLM Development Keys, Pricing & Best Practices
7:14
Day 1 - Creating a .env File for Storing API Keys Safely
5:00
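A minimal sketch of the idea behind this lesson: keep API keys in a `.env` file (excluded from version control) and load them into the environment. The course uses the python-dotenv package; this stdlib-only parser, and the placeholder key value, are illustrative assumptions.

```python
import os

def load_env(path=".env"):
    """Parse simple KEY=VALUE lines from a .env file into os.environ."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blank lines and comments
            key, _, value = line.partition("=")
            # Note: python-dotenv skips keys that already exist by default;
            # this sketch simply overwrites for clarity.
            os.environ[key.strip()] = value.strip()

# Example: write a throwaway .env, load it, and read the key back
with open(".env", "w") as f:
    f.write("# keep this file out of version control\n")
    f.write("OPENAI_API_KEY=sk-placeholder\n")
load_env()
print(os.environ["OPENAI_API_KEY"])  # -> sk-placeholder
```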
Day 1 - Instant Gratification Project Creating an AI-Powered Web Page Summarizer
9:31
Day 1 - Implementing Text Summarization Using OpenAI's GPT-4 and Beautiful Soup
13:36
Day 1 - Wrapping Up Day 1 Key Takeaways and Next Steps in LLM Engineering
2:48
Day 2 - Mastering LLM Engineering Key Skills and Tools for AI Development
6:52
Day 2 - Understanding Frontier Models GPT, Claude, and Open Source LLMs
7:42
Day 2 - How to Use Ollama for Local LLM Inference Python Tutorial with Jupyter
6:55
Day 2 - Hands-On LLM Task Comparing OpenAI and Ollama for Text Summarization
00:36
Day 3 - Frontier AI Models Comparing GPT-4, Claude, Gemini, and LLAMA
7:38
Day 3 - Comparing Leading LLMs Strengths and Business Applications
1:49
Day 3 - Exploring GPT-4o vs o1-preview Key Differences in Performance
3:54
Day 3 - Creativity and Coding Leveraging GPT-4o’s Canvas Feature
6:31
Day 3 - Claude 3.5’s Alignment and Artifact Creation A Deep Dive
5:26
Day 3 - AI Model Comparison Gemini vs Cohere for Whimsical and Analytical Tasks
4:46
Day 3 - Evaluating Meta AI and Perplexity Nuances of Model Outputs
4:36
Day 3 - LLM Leadership Challenge Evaluating AI Models Through Creative Prompts
5:41
Day 4 - Revealing the Leadership Winner A Fun LLM Challenge
7:50
Day 4 - Exploring the Journey of AI From Early Models to Transformers
3:02
Day 4 - Understanding LLM Parameters From GPT-1 to Trillion-Weight Models
5:01
Day 4 - GPT Tokenization Explained How Large Language Models Process Text Input
10:41
Day 4 - How Context Windows Impact AI Language Models Token Limits Explained
3:14
Day 4 - Navigating AI Model Costs API Pricing vs. Chat Interface Subscriptions
2:48
Day 4 - Comparing LLM Context Windows GPT-4 vs Claude vs Gemini 1.5 Flash
5:22
Day 4 - Wrapping Up Day 4 Key Takeaways and Practical Insights
2:40
Day 5 - Building AI-Powered Marketing Brochures with OpenAI API and Python
3:08
Day 5 - JupyterLab Tutorial Web Scraping for AI-Powered Company Brochures
6:20
Day 5 - Structured Outputs in LLMs Optimizing JSON Responses for AI Projects
9:20
Day 5 - Creating and Formatting Responses for Brochure Content
8:39
Day 5 - Final Adjustments Optimizing Markdown and Streaming in JupyterLab
9:50
Day 5 - Mastering Multi-Shot Prompting Enhancing LLM Reliability in AI Projects
4:22
Day 5 - Assignment Developing Your Customized LLM-Based Tutor
4:07
Day 5 - Wrapping Up Week 1 Achievements and Next Steps
2:57

Day 1 - Mastering Multiple AI APIs OpenAI, Claude, and Gemini for LLM Engineers
4:50
Day 1 - Streaming AI Responses Implementing Real-Time LLM Output in Python
16:14
Day 1 - How to Create Adversarial AI Conversations Using OpenAI and Claude APIs
11:18
Day 1 - AI Tools Exploring Transformers & Frontier LLMs for Developers
1:20
Day 2 - Building AI UIs with Gradio Quick Prototyping for LLM Engineers
3:03
Day 2 - Gradio Tutorial Create Interactive AI Interfaces for OpenAI GPT Models
11:41
Day 2 - Implementing Streaming Responses with GPT and Claude in Gradio UI
5:58
Day 2 - Building a Multi-Model AI Chat Interface with Gradio GPT vs Claude
8:22
Day 2 - Building Advanced AI UIs From OpenAI API to Chat Interfaces with Gradio
1:23
Day 3 - Building AI Chatbots Mastering Gradio for Customer Support Assistants
4:43
Day 3 - Build a Conversational AI Chatbot with OpenAI & Gradio Step-by-Step
12:36
Day 3 - Enhancing Chatbots with Multi-Shot Prompting and Context Enrichment
10:12
Day 3 - Mastering AI Tools Empowering LLMs to Run Code on Your Machine
2:35
Day 4 - Using AI Tools with LLMs Enhancing Large Language Model Capabilities
7:27
Day 4 - Building an AI Airline Assistant Implementing Tools with OpenAI GPT-4
6:10
Day 4 - How to Equip LLMs with Custom Tools OpenAI Function Calling Tutorial
11:36
Day 4 - Mastering AI Tools Building Advanced LLM-Powered Assistants with APIs
1:13
Day 5 - Multimodal AI Assistants Integrating Image and Sound Generation
5:01
Day 5 - Multimodal AI Integrating DALL-E 3 Image Generation in JupyterLab
8:27
Day 5 - Build a Multimodal AI Agent Integrating Audio & Image Tools
6:53
Day 5 - How to Build a Multimodal AI Assistant Integrating Tools and Agents
4:32

Day 1 - Hugging Face Tutorial Exploring Open-Source AI Models and Datasets
10:38
Day 1 - Exploring HuggingFace Hub Models, Datasets & Spaces for AI Developers
12:39
Day 1 - Intro to Google Colab Cloud Jupyter Notebooks for Machine Learning
3:16
Day 1 - Hugging Face Integration with Google Colab Secrets and API Keys Setup
10:35
Day 1 - Mastering Google Colab Run Open-Source AI Models with Hugging Face
1:58
Day 2 - Hugging Face Transformers Using Pipelines for AI Tasks in Python
4:07
Day 2 - Hugging Face Pipelines Simplifying AI Tasks with Transformers Library
13:11
Day 2 - Mastering HuggingFace Pipelines Efficient AI Inference for ML Tasks
1:41
Day 3 - Exploring Tokenizers in Open-Source AI Llama, Phi-2, Qwen, & Starcoder
5:12
Day 3 - Tokenization Techniques in AI Using AutoTokenizer with LLAMA 3.1 Model
11:00
Day 3 - Comparing Tokenizers Llama, PHI-3, and QWEN2 for Open-Source AI Models
11:35
Day 3 - Hugging Face Tokenizers Preparing for Advanced AI Text Generation
00:51
Day 4 - Hugging Face Model Class Running Inference on Open-Source AI Models
3:38
Day 4 - Hugging Face Transformers Loading & Quantizing LLMs with Bits & Bytes
14:52
Day 4 - Hugging Face Transformers Generating Jokes with Open-Source AI Models
10:44
Day 4 - Mastering Hugging Face Transformers Models, Pipelines, and Tokenizers
1:31
Day 5 - Combining Frontier & Open-Source Models for Audio-to-Text Summarization
3:25
Day 5 - Using Hugging Face & OpenAI for AI-Powered Meeting Minutes Generation
12:56
Day 5 - Build a Synthetic Test Data Generator Open-Source AI Model for Business
4:49

Day 1 - How to Choose the Right LLM Comparing Open and Closed Source Models
12:14
Day 1 - Chinchilla Scaling Law Optimizing LLM Parameters and Training Data Size
5:45
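The Chinchilla rule of thumb covered here (compute-optimal training uses roughly 20 training tokens per model parameter) reduces to one line of arithmetic; the 8B-parameter example below is an assumed illustration, not a figure from the course.

```python
def chinchilla_optimal_tokens(n_params, tokens_per_param=20):
    """Chinchilla rule of thumb: train on ~20 tokens per model parameter."""
    return n_params * tokens_per_param

# An 8-billion-parameter model would ideally see ~160 billion training tokens
print(f"{chinchilla_optimal_tokens(8e9):.1e}")  # -> 1.6e+11
```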
Day 1 - Limitations of LLM Benchmarks Overfitting and Training Data Leakage
7:26
Day 1 - Evaluating Large Language Models 6 Next-Level Benchmarks Unveiled
6:29
Day 1 - HuggingFace OpenLLM Leaderboard Comparing Open-Source Language Models
9:33
Day 1 - Master LLM Leaderboards Comparing Open Source and Closed Source Models
1:48
Day 2 - Comparing LLMs Top 6 Leaderboards for Evaluating Language Models
5:45
Day 2 - Specialized LLM Leaderboards Finding the Best Model for Your Use Case
8:05
Day 2 - LLAMA vs GPT-4 Benchmarking Large Language Models for Code Generation
13:05
Day 2 - Human-Rated Language Models Understanding the LMSYS Chatbot Arena
7:21
Day 2 - Commercial Applications of Large Language Models From Law to Education
5:44
Day 2 - Comparing Frontier and Open-Source LLMs for Code Conversion Projects
3:00
Day 3 - Leveraging Frontier Models for High-Performance Code Generation in C++
2:37
Day 3 - Comparing Top LLMs for Code Generation GPT-4 vs Claude 3.5 Sonnet
8:24
Day 3 - Optimizing Python Code with Large Language Models GPT-4 vs Claude 3.5
7:50
Day 3 - Code Generation Pitfalls When Large Language Models Produce Errors
5:52
Day 3 - Blazing Fast Code Generation How Claude Outperforms Python by 13,000x
4:32
Day 3 - Building a Gradio UI for Code Generation with Large Language Models
4:06
Day 3 - Optimizing C++ Code Generation Comparing GPT and Claude Performance
7:22
Day 3 - Comparing GPT-4 and Claude for Code Generation Performance Benchmarks
4:44
Day 4 - Open Source LLMs for Code Generation Hugging Face Endpoints Explored
1:59
Day 4 - How to Use HuggingFace Inference Endpoints for Code Generation Models
8:20
Day 4 - Integrating Open-Source Models with Frontier LLMs for Code Generation
7:01
Day 4 - Comparing Code Generation GPT-4, Claude, and CodeQwen LLMs
10:33
Day 4 - Mastering Code Generation with LLMs Techniques and Model Selection
1:48
Day 5 - Evaluating LLM Performance Model-Centric vs Business-Centric Metrics
9:11
Day 5 - Mastering LLM Code Generation Advanced Challenges for Python Developers
7:51

Day 1 - RAG Fundamentals Leveraging External Data to Improve LLM Responses
5:03
Day 1 - Building a DIY RAG System Implementing Retrieval-Augmented Generation
13:09
Day 1 - Understanding Vector Embeddings The Key to RAG and LLM Retrieval
9:30
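As background to this lesson: retrieval in a RAG system boils down to finding the stored vectors closest to the query vector, typically by cosine similarity. A toy pure-Python sketch (the 3-dimensional vectors are illustrative; real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, doc_vecs, k=1):
    """Return indices of the k docs whose embeddings are most similar to the query."""
    ranked = sorted(range(len(doc_vecs)),
                    key=lambda i: cosine_similarity(query_vec, doc_vecs[i]),
                    reverse=True)
    return ranked[:k]

docs = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.9, 0.1, 0.0]]
print(retrieve([1.0, 0.05, 0.0], docs, k=2))  # -> [0, 2]: docs 0 and 2 are closest
```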
Day 2 - Unveiling LangChain Simplify RAG Implementation for LLM Applications
3:45
Day 2 - LangChain Text Splitter Tutorial Optimizing Chunks for RAG Systems
10:59
Day 2 - Preparing for Vector Databases OpenAI Embeddings and Chroma in RAG
1:14
Day 3 - Mastering Vector Embeddings OpenAI and Chroma for LLM Engineering
4:14
Day 3 - Visualizing Embeddings Exploring Multi-Dimensional Space with t-SNE
17:41
Day 3 - Building RAG Pipelines From Vectors to Embeddings with LangChain
2:51
Day 4 - Implementing RAG Pipeline LLM, Retriever, and Memory in LangChain
4:37
Day 4 - Mastering Retrieval-Augmented Generation Hands-On LLM Integration
10:38
Day 4 - Master RAG Pipeline Building Efficient RAG Systems
1:57
Day 5 - Optimizing RAG Systems Troubleshooting and Fixing Common Problems
1:02
Day 5 - Switching Vector Stores FAISS vs Chroma in LangChain RAG Pipelines
8:40
Day 5 - Demystifying LangChain Behind-the-Scenes of RAG Pipeline Construction
3:15
Day 5 - Debugging RAG Optimizing Context Retrieval in LangChain
10:28
Day 5 - Build Your Personal AI Knowledge Worker RAG for Productivity Boost
7:07

Day 1 - Fine-Tuning Large Language Models From Inference to Training
10:17
Day 1 - Finding and Crafting Datasets for LLM Fine-Tuning Sources & Techniques
5:59
Day 1 - Data Curation Techniques for Fine-Tuning LLMs on Product Descriptions
12:35
Day 1 - Optimizing Training Data Scrubbing Techniques for LLM Fine-Tuning
15:46
Day 1 - Evaluating LLM Performance Model-Centric vs Business-Centric Metrics
8:31
Day 2 - LLM Deployment Pipeline From Business Problem to Production Solution
7:42
Day 2 - Prompting, RAG, and Fine-Tuning When to Use Each Approach
9:55
Day 2 - Productionizing LLMs Best Practices for Deploying AI Models at Scale
1:49
Day 2 - Optimizing Large Datasets for Model Training Data Curation Strategies
9:50
Day 2 - How to Create a Balanced Dataset for LLM Training Curation Techniques
11:27
Day 2 - Finalizing Dataset Curation Analyzing Price-Description Correlations
12:03
Day 2 - How to Create and Upload a High-Quality Dataset on HuggingFace
1:39
Day 3 - Feature Engineering and Bag of Words Building ML Baselines for NLP
7:10
Day 3 - Baseline Models in ML Implementing Simple Prediction Functions
17:18
Day 3 - Feature Engineering Techniques for Amazon Product Price Prediction Models
9:08
Day 3 - Optimizing LLM Performance Advanced Feature Engineering Strategies
7:21
Day 3 - Linear Regression for LLM Fine-Tuning Baseline Model Comparison
5:44
Day 3 - Bag of Words NLP Implementing Count Vectorizer for Text Analysis in ML
7:48
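The bag-of-words baseline from this lesson can be sketched in a few lines. The course uses scikit-learn's CountVectorizer; this hand-rolled version just shows the idea of mapping each word to a column and counting occurrences.

```python
from collections import Counter

def fit_vocabulary(texts):
    """Build a sorted word -> column-index vocabulary from the corpus."""
    vocab = sorted({word for text in texts for word in text.lower().split()})
    return {word: i for i, word in enumerate(vocab)}

def transform(texts, vocab):
    """Turn each text into a vector of word counts (its bag of words)."""
    rows = []
    for text in texts:
        counts = Counter(text.lower().split())
        rows.append([counts.get(word, 0) for word in vocab])
    return rows

corpus = ["the cat sat", "the cat sat on the mat"]
vocab = fit_vocabulary(corpus)
print(vocab)                      # -> {'cat': 0, 'mat': 1, 'on': 2, 'sat': 3, 'the': 4}
print(transform(corpus, vocab))   # -> [[1, 0, 0, 1, 1], [1, 1, 1, 1, 2]]
```

These count vectors then feed a simple model such as linear regression, giving the baseline the later LLM results are compared against.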
Day 3 - Support Vector Regression vs Random Forest Machine Learning Face-Off
6:03
Day 3 - Comparing Traditional ML Models From Random to Random Forest
4:13
Day 4 - Evaluating Frontier Models Comparing Performance to Baseline Frameworks
1:46
Day 4 - Human vs AI Evaluating Price Prediction Performance in Frontier Models
11:17
Day 4 - GPT-4o Mini Frontier AI Model Evaluation for Price Estimation Tasks
10:59
Day 4 - Comparing GPT-4 and Claude Model Performance in Price Prediction Tasks
9:00
Day 4 - Frontier AI Capabilities LLMs Outperforming Traditional ML Models
3:57
Day 5 - Fine-Tuning LLMs with OpenAI Preparing Data, Training, and Evaluation
6:06
Day 5 - How to Prepare JSONL Files for Fine-Tuning Large Language Models (LLMs)
11:06
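For context on this lesson: OpenAI's chat fine-tuning format is a JSONL file with one JSON object per line, each containing a "messages" list. A sketch (the price-estimation content and file name are invented examples in the spirit of the course project):

```python
import json

def to_jsonl_line(system, user, assistant):
    """One training example in the chat-format JSONL used for GPT fine-tuning."""
    record = {
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
            {"role": "assistant", "content": assistant},
        ]
    }
    return json.dumps(record)

examples = [("You estimate prices.", "How much is this toaster?", "Around $25")]
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(to_jsonl_line(*ex) + "\n")  # each line is standalone JSON
```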
Day 5 - Step-by-Step Guide Launching GPT Fine-Tuning Jobs with OpenAI API
8:18
Day 5 - Fine-Tuning LLMs Track Training Loss & Progress with Weights & Biases
4:22
Day 5 - Evaluating Fine-Tuned LLMs Metrics Analyzing Training & Validation Loss
7:18
Day 5 - LLM Fine-Tuning Challenges When Model Performance Doesn't Improve
1:52
Day 5 - Fine-Tuning Frontier LLMs Challenges & Best Practices for Optimization
11:07

Day 1 - Mastering Parameter-Efficient Fine-Tuning LoRA, QLoRA & Hyperparameters
3:55
Day 1 - Introduction to LoRA Adaptors Low-Rank Adaptation Explained
7:19
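The arithmetic behind LoRA's efficiency is worth seeing once: instead of training a full d_out x d_in weight update, LoRA trains two small matrices, A (r x d_in) and B (d_out x r). A sketch, assuming a 4096-square projection layer typical of Llama-class models (the sizes are illustrative):

```python
def full_params(d_in, d_out):
    """Parameters in the full weight matrix."""
    return d_in * d_out

def lora_params(d_in, d_out, r):
    """Trainable parameters for one LoRA adapter: A is (r x d_in), B is (d_out x r)."""
    return r * d_in + d_out * r

full = full_params(4096, 4096)
lora = lora_params(4096, 4096, r=8)
print(full, lora, f"{100 * lora / full:.2f}%")  # -> 16777216 65536 0.39%
```

With rank r=8 the adapter trains well under 1% of the layer's parameters, which is why LoRA fine-tuning fits on modest GPUs.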
Day 1 - QLoRA Quantization for Efficient Fine-Tuning of Large Language Models
5:17
Day 1 - Optimizing LLMs R, Alpha, and Target Modules in QLoRA Fine-Tuning
5:14
Day 1 - Parameter-Efficient Fine-Tuning PEFT for LLMs with Hugging Face
8:40
Day 1 - How to Quantize LLMs Reducing Model Size with 8-bit Precision
3:35
Day 1 - Double Quantization & NF4 Advanced Techniques for 4-Bit LLM Optimization
3:41
Day 1 - Exploring PEFT Models The Role of LoRA Adapters in LLM Fine-Tuning
7:10
Day 1 - Model Size Summary Comparing Quantized and Fine-Tuned Models
1:52
Day 2 - How to Choose the Best Base Model for Fine-Tuning Large Language Models
6:25
Day 2 - Selecting the Best Base Model Analyzing HuggingFace's LLM Leaderboard
6:49
Day 2 - Exploring Tokenizers Comparing LLAMA, QWEN, and Other LLM Models
5:01
Day 2 - Optimizing LLM Performance Loading and Tokenizing Llama 3.1 Base Model
9:20
Day 2 - Quantization Impact on LLMs Analyzing Performance Metrics and Errors
5:42
Day 2 - Comparing LLMs GPT-4 vs LLAMA 3.1 in Parameter-Efficient Tuning
3:17
Day 3 - QLoRA Hyperparameters Mastering Fine-Tuning for Large Language Models
8:57
Day 3 - Understanding Epochs and Batch Sizes in Model Training
4:27
Day 3 - Learning Rate, Gradient Accumulation, and Optimizers Explained
5:24
Day 3 - Setting Up the Training Process for Fine-Tuning
17:22
Day 3 - Configuring SFTTrainer for 4-Bit Quantized LoRA Fine-Tuning of LLMs
9:42
Day 3 - Fine-Tuning LLMs Launching the Training Process with QLoRA
3:42
Day 3 - Monitoring and Managing Training with Weights & Biases
2:09
Day 4 - Keeping Training Costs Low Efficient Fine-Tuning Strategies
1:50
Day 4 - Efficient Fine-Tuning Using Smaller Datasets for QLoRA Training
6:00
Day 4 - Visualizing LLM Fine-Tuning Progress with Weights and Biases Charts
14:52
Day 4 - Advanced Weights & Biases Tools and Model Saving on Hugging Face
11:55
Day 4 - End-to-End LLM Fine-Tuning From Problem Definition to Trained Model
2:04
Day 5 - The Four Steps in LLM Training From Forward Pass to Optimization
5:38
Day 5 - QLoRA Training Process Forward Pass, Backward Pass and Loss Calculation
6:52
Day 5 - Understanding Softmax and Cross-Entropy Loss in Model Training
7:13
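The softmax and cross-entropy loss covered in this lesson can be sketched directly in a few lines of pure Python (the three-element logit vector is a toy stand-in for a real vocabulary-sized output):

```python
import math

def softmax(logits):
    """Numerically stable softmax: subtract the max before exponentiating."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(logits, target_index):
    """Negative log-probability the model assigns to the correct next token."""
    probs = softmax(logits)
    return -math.log(probs[target_index])

logits = [2.0, 1.0, 0.1]
print([round(p, 3) for p in softmax(logits)])  # -> [0.659, 0.242, 0.099]
print(round(cross_entropy(logits, 0), 3))      # low loss: the right token is most likely
```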
Day 5 - Monitoring Fine-Tuning Weights & Biases for LLM Training Analysis
5:31
Day 5 - Revisiting the Podium Comparing Model Performance Metrics
3:08
Day 5 - Evaluation of our Proprietary, Fine-Tuned LLM against Business Metrics
12:58
Day 5 - Visualization of Results Did We Beat GPT-4
2:49
Day 5 - Hyperparameter Tuning for LLMs Improving Model Accuracy with PEFT
4:34

Day 1 - From Fine-Tuning to Multi-Agent Systems Next-Level LLM Engineering
3:21
Day 1 - Building a Multi-Agent AI Architecture for Automated Deal Finding Systems
8:07
Day 1 - Unveiling Modal Deploying Serverless Models to the Cloud
4:21
Day 1 - LLAMA on the Cloud Running Large Models Efficiently
15:39
Day 1 - Building a Serverless AI Pricing API Step-by-Step Guide with Modal
18:39
Day 1 - Multiple Production Models Ahead Preparing for Advanced RAG Solutions
2:48
Day 2 - Implementing Agentic Workflows Frontier Models and Vector Stores in RAG
5:20
Day 2 - Building a Massive Chroma Vector Datastore for Advanced RAG Pipelines
9:23
Day 2 - Visualizing Vector Spaces Advanced RAG Techniques for Data Exploration
4:38
Day 2 - 3D Visualization Techniques for RAG Exploring Vector Embeddings
2:14
Day 2 - Finding Similar Products Building a RAG Pipeline without LangChain
7:38
Day 2 - RAG Pipeline Implementation Enhancing LLMs with Retrieval Techniques
7:07
Day 2 - Random Forest Regression Using Transformers & ML for Price Prediction
7:06
Day 2 - Building an Ensemble Model Combining LLM, RAG, and Random Forest
9:41
Day 2 - Wrap-Up Finalizing Multi-Agent Systems and RAG Integration
1:54
Day 3 - Enhancing AI Agents with Structured Outputs Pydantic & BaseModel Guide
4:12
Day 3 - Scraping RSS Feeds Building an AI-Powered Deal Selection System
11:48
Day 3 - Structured Outputs in AI Implementing GPT-4 for Detailed Deal Selection
10:27
Day 3 - Optimizing AI Workflows Refining Prompts for Accurate Price Recognition
2:41
Day 3 - Mastering Autonomous Agents Designing Multi-Agent AI Workflows
1:44
Day 4 - The 5 Hallmarks of Agentic AI Autonomy, Planning, and Memory
10:50
Day 4 - Building an Agentic AI System Integrating Pushover for Notifications
9:09
Day 4 - Implementing Agentic AI Creating a Planning Agent for Automated Workflows
7:44
Day 4 - Building an Agent Framework Connecting LLMs and Python Code
14:14
Day 4 - Completing Agentic Workflows Scaling for Business Applications
4:49
Day 5 - Autonomous AI Agents Building Intelligent Systems Without Human Input
2:51
Day 5 - AI Agents with Gradio Advanced UI Techniques for Autonomous Systems
10:00
Day 5 - Finalizing the Gradio UI for Our Agentic AI Solution
7:31
Day 5 - Enhancing AI Agent UI Gradio Integration for Real-Time Log Visualization
4:17
Day 5 - Analyzing Results Monitoring Agent Framework Performance
2:16
Day 5 - AI Project Retrospective 8-Week Journey to Becoming an LLM Engineer
6:01