Course Highlights

  • Introduces students and young developers to the fundamentals of Open-Source Large Language Models (LLMs).
  • Covers key concepts including Transformer Architecture, Tokenization, Prompting, Ethical AI, and Agentic AI.
  • Provides a clear understanding of how open-source models like LLaMA, Mistral, and Falcon are developed and fine-tuned.
  • Emphasizes responsible and transparent AI development through model documentation and ethical practices.
  • Serves as a 15-hour self-paced foundational programme under the YuvAI Initiative for Skilling and Capacity Building, by Meta, IndiaAI, and AICTE, and implemented by 1M1B.
  • Prepares learners for the 45-hour Advanced Applied LLM Module with project-based applications.
Course Details

Learning Objectives

What will you learn in the LLM for Young Developers: Foundational Course?

By the end of this course, learners will be able to:

  • Understand the core concepts, evolution, and ecosystem of open-source Large Language Models (LLMs).
  • Explain the transformer architecture, including attention mechanisms, positional encoding, and feed-forward layers.
  • Apply tokenization and classical Natural Language Processing (NLP) principles to understand how language is processed and represented by LLMs.
  • Utilise prompting techniques such as zero-shot, few-shot, and Chain-of-Thought (CoT) prompting to guide AI responses effectively.
  • Identify ethical risks, biases, and limitations of LLMs, and understand frameworks for responsible AI use.
  • Recognise the fundamentals of Agentic AI and the significance of model documentation for safe and transparent AI development.
Reasons to enrol

Why should you take the LLM for Young Developers: Foundational Course?

  • Learn how Large Language Models like LLaMA, Mistral, and Falcon power modern generative AI tools.
  • Gain clarity on how LLMs are built, fine-tuned, and deployed within the open-source ecosystem.
  • Build a strong foundation in transformers, tokenization, prompting, and ethical AI — key skills for the future of AI development.
  • Understand the emerging domain of Agentic AI, where LLMs act autonomously for reasoning and task completion.
  • Prepare for the 45-hour Advanced Applied Module, which focuses on practical, project-based implementation of LLMs.
  • Earn a joint certificate under the YuvAI Initiative for Skilling and Capacity Building — by Meta, IndiaAI, and AICTE, implemented by 1M1B — recognized across academia and industry.
Ideal Participants

Who should take the LLM for Young Developers: Foundational Course?

  • Undergraduate and postgraduate students in Computer Science, Engineering, Data Science, or AI-related fields
  • Early-career developers looking to strengthen their understanding of AI model foundations
  • Faculty members or educators who wish to integrate AI and LLM fundamentals into academic teaching
  • Tech enthusiasts or learners seeking to transition into AI, NLP, or open-source model development
  • Anyone interested in understanding how Generative AI and LLMs work and how they are shaping the future of technology
Curriculum

The LLM for Young Developers: Foundational Course is a 15-hour self-paced learning programme structured into seven comprehensive modules, each covering essential aspects of open-source LLMs:

Introduction to Open-Source LLMs

  • Overview of LLMs, evolution of language models, open licensing models, community-driven development

Fundamentals of Transformer Architecture

  • Encoder-decoder structure, attention mechanisms, positional encoding, pretraining vs fine-tuning, reasoning processes
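
The attention mechanism at the heart of the transformer can be sketched in a few lines of NumPy. This is a minimal, illustrative implementation of scaled dot-product self-attention (a single head, with no masking or learned projections), not production code:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V, weights

# Three toy "token" vectors with d_k = 4
X = np.array([[1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 1., 0., 0.]])

# Self-attention: queries, keys, and values all come from the same sequence
out, w = scaled_dot_product_attention(X, X, X)
print(w.shape)   # -> (3, 3): one attention weight per (query, key) pair
```

Each row of `w` sums to 1, so every output vector is a weighted average of the value vectors — the "mixing" step that lets each token attend to the others.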

Tokenization and Classical NLP Foundations

  • WordPiece, SentencePiece, Byte Pair Encoding, and the evolution from rule-based NLP to neural models

Prompting Techniques and Ethical Risks

  • Zero-shot, one-shot, few-shot, and chain-of-thought prompting; understanding hallucination, bias, and responsible AI use
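
The prompting styles above differ mainly in how the prompt string is assembled. A small illustration, using an invented sentiment-classification task (the resulting prompts could be sent to any LLM):

```python
task = "Classify the sentiment of this review as positive or negative."
review = "The battery life is fantastic, but the screen scratches easily."

# Zero-shot: the instruction alone, with no worked examples.
zero_shot = f"{task}\n\nReview: {review}\nSentiment:"

# Few-shot: a handful of labelled examples before the real input.
few_shot = (
    f"{task}\n\n"
    "Review: I loved every minute of it.\nSentiment: positive\n\n"
    "Review: Broke after two days, total waste.\nSentiment: negative\n\n"
    f"Review: {review}\nSentiment:"
)

# Chain-of-thought: ask the model to reason step by step before answering.
chain_of_thought = (
    f"{task} Think step by step, weighing each point the reviewer makes, "
    f"then give a final one-word answer.\n\nReview: {review}"
)

print(zero_shot)
```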

Introduction to Agentic AI

  • Agentic LLMs, the ReAct architecture, AutoGPT, BabyAGI, long-context handling, and tool integration
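
The ReAct pattern interleaves model-generated Thought/Action steps with tool Observations fed back by the runtime. The sketch below captures that loop with a scripted stand-in for the model and a single calculator tool, both invented for illustration; a real agent would call an actual LLM here:

```python
def calculator(expression):
    """A single 'tool' the agent can call."""
    return str(eval(expression, {"__builtins__": {}}))

def scripted_model(transcript):
    """Stand-in for an LLM: picks the next step from the transcript so far."""
    if "Observation:" not in transcript:
        return "Thought: I should compute this.\nAction: calculator[17 * 24]"
    return "Final Answer: " + transcript.rsplit("Observation: ", 1)[1].strip()

def react_loop(question, max_steps=5):
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = scripted_model(transcript)
        transcript += step + "\n"
        if step.startswith("Final Answer:"):
            return step.removeprefix("Final Answer:").strip()
        if "Action: calculator[" in step:
            # Run the requested tool and feed the result back as an Observation
            expr = step.split("calculator[", 1)[1].split("]")[0]
            transcript += f"Observation: {calculator(expr)}\n"
    return None

print(react_loop("What is 17 * 24?"))   # -> 408
```

The key design point is that the loop, not the model, executes tools: the model only proposes actions as text, and the runtime decides what actually runs.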

LLM Limitations and the Myth of Understanding

  • Understanding what models can and cannot do; hallucination, context drift, and interpretive limitations

Model Cards and AI Documentation

  • Understanding model documentation, intended use, risk disclosure, and transparency practices using Hugging Face standards
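
The kinds of fields a model card discloses can be sketched with a simple Python structure. The model name and all values below are invented for illustration; real Hugging Face model cards are Markdown files with YAML metadata rather than dictionaries:

```python
# Hypothetical model card for an illustrative fine-tuned model; the field
# names mirror common model-card sections, but every value is invented.
model_card = {
    "model_name": "example-org/demo-summariser",   # hypothetical model id
    "license": "apache-2.0",
    "intended_use": "Summarising short English news articles.",
    "out_of_scope_use": "Medical, legal, or financial advice.",
    "training_data": "Publicly available news corpora (illustrative).",
    "limitations": "May hallucinate facts; quality degrades on long inputs.",
    "bias_and_risks": "Inherits topical and regional biases of news data.",
    "evaluation": {"rouge_l": "reported on a held-out test split"},
}

def missing_sections(card, required=("intended_use", "limitations", "bias_and_risks")):
    """Flag required transparency sections that a model card leaves empty."""
    return [key for key in required if not card.get(key)]

print(missing_sections(model_card))   # -> []
```

A check like `missing_sections` mirrors the spirit of documentation standards: intended use, limitations, and risks must be stated explicitly, not assumed.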

Each module includes interactive quizzes, and the course concludes with a final assessment.

Skills and Tools

Tools you will learn in the LLM for Young Developers: Foundational Course

Skills you will develop:

  • Understanding the architecture and working of Transformer-based models
  • Designing effective prompts using zero-shot, few-shot, and chain-of-thought techniques
  • Applying ethical AI frameworks for safe and responsible model usage
  • Evaluating LLM documentation and model cards for transparency and accountability
  • Exploring the fundamentals of Agentic AI systems and autonomous AI workflows

Tools and frameworks covered:

  • Hugging Face - for accessing, experimenting with, and evaluating open-source LLMs
  • Google Colab/Jupyter Notebook - for hands-on exploration of AI concepts
  • Python libraries - NumPy, Pandas, Matplotlib (for basic data operations and visualisation)
  • Prompting Interfaces - ChatGPT Playground, Hugging Face Spaces, or Colab-based notebooks
  • Documentation tools - Hugging Face Model Cards and Open Model Reporting Standards