Unlock the Future with Comprehensive AI Training
A cutting-edge program designed for Computer Science enthusiasts to master modern AI, Generative AI, and Large Language Models.
Applications for this year are now closed. Please stay tuned for future updates.
Why Master Artificial Intelligence?
Artificial Intelligence is rapidly reshaping industries worldwide, creating unprecedented opportunities for innovation and career growth. As an Engineering student, gaining expertise in AI, especially in emerging areas like Generative AI and Large Language Models (LLMs), is crucial for staying at the forefront of technological advancements. Our program provides both the theoretical foundation and the hands-on experience needed to build, optimize, and deploy intelligent systems.
Prepare yourself for a future where AI skills are not just an advantage, but a necessity.
Our Curriculum
AI & GenAI Fundamentals
- What is AI: Core principles, history, and impact.
- What is GenAI: Capabilities and implications of Generative AI.
- Prompt Engineering Basics: Zero-shot and few-shot prompting, and understanding roles.
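To make the zero-shot vs. few-shot distinction concrete, here is a small illustrative sketch. The task (sentiment classification) and the example reviews are made up for demonstration; only the prompt structure matters.

```python
# Zero-shot vs. few-shot prompting: the same task, phrased two ways.

def zero_shot_prompt(text):
    """Ask the model directly, with no worked examples."""
    return f"Classify the sentiment of this review as positive or negative:\n{text}"

def few_shot_prompt(text):
    """Prepend a few labelled examples so the model can infer the pattern."""
    examples = [
        ("The battery lasts all day, love it.", "positive"),
        ("Stopped working after a week.", "negative"),
    ]
    demo = "\n".join(f"Review: {r}\nSentiment: {s}" for r, s in examples)
    return f"{demo}\nReview: {text}\nSentiment:"

print(zero_shot_prompt("Great screen, terrible speakers."))
print(few_shot_prompt("Great screen, terrible speakers."))
```

Few-shot prompts often improve consistency on tasks with a fixed output format, at the cost of extra tokens per request.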
LLM Development & Optimization
- Using OpenAI / Gemini APIs with Python: Integrate powerful AI models.
- Local LLMs with Ollama & llama.cpp: Run and test LLMs locally.
- Quantization & Hyperparameters: Advanced optimization techniques.
- Quantizing LLM Models – Practical: Hands-on experience with optimization.
NLP & Advanced Architectures
- Intro-level NLP: Language models, tokens, and core concepts.
- Limitations of Standard LLMs: Understanding challenges like hallucinations.
- Motivation for RAG: Enhance LLM capabilities for factual accuracy.
- Tokenization & Embeddings: Deep dive into text representation.
- What is a Vector Database?: Role in modern AI systems.
- RAG Architecture: Retriever + Generator components.
- Real-world examples: Chatbots, legal research, search assistants.
Hands-on Projects & Practical Skills
API Integration & Local LLMs
- Build a Python script that accepts user input, sends it to OpenAI/Gemini API, and displays the response.
- Install Ollama, run `ollama run llama3` or `ollama run mistral`, and compare results of the same prompt on local vs. cloud (OpenAI).
- Run two different GGUF models via `llama.cpp` and compare how responses vary in tone/speed/accuracy.
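The first project above can be sketched with only the standard library. The endpoint and payload shape follow OpenAI's public chat completions API; `gpt-4o-mini` is used as a placeholder model name, and the live call runs only when an `OPENAI_API_KEY` environment variable is set.

```python
# Minimal sketch: send user input to the OpenAI chat completions
# endpoint and print the reply, using only the standard library.
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt, model="gpt-4o-mini"):
    """Build the JSON payload for a single-turn chat completion."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(prompt):
    """POST the prompt and return the assistant's reply text."""
    data = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        API_URL,
        data=data,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    print(ask(input("You: ")))
```

For the local-vs.-cloud comparison, Ollama exposes an OpenAI-compatible endpoint on `localhost`, so the same script can be pointed at a locally running model by changing `API_URL` and the model name.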
LLM Performance & Customization
- Download models at different quantization levels (q2_k, q4_0, etc.) from Hugging Face and compare their size and speed, tabulating performance (run time, file size, accuracy).
- Modify a config file or command-line argument to tweak context window, threads, temperature, top-p, etc., observing and reporting effects on output.
- Convert a `.safetensors` model to `.gguf` (e.g., using transformers and a GGUF converter) and test it locally with `llama.cpp`.
NLP Fundamentals in Practice
- Tokenize a sentence using transformers tokenizer in Python and visualize token-to-ID mapping.
- Give an LLM a trick question (e.g., math puzzle or factually wrong statement), record and explain where and why it failed.
- Use `tiktoken` or transformers to count tokens for various input texts and analyze how different sentences affect token count.
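The token-to-ID mapping idea can be demonstrated without any downloads using a toy word-level tokenizer. This is a stand-in for a real BPE tokenizer (`tiktoken` or a transformers tokenizer), which splits text into subwords rather than whole words, so real token counts will differ.

```python
# Toy word-level tokenizer illustrating token-to-ID mapping.
# Real tokenizers (tiktoken, transformers) use subword BPE vocabularies.
def tokenize(text, vocab):
    """Map each whitespace-split word to an ID, growing the vocab as needed."""
    ids = []
    for tok in text.split():
        if tok not in vocab:
            vocab[tok] = len(vocab)  # assign the next free ID
        ids.append(vocab[tok])
    return ids

vocab = {}
sentence = "the cat sat on the mat"
ids = tokenize(sentence, vocab)
print(list(zip(sentence.split(), ids)))  # repeated words share one ID
```

Note that "the" appears twice but maps to a single ID, which is exactly the behaviour to look for when visualizing a real tokenizer's output.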
Building RAG Systems
- Use `sentence-transformers` or OpenAI embeddings to embed pre-written text chunks; on user input, embed the query, compute similarity, and pass the most relevant chunk along with the query to an LLM for a response.
- Create a chatbot that loads a text file, generates embeddings and stores them in Chroma or FAISS, accepts user queries, retrieves relevant text, and sends it to an LLM (OpenAI or Ollama) for a response.
- Build a simple RAG system using: Retriever (Chroma vector search) and Generator (OpenAI or local LLM), with input: Query → Retrieve top 2 relevant docs → Generate response.
- Choose one use-case (e.g., chatbot or search assistant), build a simple prototype using a PDF file as a knowledge base, and ask questions using a basic RAG pipeline.
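The retrieve-then-generate flow in these projects can be sketched end to end in pure Python. Here a toy bag-of-words vector stands in for real embeddings (sentence-transformers/OpenAI), and a stub function stands in for the LLM call; the retrieval logic (embed, score by cosine similarity, take the top-k chunks) mirrors the real pipeline.

```python
# Minimal RAG sketch: toy embeddings + cosine-similarity retriever
# + stub generator. Swap in real embeddings and an LLM for production.
import math
from collections import Counter

def embed(text):
    """Toy embedding: a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    """Return the k chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def generate(query, context):
    """Stub generator: a real system would send this prompt to an LLM."""
    return f"Answer '{query}' using:\n" + "\n".join(context)

chunks = [
    "Ollama runs large language models locally.",
    "Chroma is a vector database for embeddings.",
    "Paris is the capital of France.",
]
print(generate("What is a vector database?",
               retrieve("What is a vector database?", chunks)))
```

In the full projects, `embed` becomes a call to an embedding model, the sorted list becomes a Chroma or FAISS query, and `generate` becomes a prompt sent to OpenAI or a local Ollama model.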
Prerequisites
Basic Programming Proficiency (Python):
The projects heavily involve Python (e.g., “Build a Python script,” “Using OpenAI / Gemini APIs with Python,” “Tokenize a sentence using transformers tokenizer in Python”). Therefore, a solid understanding of Python fundamentals is essential.
Basic Understanding of Data Structures and Algorithms:
While not explicitly mentioned, these are foundational for any serious programming and would be beneficial for understanding how some of the AI concepts (like embeddings, vector databases) work under the hood.
Familiarity with Command Line/Terminal:
Projects involve running `ollama run llama3`, using `llama.cpp`, and modifying config files, which typically requires command-line interaction.
Enthusiasm for AI and Technology:
“Prepare yourself for a future where AI skills are not just an advantage, but a necessity” implies a motivated and forward-thinking student.
Educational Qualification
Open to anyone who has completed +2, as well as any degree or postgraduate holder who is passionate about coding.
FAQ
Is there any fee for the program?
No, this program is completely free of charge.
What is expected in terms of commitment?
Participants are expected to complete the program diligently. Those who do not may not be considered for future programs.
Is the training online or offline?
The training is 100% online. Any offline plan will be communicated in advance.
What is the style of training?
Self-learning with practical examples. Apart from the first three sessions, there will be limited teaching and presentations.
Will there be any evaluation?
No evaluation.
Date & Time
4-Jul-25 onwards.
8:30 PM IST on Monday, Wednesday & Friday; 9:00 AM to 5:00 PM IST on Saturday.
Duration?
3+ months.