AIM, in partnership with NVIDIA, recently concluded an Agentic AI workshop. The session, titled ‘From Scratch to Solution: Multi-Agent AI for Complex Tasks’, was led by Shreyans Dhankhar, senior solution architect at NVIDIA. It offered participants a practical understanding of building sophisticated AI systems capable of autonomous task execution.
Dhankhar said that agents are fundamentally “software programs that can autonomously execute tasks for users or other applications”. The core component enabling this autonomy is the large language model (LLM), possessing capabilities for thinking, reflection, and tool utilisation.
He highlighted agents’ advantage in balancing latency and cost for enhanced task effectiveness, particularly in scenarios demanding complexity, reliability, and consistency.
Sharing an example, he said that agents can interact with data in natural language and generate SQL queries, showing how they empower non-experts to work with data.
Building Blocks of Agentic Systems
The workshop progressed from foundational concepts to practical implementation. Dhankhar outlined different agentic patterns—starting with single-hop agents for simple tasks to multi-actor chain agents, where the output of one agent sequentially feeds into another, and hierarchical agent frameworks, featuring a supervisor agent delegating tasks to sub-agents. Moreover, the integration of human-in-the-loop capabilities was discussed, particularly in refining synthetic data generation.
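The hierarchical pattern can be illustrated with a toy sketch. The sub-agent names and routing rule below are assumptions for illustration, not the workshop's actual code; in a real system the supervisor would ask an LLM which sub-agent fits the task rather than match a keyword.

```python
# Toy hierarchical agent framework: a supervisor delegates tasks to sub-agents.

def search_agent(task: str) -> str:
    return f"search results for '{task}'"

def math_agent(task: str) -> str:
    return f"computed answer for '{task}'"

SUB_AGENTS = {"search": search_agent, "math": math_agent}

def supervisor(task: str) -> str:
    # Placeholder routing: a real supervisor would delegate based on an
    # LLM's judgement; here we route on whether the task contains digits.
    name = "math" if any(ch.isdigit() for ch in task) else "search"
    return SUB_AGENTS[name](task)

print(supervisor("what is 2 + 2"))       # delegated to math_agent
print(supervisor("latest GPU pricing"))  # delegated to search_agent
```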
A significant portion of the session was dedicated to a hands-on demonstration of building a simple ReAct (Reason and Act) agent from scratch using basic Python. This exercise illustrated the core loop of agent operation: thinking about a problem, taking an action (calling a defined tool or function), observing the result, and then iterating. The example used a memory calculator tool to determine the video random access memory (VRAM) requirement for an LLM based on its size and precision.
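The loop can be sketched in a few lines of plain Python. The workshop's exact code was not published, so the calculator formula and the stub "LLM" below are illustrative assumptions: the stub hard-codes one tool call followed by a final answer, where a real agent would send the system prompt and observations to a model and parse its reply.

```python
# Minimal ReAct-style loop: Think -> Act (call a tool) -> Observe -> repeat.

def vram_gb(params_billion: float, bytes_per_param: int = 2) -> float:
    """Rough VRAM to hold the weights alone (ignores KV cache and overhead)."""
    return params_billion * 1e9 * bytes_per_param / 1e9

TOOLS = {"vram_calculator": vram_gb}

def stub_llm(question: str, observations: list) -> dict:
    # Stand-in for a real LLM call: first decide to use the tool,
    # then answer once an observation is available.
    if not observations:
        return {"thought": "I need the VRAM figure.",
                "action": "vram_calculator",
                "args": {"params_billion": 7, "bytes_per_param": 2}}
    return {"thought": "I have the figure.",
            "answer": f"~{observations[-1]:.0f} GB"}

def react(question: str, max_steps: int = 5) -> str:
    observations = []
    for _ in range(max_steps):
        step = stub_llm(question, observations)          # Think
        if "answer" in step:
            return step["answer"]
        result = TOOLS[step["action"]](**step["args"])   # Act
        observations.append(result)                      # Observe, then iterate
    return "step limit reached"

print(react("How much VRAM does a 7B model need at FP16?"))  # → ~14 GB
```

A 7B-parameter model at 2 bytes per parameter (FP16) needs roughly 14 GB just for the weights, which matches the kind of answer the workshop's calculator produced.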
Dhankhar underscored the importance of a well-defined prompt, acting as the “system prompt or system message” to guide the LLM’s behaviour.
The workshop showcased a multi-agent system example, simulating an e-commerce workflow. This involved multiple agents with specific functionalities—responsible for tasks like weather check, product search, and order processing—communicating sequentially to fulfil a user request. This demonstration highlighted message-based communication and stateful conversation management within a multi-agent architecture.
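The message-based, stateful style of that demo can be sketched as below. The agent names and messages are loosely modelled on the e-commerce example and are assumptions, not the workshop's code; the point is that each agent reads and appends to a shared message history that downstream agents can see.

```python
# Sequential multi-agent pipeline with a shared, stateful message history.

def weather_agent(state: dict) -> dict:
    state["messages"].append("weather: rain expected this evening")
    return state

def product_agent(state: dict) -> dict:
    state["messages"].append("product: found 3 umbrellas in stock")
    return state

def order_agent(state: dict) -> dict:
    state["messages"].append("order: placed 1 umbrella for delivery")
    return state

def run_pipeline(request: str, agents: list) -> dict:
    # Stateful conversation: every agent receives the full history so far.
    state = {"request": request, "messages": []}
    for agent in agents:
        state = agent(state)
    return state

final = run_pipeline("buy an umbrella if it might rain",
                     [weather_agent, product_agent, order_agent])
for message in final["messages"]:
    print(message)
```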
Advanced Applications: Synthetic Data Generation
The final hands-on exercise focused on building a system for supervised fine-tuning data set generation. Dhankhar presented a workflow where different agents collaborate to process content from a website, extract key concepts and relationships, generate questions, and design a schema for the data. An orchestrator agent managed the interaction between these specialised agents.
“This is some pipeline that can help you out in that,” Dhankhar explained, emphasising the relevance of synthetic data generation in scenarios where sufficient real-world data for fine-tuning is lacking. The demonstration involved processing a dummy website and generating question-answer pairs in a JSONL format, a common standard for training data. He also showed how the system could adapt to different input types, such as plain text files.
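The output format is straightforward to produce: JSONL is simply one JSON object per line, so each question-answer pair parses independently. The sample pairs and filename below are illustrative, not taken from the demo.

```python
# Emit question-answer pairs in JSONL, the training-data format used above.
import json

qa_pairs = [
    {"question": "What is an agent?",
     "answer": "A software program that can autonomously execute tasks."},
    {"question": "What guides an LLM's behaviour in an agent?",
     "answer": "The system prompt."},
]

with open("sft_data.jsonl", "w", encoding="utf-8") as f:
    for pair in qa_pairs:
        f.write(json.dumps(pair) + "\n")  # one JSON object per line

# Reading it back: each non-empty line is parsed on its own.
with open("sft_data.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f if line.strip()]
print(len(records))  # → 2
```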
Essential Considerations and Resources
Dhankhar concluded by outlining crucial requirements for robust agentic systems, including parallel thread execution, scalability, load balancing, error handling, monitoring, version control, and security. He also provided an overview of the agentic AI ecosystem, mentioning orchestration frameworks like CrewAI, LlamaIndex, Autogen, and LangChain, as well as foundation models and vector databases.
Attendees were introduced to NVIDIA AI-Q, a toolkit for building and orchestrating agents across multiple frameworks. Dhankhar also highlighted free self-paced NVIDIA Deep Learning Institute (DLI) courses, offering hands-on experience in various AI domains.
He further pointed to NVIDIA developer tools and resources, including NVIDIA Inference Microservice (NIM) and NeMo, and encouraged participants to explore the on-demand sessions from GTC 2025 for deeper insights into AI.
In the Q&A session, Dhankhar addressed questions ranging from GPU requirements for large models to handling conflicting agent objectives and mitigating hallucinations. He stressed the context-dependent nature of many solutions and encouraged experimentation with the provided code and resources. The workshop served as a valuable introduction to the practical aspects of building multi-agent AI systems, empowering attendees to explore this transformative technology further.