AI Deep Dive Stage

Technical immersions into advanced innovation topics: high-intensity, specialist sessions focused on specialized, in-depth subjects within artificial intelligence.
Note: The program is still being finalized and may be subject to changes.

21 JANUARY
22 JANUARY
21 jan 11:10 - 12:00
50 min
The talk examines two core AI safety challenges, hallucination and prompt injection, and shows how multi-agent architectures and open-source standards can mitigate them. It presents the Open-Floor Protocol (Linux Foundation AI & Data) and new mitigation frameworks that use iterative review, fact-checking, and security KPIs to improve the reliability, transparency, and trustworthiness of generative AI systems.
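To make the "iterative review" idea concrete, here is a minimal sketch (not taken from the talk) of a generator/reviewer loop in which a draft answer is revised until a safety KPI threshold is met. The model calls are stubbed with plain functions, and all names (`draft_answer`, `review`, `SAFETY_THRESHOLD`) are hypothetical:

```python
# Hypothetical sketch of an iterative-review mitigation loop.
# Real systems would call separate generator and reviewer models;
# here both are stubbed so the control flow is runnable.

SAFETY_THRESHOLD = 0.9
MAX_ROUNDS = 3

def draft_answer(prompt: str) -> str:
    # Stand-in for a generator agent.
    return f"Draft answer to: {prompt}"

def review(answer: str) -> tuple[float, str]:
    # Stand-in for a reviewer/fact-checking agent: returns a KPI score
    # and a revised answer. The "revision" here just appends a marker.
    score = 0.5 + 0.25 * answer.count("[checked]")
    return score, answer + " [checked]"

def reviewed_answer(prompt: str) -> str:
    answer = draft_answer(prompt)
    for _ in range(MAX_ROUNDS):
        score, answer = review(answer)
        if score >= SAFETY_THRESHOLD:  # KPI gate: stop once "safe enough"
            break
    return answer
```

The point of the pattern is that the generator's output never reaches the user directly; it must pass a scored review gate, which is where fact-checking and security KPIs plug in.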
21 jan 12:20 - 13:10
50 min
We are at a turning point in AI content generation. With the UNet era now behind us, the architectural debate between single-stream and dual-stream DiT models is reshaping the field.

On one side, Qwen's dual-stream MMDiT architecture, implemented in WAN Video, processes text, video, and spatial controls through separate streams that interact via joint attention, enabling granular control over camera, motion, and frame-by-frame editing. On the other side, Z-Image's single-stream S3-DiT shows that the "scale-at-all-costs" paradigm can be challenged: just 6B parameters compared to the proprietary giants' 20–80B, yet delivering competitive performance and sub-second inference on consumer GPUs.

In this panel, we will explore key questions. Why does dual-stream MMDiT unlock motion-transfer and spatial-control capabilities that were previously unreachable? How does the single-stream S3-DiT achieve such high efficiency while maintaining SOTA quality? Big Tech vs. Open Source: who is actually winning when democratization through LoRA and quantization competes with pure computational power? And what limitations remain unsolved? Temporal consistency beyond 15 seconds, physics-aware generation, memory footprint, and energy efficiency are still open challenges for both approaches.

A technical deep dive into how different architectural innovations are rewriting the rules of AI generation.
21 jan 14:30 - 15:20
50 min
The introduction of GraphRAG, hybrid RAG+KG systems, and vertical LLMs/SLMs for clinical domains brings new structural challenges: the lack of gender-disaggregated data, biased graph nodes, and embeddings that fail to represent minority populations. This session offers a technical discussion: How does dataset imbalance affect retrieval and reasoning? Which metrics are needed to evaluate gender misrepresentation in RAG systems? And how can we model dynamic, geolocated queries in low-information-density contexts? I will share insights from Geen, a GraphRAG system for medical triage, to explore failure cases, performance limitations, and retraining strategies. A conversation designed for engineers, developers, and knowledge-system architects.
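One candidate metric of the kind the session asks about, sketched minimally and not taken from the talk, is the gap in retrieval recall between patient subgroups. The data and names (`recall`, `representation_gap`) are hypothetical:

```python
# Hypothetical sketch: quantify misrepresentation in a RAG system as the
# spread in retrieval recall across demographic groups. A gap of 0.0
# means retrieval serves every group equally well.

def recall(retrieved: set[str], relevant: set[str]) -> float:
    # Fraction of the relevant documents that were actually retrieved.
    return len(retrieved & relevant) / len(relevant) if relevant else 0.0

def representation_gap(results_by_group: dict[str, tuple[set, set]]) -> float:
    # results_by_group maps group -> (retrieved docs, relevant docs).
    recalls = [recall(ret, rel) for ret, rel in results_by_group.values()]
    return max(recalls) - min(recalls)
```

For example, if retrieval recovers all relevant documents for one group but only half for another, the gap is 0.5, a signal that the index or embeddings under-represent that group.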
21 jan 15:40 - 16:30
50 min
End-to-end testing of web applications is essential to ensure product quality and stability. Yet traditional approaches to automated E2E testing often clash with complex technologies, intricate user flows, long development timelines, and high maintenance overhead. But what if we could leverage the power of Large Language Models and AI agents to create automated tests that are robust, intelligent, and self-adapting? In this session, we explore the concrete technologies that enable the use of AI to automate E2E web testing. We’ll look at examples of intelligent agents capable of understanding interfaces, adapting to UI changes, and autonomously generating new test cases — ready to be unleashed into the testing environment in search of bugs.
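The core loop behind such an agent can be sketched as follows. This is an illustrative outline, not code from the session: a real implementation would drive a browser (for example via Playwright) and call an actual model, but here both are stubbed so the control flow is runnable, and all names are hypothetical:

```python
# Hypothetical sketch of an LLM-driven E2E test step: instead of a brittle
# hard-coded selector, the agent reads the current UI and picks the element
# that best matches the test's *intent*, so the test adapts to UI changes.

def snapshot(page: dict) -> list[str]:
    # Stand-in for reading the live DOM: return the clickable elements.
    return page["buttons"]

def choose_action(goal: str, elements: list[str]) -> str:
    # Stand-in for an LLM call: pick the element matching the goal.
    matches = [e for e in elements if goal.lower() in e.lower()]
    return matches[0] if matches else elements[0]

def run_step(page: dict, goal: str) -> str:
    element = choose_action(goal, snapshot(page))
    return f"click:{element}"

# The same test intent survives a UI relabel without editing a selector:
old_ui = {"buttons": ["Sign in", "Help"]}
new_ui = {"buttons": ["Help", "Sign In to your account"]}
```

Running `run_step(old_ui, "sign in")` and `run_step(new_ui, "sign in")` clicks the right button in both versions of the UI, which is exactly the maintenance overhead that intent-based agents aim to remove.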
22 jan 11:20 - 12:00
40 min
22 jan 12:20 - 13:10
50 min
AI is everywhere—but running it efficiently and responsibly in the cloud is a different challenge. As organizations race to deploy AI-powered applications, the energy and resource costs of training and serving models often remain an afterthought. In this talk, we will explore how cloud native technologies can unlock sustainable AI practices that go beyond the hype. We’ll discuss strategies for optimizing AI workloads using Kubernetes, serverless platforms, and cloud native observability, while keeping an eye on carbon impact and resource efficiency. Attendees will learn about scalable MLOps patterns, intelligent workload placement, and green computing techniques that reduce waste without compromising performance. Whether you are a platform engineer, cloud architect, or AI enthusiast, this session will equip you with practical approaches to make AI in the cloud efficient, cost-effective, and environmentally responsible—and finally move the conversation beyond buzzwords.
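"Intelligent workload placement" can be reduced to a very small decision rule, sketched here as an illustration rather than the talk's method, with made-up carbon-intensity figures and hypothetical names:

```python
# Hypothetical sketch of carbon-aware workload placement: schedule a
# deferrable AI job in the region whose grid currently has the lowest
# carbon intensity (gCO2/kWh). Real schedulers would also weigh latency,
# data residency, and cost; intensity values here are invented.

def pick_region(intensity_by_region: dict[str, float]) -> str:
    # Choose the region with the minimum reported carbon intensity.
    return min(intensity_by_region, key=intensity_by_region.get)

current_intensity = {
    "eu-north": 30.0,   # hydro-heavy grid (made-up value)
    "us-east": 400.0,   # fossil-heavy grid (made-up value)
    "ap-south": 650.0,  # (made-up value)
}
```

With these numbers, `pick_region(current_intensity)` selects `eu-north`; in Kubernetes terms the same idea becomes a node or cluster selection signal fed into the scheduler.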