| Time | Session | Speaker(s) |
| --- | --- | --- |
| 7:00 – 7:05 AM | Opening and Welcome | |
| 7:05 – 7:25 AM | ONNX Community & LF AI Update | ONNX Steering Committee |
| 7:25 – 7:35 AM | Extract the Maximum Benefits of ONNX to Shorten Your Development Cycle Time and Reduce Guesswork | Patrick St-Amant, Zetane |
| 7:35 – 7:45 AM | ONNX at OneFlow | Jianhao Zhang, OneFlow |
| 7:45 – 7:55 AM | Efficient Inference of Transformers Models: Collaboration Highlights Between Hugging Face & ONNX Runtime | Morgan Funtowicz, Hugging Face |
| 7:55 – 8:05 AM | Flows and Tools to Map ONNX Neural Networks on Micro-controllers | Danilo Pau, ST Micro |
| 8:05 – 8:15 AM | Neural Automation: Fusion of Automation and Data Science | Fabian Bause, Beckhoff Automation |
| 8:15 – 8:25 AM | ONNX Runtime Updates: Mobile, Quantization, Training, and More | Faith Xu, Microsoft |
| 8:25 – 8:35 AM | Apache TVM and ONNX: What Can ONNX Do for DL Compilers (and Vice Versa) | Tianqi Chen, OctoML |
| 8:35 – 8:45 AM | ONNX Support in the MLIR Compiler: Approach and Status | Alexandre Eichenberger, IBM Research |
| 8:45 – 8:55 AM | Compiling Traditional ML Pipelines into Tensor Computations for Unified Machine Learning Prediction Serving | Matteo Interlandi, Microsoft |
| 8:55 – 9:05 AM | Q/DQ is All You Need | Neta Zmora, NVIDIA |
| 9:05 – 9:15 AM | Break | |
| 9:15 – 9:25 AM | Architecture/Infrastructure SIG Update | Ashwini Khade, Microsoft |
| 9:25 – 9:35 AM | Operators SIG Update | Michał Karzyński, Intel and Emad Barsoum, Microsoft |
| 9:35 – 9:45 AM | Converters SIG Update | Chin Huang, IBM and Guenther Schmuelling, Microsoft |
| 9:45 – 9:55 AM | Model Zoo/Tutorials SIG Update | Wenbing Li and Vinitra Swamy, Microsoft |
| 9:55 – 10:00 AM | Q&A / Open Discussions | |