Benchmarking Storage Performance of Large Language Models Webinar | 24 January 2024
08 Jan 2024



Join our Scientific Machine Learning (SciML) group for this webinar, led by Jean-Thomas Acquaviva from DDN and Oana Balmau from McGill University.




On 24 January 2024, between 16:00 and 17:00, the Benchmarking Storage Performance of Large Language Models (LLMs) webinar will take place. Led by Jean-Thomas Acquaviva (DDN) and Oana Balmau (McGill University), the webinar will explore how storage systems perform under LLM workloads and how that performance can be benchmarked.

Data is the driving force behind machine learning (ML) algorithms, and how that data is ingested, stored, and served has a major impact on performance. The webinar will discuss the key factors that shape storage behaviour, such as the workload type, the software framework used (e.g., PyTorch, TensorFlow), the accelerator type (e.g., GPU, TPU), the dataset-size-to-memory ratio, and the degree of parallelism. It will also show how a synthetic I/O workload generator can be built and integrated into MLPerf Storage, a new benchmark for storage in ML workloads.
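To give a flavour of what a synthetic I/O workload generator does, here is a minimal Python sketch: it creates a set of fixed-size sample files and then reads them once per "epoch" in shuffled order with several parallel readers, mimicking how a training data loader stresses storage. All names and parameters here are illustrative assumptions for this sketch, not the actual MLPerf Storage API.

```python
import os
import random
import tempfile
import time
from concurrent.futures import ThreadPoolExecutor

def make_dataset(root, num_samples=64, sample_bytes=4096):
    # Write fixed-size files of random bytes, standing in for a training dataset.
    paths = []
    for i in range(num_samples):
        p = os.path.join(root, f"sample_{i}.bin")
        with open(p, "wb") as f:
            f.write(os.urandom(sample_bytes))
        paths.append(p)
    return paths

def read_epoch(paths, num_workers=4, seed=0):
    # One training "epoch": read every sample exactly once, in shuffled order,
    # using several parallel readers, as a DL data loader would.
    order = paths[:]
    random.Random(seed).shuffle(order)

    def read(p):
        with open(p, "rb") as f:
            return len(f.read())

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        total_bytes = sum(pool.map(read, order))
    elapsed = time.perf_counter() - start
    return total_bytes, elapsed

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as root:
        paths = make_dataset(root)
        nbytes, secs = read_epoch(paths)
        print(f"read {nbytes} bytes in {secs:.4f}s "
              f"({nbytes / secs / 1e6:.1f} MB/s)")
```

Varying `num_samples`, `sample_bytes`, and `num_workers` here corresponds to the factors the webinar highlights: dataset size relative to memory, sample granularity, and degree of parallelism.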

There will be ample opportunity to exchange ideas and challenge your thinking in this engaging webinar.

Click here to register for the event and find out more!

Contact: Slingsby, Pam (STFC,RAL,SC)