When: October 30th 15:00-17:00
Where: MSCS 1015 (Behind the Computer Science reception)
This session is exclusively for Durham staff, as it is conducted under a non-disclosure agreement (NDA).
AI workloads increasingly dominate data centres and our day-to-day scientific work across many disciplines. While public conversations about the availability of AI compute resources seem to assume implicitly that there is "one and only one" AI technology, the field is actually at a crossroads: some major vendors advocate extremely powerful, versatile, and expensive general-purpose hardware such as GPGPUs, while a growing number of companies highlight the potential of bespoke, low-power AI chips custom-made for specific AI workloads and calculations as a driving force behind the scenes.
The Intel® Gaudi® AI accelerator is one such chip, labelled by the vendor as a "dedicated Deep Learning Training & Inference Processor". It promises to be a cost-effective, high-performance alternative to comparable GPUs for both training and inference tasks. Indeed, it is already being utilised by companies such as AWS and Supermicro to run state-of-the-art AI workloads.
In our first IDAS vendor fireside chat of the academic year 2024/25, we are pleased to invite colleagues from Intel to discuss their roadmap and vision for the Gaudi chip. Our intention is to provide colleagues with the opportunity to challenge Intel's ambitions from the perspective of their research projects, identify potential use cases for this new technology in future grant applications, and establish new working relationships with colleagues on the vendor side.