Kisaco Research

As billions of IoT devices permeate our daily lives, their resource limitations and connectivity constraints confine them to only a narrow subset of machine learning capabilities within the broader scope of AI. Edge processing stands as a pivotal solution, optimizing large-scale data mining and aggregation by relocating the data-processing segment of an application to more capable edge devices within the local network. Integrating GPU-accelerated ML inference on edge devices opens new avenues for harnessing the full potential of AI, fostering a future where intelligent decision-making in IoT is no longer accessible only via the cloud.

Mo will commence the talk by outlining the current landscape of IoT networks, emphasizing the increasing demand for intelligent decision-making at the edge. Traditional challenges associated with centralized cloud-based ML models will be highlighted, setting the stage for the exploration of decentralized solutions.

We will delve into the technical considerations for integrating GPUs, including model optimization, compatibility with popular ML frameworks, and the advantages of parallel processing. Real-world examples will be presented to showcase the transformative impact of GPU-accelerated ML inference on edge devices, enabling the deployment of pre-trained models on the latest hardware without compromising performance. Practical considerations, such as power efficiency and scalability, will also be addressed to provide a comprehensive understanding of the benefits and challenges associated with this approach.
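The deployment pattern described above — running a pre-trained model on an edge device's GPU when one is present, falling back to CPU otherwise, and batching inputs to exploit parallelism — can be sketched as follows. This is a minimal illustration assuming PyTorch; the tiny model here is a hypothetical stand-in for a real pre-trained checkpoint (e.g. a quantized MobileNet), which is what you would load in practice.

```python
import torch
import torch.nn as nn

# Pick the best available device: a CUDA-capable GPU on the edge box, else CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Hypothetical stand-in for a pre-trained model; in a real deployment you would
# load an optimized checkpoint (e.g. a quantized MobileNet) instead.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
).to(device).eval()

# Batch several sensor frames together so the GPU's parallelism is actually used.
frames = torch.randn(16, 3, 64, 64, device=device)

with torch.inference_mode():  # disables autograd bookkeeping for faster inference
    logits = model(frames)
    preds = logits.argmax(dim=1)

print(preds.shape)  # torch.Size([16]) — one class prediction per frame
```

The same device-selection idiom carries over to other frameworks (e.g. TensorRT or ONNX Runtime execution providers); the essential points are that inference runs with gradients disabled and that inputs are batched rather than processed frame by frame.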

AI Infrastructure
Edge AI

Author:

Mo Haghighi

Distinguished Engineer
Discover Financial Services

Dr Mo Haghighi is a director of engineering/distinguished engineer at Discover Financial Services. His current focus is hybrid and multi-cloud strategy, application modernisation and automating application/workload migration across public and private clouds. Previously, he held various leadership positions as a program director at IBM, where he led Developer Ecosystem and Cloud Engineering teams in 27 countries across Europe, Middle East and Africa. Prior to IBM, he was a research scientist at Intel and an open source advocate at Sun Microsystems/Oracle. 

Mo obtained a PhD in computer science, and his primary areas of expertise are distributed and edge computing, cloud native, IoT and AI, with several publications and patents in those areas.

Mo is a regular keynote/speaker at major developer conferences including Devoxx, DevOpsCon, Java/Code One, Codemotion, DevRelCon, O’Reilly, The Next Web, DevNexus, IEEE/ACM, ODSC, AiWorld, CloudConf and Pycon. 

AI Infrastructure
Generative AI
Hardware
Software
Moderator

Author:

Galina Sagan

Principal
Hitachi Ventures

Galina Sagan is a Principal at Hitachi Ventures, a global venture fund (with >$600M USD AUM) with Hitachi as its single LP. Galina specializes in Enterprise B2B SaaS and digital investments. Relevant deals include Trustwise AI, Archetype AI, Rescale and WekaIO. She was formerly a Senior Investment Associate at Speedinvest, a pan-European early-stage fund with >€1 billion AUM.

Galina was recognized as one of GCV's Rising Stars in 2022.

Author:

Asim Shah

Generative AI Research Engineering Lead
Citi Bank

Author:

Euan Wielewski

Machine Learning Lead
NatWest

Author:

Jürgen Weichenberger

VP of AI Strategy & Innovation
Schneider Electric

Author:

Kamran Naqvi

Chief Network Architect EMEA
Broadcom

AI Infrastructure
Hardware
Systems

Author:

Ola Tørudbakken

Director AI Systems
Meta

Ola has 30 years of experience building distributed systems and high-performance networking across AI, HPC, Enterprise and Telco. 

Ola currently serves as Director of AI Systems at Meta. Previously, Ola was SVP Systems at Graphcore, driving their second-generation AI systems. He came to Graphcore through the acquisition of Skala Technologies, an AI startup he co-founded. Prior to Skala, Ola worked as Chief Architect of Networking and Netra Servers at Oracle, which he joined through the acquisition of Sun Microsystems. At Sun, Ola served as Distinguished Engineer, responsible among other things for the famous 3456-port Magnum InfiniBand switch, now on display at the Computer History Museum in Silicon Valley. Before Sun, Ola was at Dolphin ICS, acquired by Sun Microsystems in 2000. He started his career as a research scientist at SINTEF, a Norwegian industrial research organization.

Ola is a recognized industry expert in distributed systems, holds over 48 patents, has published several papers in leading publications, and has participated in numerous standardization bodies. He holds an MSc degree in Computer Science from the University of Oslo (1994).

AI Infrastructure
Generative AI
Hardware

Author:

Marc Tremblay

Technical Fellow & Corporate VP
Microsoft

Marc is a Distinguished Engineer and VP in the Office of the CTO (OCTO) at Microsoft. His current role is to drive the strategic and technical direction of the company on silicon and hardware systems from a cross-divisional standpoint. This includes Artificial Intelligence, from supercomputer to client devices to Xbox, etc., and general-purpose computing. Throughout his career, Marc has demonstrated a passion for translating high-level application requirements into optimizations up and down the stack, all the way to silicon. AI has been his focus for the past several years, but his interests also encompass accelerators for the cloud, scale-out systems, and process technology. He has given multiple keynotes on AI Hardware, published many papers on throughput computing, multi-cores, multithreading, transactional memory, speculative multi-threading, Java computing, etc. and he is an inventor of over 300 patents on those topics.

Prior to Microsoft, Marc was the CTO of Microelectronics at Sun Microsystems. As a Sun Fellow and SVP, he was responsible for the technical leadership of 1200 engineers. Throughout his career, he has started, architected, led, defined and shipped a variety of microprocessors such as superscalar RISC processors (UltraSPARC I/II), bytecode engines (picoJava), VLIW, media and Java-focused (MAJC), and the first processor to implement speculative multithreading and transactional memory (ROCK – first silicon). He received his M.S. and Ph.D. degrees in Computer Sciences from UCLA and his Physics Engineering degree from Laval University in Canada. Marc is on the board of directors of QuantalRF.
