Implementing the Message Passing Interface (MPI) with FPGAs
Starbridge Systems, Inc.
Many modern FPGAs provide on-chip processing elements, implemented either as soft cores or as embedded devices, but the methods of inter-processor communication differ from vendor to vendor: they make few provisions for legacy code and lack the capabilities and robustness associated with traditional multi-processor interfaces. A full-featured protocol may not be necessary in every case, but FPGA designers can make the best use of their processing resources only if a scalable, flexible communication architecture is available.
Developed in the early 1990s, the Message Passing Interface (MPI) remains one of the most popular standards for enabling communication across multiple CPUs. MPI is an open standard, most commonly implemented as a C library, that coordinates parallel computation through message passing. Starbridge Systems, Inc. is working toward implementing a subset of MPI's capability within reconfigurable logic. The implementation will be small enough to run on FPGA soft processor cores yet powerful enough to provide reliable communication, with features that can be added or removed according to need.
Combining MPI with FPGA cores provides greater speed and improved coordination of specialized processors. Because the processing elements are on-chip, the communication latency is dramatically lower than that of traditional Ethernet. Further, these soft processors can be configured to perform specific functions. Using MPI, a developer can break up a large computational task into subtasks and distribute them to these customized elements. In addition, the flexibility of FPGA memory circuitry allows for both shared-memory and distributed-memory operation.
The Abstract Device Interface (ADI) describes the high-level requirements for any device that intends to participate in MPI communication. The minimum subset of functions needed to meet these requirements is specified in the third version of the Channel Interface, called CH3. To investigate the combination of MPI and reconfigurable logic, Starbridge will implement CH3 functions using Xilinx's customizable MicroBlaze processors. This paper will describe a baseline capability for multi-processor communication that can be extended with additional MPI capabilities.
2006 MAPLD International Conference