FPGA-Enabled Optical Interconnects for Utility Computing and System Validation

Related Research

High-Performance Computing and Memory Interconnects Using Silicon Photonics

Photonic Networks for Hardware Accelerators in Utility Computing

Optical Network Interface and Protocol Development


Due to the accelerated growth in performance of microprocessors and the recent emergence of chip multiprocessors (CMP), the critical performance bottleneck of high-performance computing systems has shifted from the processors to the communications infrastructure. By uniquely exploiting the parallelism and capacity of wavelength division multiplexing (WDM), optical interconnects offer a high-bandwidth, low-latency solution that can address the bandwidth scalability challenges of future computing systems.

High-Performance Computing and Memory Interconnects Using Silicon Photonics



The limitations of main memory accesses have become a bottleneck for the scaling of high-performance computing systems. Memory systems must balance requirements for large memory capacity, low access latency, high bandwidth, and low power. As such, the electronic interconnect between a processor and memory is a key consideration in overall system design. The current trend in memory system design is to access many memory devices in parallel and transmit data over a high-speed, path-length-matched, wide electronic bus. Overall, the increasing number of memory devices, higher bus data rates, and wiring constraints are pushing the limits of electronic interconnects and threatening the performance gains of future computing systems. Optically-connected memory systems can enable continued performance scaling through high-bandwidth capacity, energy-efficient bit-rate transparency, and time-of-flight latency. Memory device parallelism and total capacity can scale to match future high-performance computing requirements without sacrificing data-movement efficiency.
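
To make the data-movement argument concrete, the short sketch below estimates the time-of-flight latency of an optical link and the aggregate bandwidth available from a WDM-parallel memory channel. The wavelength count, per-channel rate, and fiber length are illustrative assumptions only, not measured values from our system.

```python
# Illustrative estimate of optically-connected memory link properties.
# All parameter values are assumptions chosen for illustration, not measurements.

C_VACUUM = 3.0e8          # speed of light in vacuum, m/s
GROUP_INDEX = 1.468       # approximate group index of standard single-mode fiber

def time_of_flight_ns(length_m: float) -> float:
    """Propagation delay over an optical fiber of the given length, in nanoseconds."""
    return length_m / (C_VACUUM / GROUP_INDEX) * 1e9

def aggregate_bandwidth_gbps(num_wavelengths: int, rate_per_channel_gbps: float) -> float:
    """Aggregate capacity of a WDM link with identical per-wavelength rates."""
    return num_wavelengths * rate_per_channel_gbps

if __name__ == "__main__":
    # Hypothetical processor-to-memory link: 2 m of fiber, 32 wavelengths at 10 Gb/s each.
    print(f"time of flight over 2 m: {time_of_flight_ns(2.0):.1f} ns")
    print(f"aggregate bandwidth:     {aggregate_bandwidth_gbps(32, 10.0):.0f} Gb/s")
```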


Our optically-connected memory (OCM) system provides an all-optical link between processing cores and main memory across an optical interconnection network, such as the interconnect fabric system test-bed. By implementing the processing cores on FPGAs, we can explore novel memory architectures and model various applications. This gives us the ability to perform in-depth architectural exploration and validation as we work to close the growing performance gap between processors and memory.
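
As a minimal illustration of the kind of processor-to-memory transaction carried over such a link, the sketch below models read and write requests to an optically-connected memory node. The field names, sizes, and class names are hypothetical and chosen for clarity; they are not the format used by the actual test-bed.

```python
# Toy model of processor-to-memory transactions carried over an optical link.
# Field names and sizes are illustrative assumptions, not the test-bed's format.

from dataclasses import dataclass
from enum import Enum

class Op(Enum):
    READ = 0
    WRITE = 1

@dataclass
class MemRequest:
    op: Op
    address: int          # target address in the optically-connected memory node
    length: int           # burst length in bytes
    payload: bytes = b""  # data carried by WRITE requests

class OpticalMemoryNode:
    """Toy memory node reachable across the modeled optical interconnect."""
    def __init__(self, size: int):
        self.store = bytearray(size)

    def handle(self, req: MemRequest) -> bytes:
        if req.op is Op.WRITE:
            self.store[req.address:req.address + len(req.payload)] = req.payload
            return b""
        return bytes(self.store[req.address:req.address + req.length])

if __name__ == "__main__":
    node = OpticalMemoryNode(size=1024)
    node.handle(MemRequest(Op.WRITE, address=64, length=4, payload=b"\xde\xad\xbe\xef"))
    print(node.handle(MemRequest(Op.READ, address=64, length=4)).hex())
```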


Only a portion of the processor-to-memory performance gap can be addressed by simply inserting conventional telecommunications-grade optical components into current computing hierarchies. The silicon photonics platform promises to bring low-cost, low-energy, small-footprint optical interconnects to the forefront, enabling tight integration between computational components and optical interconnects. Effective silicon photonic modulators, filters, and switches have been demonstrated at the device level for high-bandwidth and dynamic functionality; however, such devices still lack unified, well-developed integration. By integrating control-plane logic with device-level functionalities such as WDM, wavelength routing, and broadband spatial switching, we can provide a set of optical network primitives and a new paradigm for delivering computational data and processor-to-memory transactions.
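
The sketch below gives a simplified software view of two such primitives: a wavelength-routing table and a broadband spatial switch configuration managed by the control plane. The class and port names are hypothetical placeholders, assumed only for illustration of the primitives described above.

```python
# Sketch of a control plane exposing two optical network primitives:
# wavelength routing (a wavelength-to-port map) and broadband spatial switching
# (an input-to-output port mapping). Names and port numbers are hypothetical.

class WavelengthRouter:
    """Maps each wavelength channel to an output port (e.g., via a ring-filter bank)."""
    def __init__(self):
        self.route = {}  # wavelength index -> output port

    def set_route(self, wavelength: int, port: int):
        self.route[wavelength] = port

class SpatialSwitch:
    """Broadband switch: routes every wavelength on an input port to one output port."""
    def __init__(self, num_ports: int):
        self.state = {p: p for p in range(num_ports)}  # default: straight-through

    def connect(self, in_port: int, out_port: int):
        self.state[in_port] = out_port

if __name__ == "__main__":
    wr = WavelengthRouter()
    wr.set_route(wavelength=0, port=2)   # lambda_0 carries memory traffic to port 2
    sw = SpatialSwitch(num_ports=4)
    sw.connect(in_port=1, out_port=3)    # a bulk transfer bypasses the wavelength router
    print(wr.route, sw.state)
```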


We are further investigating FPGA-enabled silicon photonic interconnects for computational and processor-to-memory traffic, expanding on our work with OCM architectures. We have developed an FPGA-based optical network interface that can execute primitives compatible with OCM data transactions. By investigating such functionalities, we can demonstrate the network transaction paradigms of OCM interconnections, paving the way for new silicon-photonics-enabled computing systems.
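
To illustrate what "primitives compatible with OCM data transactions" can mean in practice, the sketch below maps a single OCM read onto a circuit-setup / transfer / teardown sequence. The primitive names and stub interface are hypothetical placeholders, not the actual command set of our optical network interface.

```python
# Hypothetical mapping of one OCM read onto optical network primitives.
# The primitive names below are placeholders for illustration only.

class StubInterface:
    """Stand-in for the FPGA network interface; prints each primitive it executes."""
    def setup_circuit(self, dst):      print(f"setup circuit to node {dst}")
    def send_request(self, addr, n):   print(f"read request: addr={addr:#x}, len={n}")
    def receive_response(self, n):     print("receive burst"); return bytes(n)
    def teardown_circuit(self, dst):   print(f"teardown circuit to node {dst}")

def ocm_read(interface, mem_node: int, address: int, length: int) -> bytes:
    interface.setup_circuit(dst=mem_node)      # configure wavelength/spatial path
    interface.send_request(address, length)    # issue the read over the optical path
    data = interface.receive_response(length)  # burst returns on the established circuit
    interface.teardown_circuit(dst=mem_node)   # release the path for other traffic
    return data

if __name__ == "__main__":
    ocm_read(StubInterface(), mem_node=3, address=0x1000, length=64)
```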


Related publications



Photonic Networks for Hardware Accelerators in Utility Computing

The industry's biggest technology players are all chasing the cloud-computing market, and for good reason: cloud computing is poised to yield enormous rewards in the coming years. In other words, the paradigm shift from buying single-CPU computers to renting them has begun. This repackaging of computing services, known as utility computing, has become the foundation for the shift to "on-demand" computing, software and gaming as a service, and cloud computing models that continue to propagate the idea of computing, applications, and the network as a service. Heterogeneous computing within these systems, the use of different hardware accelerators (x86 CPUs, FPGAs, GPUs) cooperating on a single computing task, is quickly on the rise. Current architectures, however, only allow for localized hardware acceleration.


High-bandwidth connectivity provided by WDM optical interconnects is an important enabler for delocalized hardware accelerators in heterogeneous utility computing. In this work, we build a testbed of optically connected hardware accelerator emulators on an FPGA-based system, which allows us to perform in-depth architectural exploration and validation of the functionality and advantages of an optically connected heterogeneous utility computing system.
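
A back-of-the-envelope model of offloading a kernel to a delocalized accelerator over an optical circuit is sketched below. The working-set size, link rate, circuit-setup time, and compute time are illustrative assumptions, not measurements from the testbed.

```python
# Back-of-the-envelope model for offloading a kernel to a delocalized accelerator
# over an optical circuit. All parameter values are illustrative assumptions.

def offload_time_us(data_mb: float, link_gbps: float,
                    circuit_setup_us: float, compute_us: float) -> float:
    """Total time = circuit setup + data transfer (both directions) + remote compute."""
    transfer_us = 2 * (data_mb * 8e6) / (link_gbps * 1e3)  # MB -> bits, Gb/s -> bits/us
    return circuit_setup_us + transfer_us + compute_us

if __name__ == "__main__":
    # Hypothetical 16 MB working set, 100 Gb/s WDM link, 5 us setup, 500 us kernel.
    print(f"total offload time: {offload_time_us(16, 100, 5, 500):.0f} us")
```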



Related publications




Optical Network Interface and Protocol Development


While small-scale integrated silicon photonic devices demonstrated in recent years offer high-bandwidth and energy-efficient performance, their benefits at the system level are yet to be realized. A system-level implementation that aims to extract performance gains on real applications requires a completely programmable interconnection network. Maximal optical network utilization can be achieved by intelligently arbitrating wavelength routing and spatial circuit switching, which are key functionalities of silicon photonic architectures. A programmable interface must therefore implement all the required network protocols in a way that is compatible with the protocols specific to operating silicon photonic devices. With programming methodologies available for silicon photonic devices that adhere to optimal network behaviors, these emerging devices can be architected to function and deliver their aforementioned performance benefits at the system level in a fashion that is fully integrated with system software.
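
As a simplified stand-in for the arbitration described above, the sketch below grants circuit requests only when both a wavelength and the destination port are free. The fixed-priority, first-fit policy is an assumption chosen for brevity, not the arbitration scheme of our network.

```python
# Minimal sketch of a circuit arbiter: a request is granted only if its destination
# port is idle and a wavelength is available. Policy and names are illustrative.

def arbitrate(requests, num_wavelengths: int):
    """requests: list of (source, destination) pairs; returns granted (src, dst, wavelength)."""
    free_wavelengths = set(range(num_wavelengths))
    busy_ports = set()
    grants = []
    for src, dst in requests:                      # fixed-priority, single-pass arbitration
        if dst in busy_ports or not free_wavelengths:
            continue                               # destination or all wavelengths in use
        wl = min(free_wavelengths)                 # first-fit wavelength assignment
        free_wavelengths.remove(wl)
        busy_ports.add(dst)
        grants.append((src, dst, wl))
    return grants

if __name__ == "__main__":
    reqs = [(0, 2), (1, 2), (3, 1)]                # two requests contend for port 2
    print(arbitrate(reqs, num_wavelengths=4))
```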


Such tightly integrated hardware-software design is facilitated by our FPGA-based optical network interface (ONIC) logic. Historically, the "C" in ONIC stood for "card", but that usage has been deprecated in recent work as the ONIC has been expanded to operate on many different custom and commercial off-the-shelf FPGA mainboards. Our ONIC implements logic for wavelength stabilization and routing (using available digital-to-analog and analog-to-digital circuitry with closed-loop control), patterned data validation (PRBS and BERT) of established optical links via standard SerDes electrical transceivers, and data-delivery mechanisms coupled to JTAG- and Ethernet-connected software coprocessors. Our silicon photonic interconnection network testing and validation platform employs multiple FPGAs and ONICs for validation and verification of control-, network-, and application-dependent protocols. Application execution on our FPGA-enabled ONIC propagates from hardware-software co-implemented instruction-set architectures to an out-of-band control message-passing interface that establishes different configurations among many interconnected FPGA-controlled silicon photonic devices in a best-effort, semi-simultaneous way.
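
The sketch below is a software analogue of the PRBS/BERT validation function: it generates a PRBS-7 pattern, injects a few errors to emulate a noisy link, and counts mismatches. In hardware this runs in the FPGA fabric against the SerDes; the Python version is a model only, and the pattern length and error positions are arbitrary.

```python
# Software analogue of PRBS/BERT link validation. The real function runs in FPGA
# logic against the SerDes; this model only illustrates the pattern and the check.

def prbs7(num_bits: int, seed: int = 0x7F):
    """Yield a PRBS-7 sequence from a 7-bit LFSR (polynomial x^7 + x^6 + 1)."""
    state = seed & 0x7F
    for _ in range(num_bits):
        yield (state >> 6) & 1                       # output the MSB each cycle
        new_bit = ((state >> 6) ^ (state >> 5)) & 1  # feedback from taps 7 and 6
        state = ((state << 1) | new_bit) & 0x7F

def bit_errors(tx, rx) -> int:
    """Compare transmitted and received bit streams and count mismatches."""
    return sum(a != b for a, b in zip(tx, rx))

if __name__ == "__main__":
    tx = list(prbs7(1000))
    rx = tx.copy()
    for i in (100, 500, 900):          # inject three errors to emulate a noisy link
        rx[i] ^= 1
    errs = bit_errors(tx, rx)
    print(f"bit errors: {errs} / {len(tx)}  (BER ~ {errs/len(tx):.1e})")
```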


Such a system allows for system-level validation and implementation of custom applications that take advantage of optics-enabled network primitives. However, application kernels do not necessarily execute optimally on an arbitrary optical interconnection network, so new methodologies must be developed to incorporate optical network behaviors at the kernel, and perhaps even instruction, level.
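
One way to picture kernel-level inclusion of optical network behavior is sketched below: a decorator holds an optical circuit to the data's home node for the duration of a kernel. The decorator, stub network, and destination node are hypothetical, intended only to illustrate the idea rather than any existing API of ours.

```python
# Illustrative sketch of exposing optical network behavior at the kernel level:
# a decorator keeps a circuit to `dst` open for the whole kernel. Names are hypothetical.

import functools

class StubNetwork:
    """Stand-in control plane that just logs circuit setup and teardown."""
    def setup_circuit(self, dst):    print(f"circuit up to node {dst}")
    def teardown_circuit(self, dst): print(f"circuit down to node {dst}")

def with_optical_circuit(network, dst):
    """Wrap a kernel so an optical circuit to `dst` spans its entire execution."""
    def decorator(kernel):
        @functools.wraps(kernel)
        def wrapper(*args, **kwargs):
            network.setup_circuit(dst)
            try:
                return kernel(*args, **kwargs)
            finally:
                network.teardown_circuit(dst)
        return wrapper
    return decorator

if __name__ == "__main__":
    net = StubNetwork()

    @with_optical_circuit(net, dst=2)
    def vector_sum(values):
        return sum(values)

    print(vector_sum(range(10)))
```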



Related publications

