The application trend of DSP in data centers

100G data centers are already in service, and the next step, 400G, is expected to be ready for commercial use by 2020. The biggest change for 400G applications is the introduction of a new modulation format, PAM-4, which doubles the transmission rate at the same baud rate (i.e., the same device bandwidth). For example, DR4 for transmission distances below 500 meters requires a single-lane rate of 100 Gbps. To realize this rate, data center optical modules have begun to replace the traditional clock-recovery (CDR) chip with a DSP chip based on digital signal processing, in order to solve the sensitivity problems caused by insufficient optical-device bandwidth. Can DSP become the broad solution for future data center applications that the industry predicts? To answer this question, one must understand what problems DSP solves, what its architecture looks like, and how its cost and power consumption will evolve.
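The rate-doubling effect of PAM-4 can be illustrated with a minimal sketch (illustrative only; the Gray mapping and the 53.125 GBd symbol rate are common conventions for 100G-per-lane PAM-4, not figures from this article): four amplitude levels carry 2 bits per symbol, so the bit rate is twice the baud rate.

```python
# Illustrative sketch: why PAM-4 doubles the bit rate at a given baud rate.
# NRZ carries 1 bit per symbol; PAM-4 carries 2 bits per symbol by using
# four amplitude levels. Gray coding keeps adjacent levels one bit apart.

def pam4_encode(bits: str):
    """Map a bit string to PAM-4 levels (Gray code: 00,01,11,10 -> -3,-1,+1,+3)."""
    gray_map = {"00": -3, "01": -1, "11": 1, "10": 3}
    assert len(bits) % 2 == 0, "PAM-4 consumes bits two at a time"
    return [gray_map[bits[i:i + 2]] for i in range(0, len(bits), 2)]

baud_rate_gbd = 53.125            # symbol rate in GBd (typical 100G-per-lane value)
bits_per_symbol = 2               # the defining property of PAM-4
bit_rate_gbps = baud_rate_gbd * bits_per_symbol

print(pam4_encode("00011110"))    # [-3, -1, 1, 3]
print(bit_rate_gbps)              # 106.25 (Gbps line rate)
```

At the same baud rate, an NRZ lane would carry only 53.125 Gbps, which is why PAM-4 reaches a 100G-class lane rate without requiring twice the device bandwidth.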

1. Problems solved by DSP

In the field of physical-layer transmission, DSP was first applied in wireless communication, for three reasons. First, wireless spectrum is a scarce resource while demand for transmission rate keeps growing; improving spectral efficiency is the fundamental requirement of wireless communication, so the DSP must support a variety of complex, highly efficient modulation formats. Second, the transfer function of the wireless channel is very complicated: multipath effects and the Doppler effect under high-speed motion cannot be compensated adequately by traditional analog techniques, whereas a DSP can apply mathematical channel models to compensate the wireless channel's transfer function. Third, the signal-to-noise ratio of wireless channels is often low, so error-correcting codes are needed to improve receiver sensitivity.

In the field of optical communication, DSP was first used in coherent transmission systems at 100G and above, for reasons similar to wireless. First, in long-haul transmission, because laying optical fiber is expensive, operators inevitably want higher spectral efficiency, that is, a higher transmission rate per fiber; once WDM technology is adopted, coherent technology supported by DSP becomes the natural choice. Second, in long-haul coherent systems, chromatic dispersion, the nonlinear effects introduced by the transceiver devices and the fiber itself, and the phase noise of the transmitter and receiver can all be conveniently compensated by a DSP chip, instead of placing dispersion-compensating fiber (DCF) in the link as in the past. Finally, because of fiber attenuation, an optical amplifier (EDFA) is typically inserted roughly every 80 kilometers to boost the signal over transmission distances of thousands of kilometers; each amplification stage introduces noise and lowers the signal-to-noise ratio, so forward error correction (FEC) must be introduced along the way to improve the receiver's sensitivity.

To sum up, DSP solves three problems: first, it supports high-order modulation formats to improve spectral efficiency; second, it compensates device and channel transfer effects; third, it addresses the signal-to-noise-ratio problem through error correction. Whether similar requirements exist inside the data center therefore becomes the key basis for judging whether DSP should be introduced there.

2. DSP in the data center

First, spectral efficiency: does the data center need to improve it? The answer is yes, but for a different reason. Unlike the scarcity of wireless spectrum or of long-haul fiber, the driver inside the data center is the limited bandwidth of electrical and optical devices combined with the limited number of wavelengths or parallel lanes that an optical module package can hold. Raising the single-lane rate is therefore the only way to meet 400G-and-beyond applications.

Second, for single-lane rates above 100G, today's transmitter driver chips and optical devices cannot reach bandwidths above 50 GHz, so a digital signal processing unit is in effect introduced at the transmitter. For applications inside the data center, this unit is relatively simple. For 100G PAM-4, for example, the transmitter mainly performs spectral compression, nonlinear compensation, and optional FEC encoding, while the receiver uses an adaptive filter after the ADC to equalize the signal and performs clock recovery (CDR) in the digital domain (an independent external crystal oscillator is required).

The compensation in the digital signal processing unit is usually implemented as an FIR filter; the number of FIR taps and the design of the decision function directly determine the compensation performance and the power consumption of the DSP. It should be noted in particular that massive parallel computation is a challenge when DSP is applied to optical communication. The main reason is the huge gap between the ADC sampling rate (tens or even hundreds of GS/s) and the clock frequency of digital circuits (a few hundred MHz): to support an ADC sampling at 100 GS/s, the digital circuit must demultiplex the serial 100 GS/s sample stream into hundreds of parallel streams for processing.
One can imagine that a design which adds a single tap to the FIR filter on paper must, in practice, replicate that tap hundreds of times across the parallel lanes; how to balance performance against power consumption in the digital signal processing unit is therefore the decisive factor in whether a DSP design is good or bad. Inside the data center, optical modules must also interoperate with one another. In practice, the transmission performance of a link depends on the combined performance of the transmitter DSP and optical devices and the receiver DSP and optical devices, so designing a reasonable standard that can correctly evaluate transmitter and receiver performance separately is itself a difficulty. When the DSP's physical-layer FEC function is enabled, synchronizing the FEC function of the transmitting and receiving optical modules further increases the difficulty of data center testing. This is why, to date, coherent transmission systems only communicate between equipment of the same manufacturer, and interoperability across manufacturers is not required. For PAM-4, IEEE 802.3 proposed the TDECQ method for performance evaluation.
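The receive-side adaptive FIR equalization described above can be sketched in a few lines of NumPy. This is a toy model, not the article's design: the 3-tap channel, 7-tap equalizer, LMS step size, and noise level are all assumptions chosen for illustration, and a real module would run hundreds of such filters in parallel at a few hundred MHz each.

```python
import numpy as np

# Toy sketch of the adaptive FIR equalizer after the receiver ADC.
# All parameters (channel taps, 7-tap filter, LMS step size, noise) are
# illustrative assumptions, not values from the article.
rng = np.random.default_rng(0)

# Transmit a random PAM-4 sequence (levels -3, -1, +1, +3).
levels = np.array([-3, -1, 1, 3])
tx = rng.choice(levels, size=5000).astype(float)

# A toy bandwidth-limited channel: inter-symbol interference plus noise.
channel = np.array([0.1, 0.8, 0.25])
rx = np.convolve(tx, channel, mode="same") + 0.05 * rng.standard_normal(tx.size)

# LMS-adapted FIR equalizer, trained on known symbols.
n_taps = 7
taps = np.zeros(n_taps)
taps[n_taps // 2] = 1.0          # start from a pass-through filter
mu = 1e-3                        # LMS step size
delay = n_taps // 2              # decision delay of the equalizer

for i in range(n_taps - 1, tx.size):
    window = rx[i - n_taps + 1:i + 1][::-1]   # newest sample first
    y = taps @ window
    err = tx[i - delay] - y                   # error vs. known symbol
    taps += mu * err * window                 # LMS tap update

# Evaluate: slice the equalized output to the nearest PAM-4 level.
errors = 0
for i in range(tx.size - 1000, tx.size):
    window = rx[i - n_taps + 1:i + 1][::-1]
    decision = levels[np.argmin(np.abs(levels - taps @ window))]
    errors += int(decision != tx[i - delay])
print("symbol errors in last 1000:", errors)
```

Every extra tap multiplies hardware cost by the parallelization factor (hundreds of lanes for a 100 GS/s ADC), which is exactly the performance-versus-power trade-off the text describes.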

3. Power consumption and cost

In terms of power consumption, the DAC/ADC and algorithms introduced by DSP inevitably consume more than a traditional CDR chip based on analog techniques, and the options for reducing DSP power are relatively limited, relying mainly on advances in the fabrication process. For example, moving from the current 16 nm to a 7 nm process can cut power consumption by about 65%. At present, the design power of a 400G OSFP/QSFP-DD module based on a 16 nm DSP solution is about 12 W, a serious challenge both for the module itself and for the thermal design of the switch front panel. A 7 nm process may therefore be what makes 400G DSP-based modules practical.
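A quick back-of-the-envelope check of the figures quoted above (taking the 65% saving at face value and applying it to the whole 12 W module budget, which is a simplification since only the DSP portion shrinks):

```python
# Rough check of the quoted figures: a 65% power reduction applied to a
# 12 W 16 nm-class 400G module design. Simplified: in reality only the
# DSP's share of the 12 W budget benefits from the process shrink.
power_16nm_w = 12.0            # quoted 400G OSFP/QSFP-DD design power at 16 nm
reduction = 0.65               # quoted saving from a 16 nm -> 7 nm shrink
power_7nm_w = power_16nm_w * (1 - reduction)
print(f"{power_7nm_w:.1f} W")  # 4.2 W
```

Even this optimistic bound shows why the shrink matters: around 4 W fits comfortably within typical QSFP-DD thermal budgets, while 12 W does not.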

Price is always the central concern in the data center. Unlike traditional optical devices, the DSP chip is built on a mature semiconductor process, so with mass-volume applications there is considerable room for its cost to fall. Another advantage of DSP in future data center applications is flexibility: the same optical device configuration can serve different rates and scenarios simply by adjusting the DSP configuration.