Archive 08/18/2022

The network construction progress of the four major operators after the commercial use of 5G

In June 2019, the Ministry of Industry and Information Technology issued 5G commercial licenses to four communication operators: China Mobile, China Unicom, China Telecom and China Radio and Television, firing the "starting gun" for 5G construction and services. Since then, China has officially entered the 5G era, and 5G network construction has entered the stage of large-scale deployment.

According to calculations by the China Academy of Information and Communications Technology, 5G will drive China's economic growth by 15.2 trillion yuan from 2021 to 2025. Of this, the four major operators' investment in 5G network construction will account for nearly 3.3 trillion yuan. According to Miao Wei, Minister of Industry and Information Technology, as of the 21st of this month, 113,000 5G base stations had been built nationwide, and the total is expected to reach 130,000 by the end of this year.

At the World 5G Conference held recently, the progress of 5G network construction has attracted much attention. China Mobile, China Unicom, China Telecom and China Radio and Television also announced the latest 5G network construction progress.

  China Mobile

In October, China Mobile announced the first batch of 50 5G cities, including 4 municipalities and 46 other cities. Up to now, China Mobile has built more than 40,000 5G base stations in more than 50 key cities nationwide, carried out 5G network construction in more than 300 cities, and completed 5G experience upgrades in 150 business halls across 100 cities. According to the latest data released by China Mobile, it will provide 5G commercial services in more than 50 cities in China in 2019 and build more than 50,000 5G base stations. So far, more than 5,000 5G base stations have been activated in Beijing alone.

By 2020, China Mobile will fully commercialize 5G in 340 cities across the country, which will be the largest 5G network in the world.

In the next step, China Mobile will implement the “5G+” plan in depth to fully release the amplifying, superimposing and multiplying effects of 5G on economic and social development.

Specifically, there are five major actions. The first is to build three major capabilities of cloud network integration, intelligent middle platform and security assurance, and deepen the intelligent upgrade of network capabilities.

The second is to enable industrial integration, drive factor integration, promote management integration, and help industrial transformation and upgrading.

The third is to enrich business rights, product forms and business models, and promote the upgrading of information consumption experience.

The fourth is to build service reputation, promote brand upgrade, strengthen joint promotion, and accelerate the upgrade of users’ full-scale service.

The fifth is to innovate the cooperation model, optimize the cooperation process, expand the cooperation boundary, and realize the ecological upgrade of open cooperation.

Facing the new opportunities brought by 5G, China Mobile will deeply implement the "5G+" plan, serve as the main force in building a network power, a digital China and a smart society, and strive to become the backbone of 5G development, so that 5G truly becomes the aorta of social information flow, an accelerator for industrial transformation and upgrading, and a new cornerstone for building a digital society.

  China Unicom + China Telecom

Based on various considerations, China Unicom and China Telecom are co-constructing and sharing a 5G network. According to the "5G Network Co-construction and Sharing Framework Cooperation Agreement" reached by the two parties, they will jointly build and share 200MHz of 5G spectrum in the 3.5GHz band (3400MHz-3600MHz) nationwide.

Through 5G co-construction and sharing, users will enjoy 5G network services with doubled coverage, speed and bandwidth. The peak network speed can reach 2.7Gbps, currently the world's highest 5G speed.

At present, China Unicom and China Telecom have more than 9,000 base stations.

In addition, according to Ma Hongbing, general manager of China Unicom's operation department, besides 3.5GHz, the two parties will also consider sharing 100MHz at 2.1GHz in the future, so as to achieve coordination across high, medium and low frequency bands and provide better network coverage; in wide-area scenarios, low-frequency spectrum can reduce the number of sites required and the cost of coverage.

Judging from the progress of the industry chain, system equipment supporting 200MHz has gradually matured, but terminal-side support is not yet fully in place. China Unicom and China Telecom are promoting the establishment of relevant standards in the standards organizations. In terms of networking, the two parties have also reached a consensus to jointly accelerate the maturity of SA (standalone networking).

  China Radio and Television

What China Radio and Television will do after obtaining its 5G license is also a market concern. According to Zhao Jingchun, chairman of China Radio and Television, it is currently the only radio and television operator in the world that has obtained a 5G license and is building a network at 700MHz. China Radio and Television is taking the lead in formulating the international standard for 700MHz 5G large bandwidth, while directly adopting the standalone networking route and constructing the network in a scientific manner.

Previously, radio and television 5G had no terminal support; after joint efforts with industry-chain enterprises, the industry chain now basically meets the needs of large-scale network construction.

China Radio and Television also announced its 5G timetable: it plans to start official commercial use of radio and television 5G in 2020, carrying out personal-user and vertical-industry business at the same time, and strives to basically build the radio and television 5G network by 2021 into a new network that spreads positive energy, is widely connected, reaches everyone, carries new applications, serves users well and remains controllable.

Under the fierce competition among the four major operators, China's 5G users are expected to exceed 200 million in 2020. At the same time, 5G will also promote the innovation and development of applications such as high-definition video, AR/VR, cloud gaming and cloud computing, bringing business innovation and growth opportunities. In China, 5G-based cross-industry innovation already covers 19 industries and more than 3,900 companies, leading the digitalization of global industries.


Teach you how to analyze the circuit diagram, usually there are the following 7 steps

Analysis of circuit diagrams should follow the ideas and methods from the whole to the part, from the input to the output, dividing the whole into zeros and gathering zeros into the whole. Use the principle of the whole machine to guide the analysis of the specific circuit, and use the specific circuit analysis to explain the working principle of the whole machine. This can usually be done by following the steps below.


1. Clarify the overall function of the circuit diagram and its main technical indicators. The circuit diagram of a device is designed to realize the overall function of that device. Clarifying the overall function and the main technical indicators gives you a macroscopic understanding of the circuit diagram.

The overall function of the circuit diagram can generally be inferred from the name of the device. For example, the function of a DC regulated power supply is to convert AC power into a stable DC output; the function of an infrared wireless headset is to modulate the audio signal from audio equipment onto infrared light for transmission, after which the receiver receives and demodulates it, restoring a sound signal that is played through the headphones.

2. Determine the flow and direction of signal processing in the circuit diagram. A circuit diagram is generally drawn in the order in which the signal is processed, according to certain customary rules. Dividing the diagram into sections should likewise follow the signal-processing flow. Therefore, when analyzing a circuit diagram, first clarify the signal-processing flow and direction of the diagram.

3. Decompose the circuit diagram into several units with the main components as the core. Except for some very simple circuits, most circuit diagrams are composed of several unit circuits. After mastering the overall function and signal-processing flow of the circuit diagram, you will have a basic understanding of the circuit as a whole; but to analyze the working principle in depth, the complex circuit diagram must be decomposed into unit circuits with different functions.

Generally speaking, in analog circuits, transistors and integrated circuits are the core components of each unit circuit; in digital circuits, microprocessors generally are. Therefore, the core components can serve as markers, and the circuit diagram can be decomposed into several unit circuits according to the signal-processing flow and direction.

4. Analyze the basic functions of the main-channel circuit and its interface relationships. Simple circuit diagrams generally have only one signal channel; more complex ones often have several, including a main channel and several auxiliary channels. The basic function of the whole machine is realized by the unit circuits of the main channel. Therefore, first analyze the function of each unit circuit in the main channel and the interfaces between them.

5. Analyze the function of the auxiliary circuits and their relationship to the main circuit. The role of an auxiliary circuit is to improve the performance of the basic circuit and add auxiliary functions. After understanding the basic function and principle of the main-channel circuit, you can analyze the function of each auxiliary circuit and its relationship to the main circuit.

6. Analyze the DC power supply circuit. The DC power supply of the whole machine is either a battery or a rectified, regulated supply. Usually the power supply is drawn on the right side of the circuit diagram, and the supply current flows from right to left.

7. Analyze the working principle of each unit circuit in detail. On the basis of the overall analysis above, you can analyze each unit circuit in detail, clarifying its working principle and the role of each component, and estimating or calculating the technical indicators.


2 indicators that should be paid attention to when choosing a clock generator

System designers often focus on selecting the most appropriate data converter for the application, and often give less consideration to the selection of the clock-generating device that provides the input to the data converter. However, if the phase noise and jitter performance of the clock generator are not carefully considered, the data converter dynamic range and linearity performance can be severely affected.


System Considerations

A typical LTE (Long Term Evolution) base station using a MIMO (Multiple Input Multiple Output) architecture is shown in Figure 1, which consists of multiple transmitters, receivers, and DPD (Digital Predistortion) feedback paths. Various transmitter/receiver components such as data converters (ADC/DAC) and local oscillators (LOs) require a low-jitter reference clock to improve performance. Other baseband components also require clock sources of various frequencies.

Figure 1. Clock timing solution for a typical LTE base station with MIMO architecture

The clock source used to achieve synchronization between base stations typically comes from a GPS (Global Positioning System) or CPRI (Common Public Radio Interface) link. Such a source generally has excellent long-term frequency stability; however, it must be frequency-translated to the desired local reference frequency to obtain good short-term stability, or jitter. High-performance clock generators perform this frequency translation and provide low-jitter clock signals that can be distributed to the various base station components. Choosing the best clock generator is critical because a poor reference clock increases LO phase noise, which in turn increases transmit/receive EVM (error vector magnitude) and degrades system SNR (signal-to-noise ratio). High clock jitter and a high noise floor also affect data converters, as they reduce system SNR and produce spurious content, further reducing the data converter's SFDR (spurious-free dynamic range). As a result, low-performance clock sources ultimately reduce system capacity and throughput.

Clock Generator Specifications

Although there are various definitions of clock jitter, in data converter applications the most appropriate is phase jitter, measured in the time domain in ps rms or fs rms. Phase jitter (PJBW) is derived by integrating the phase noise of the clock signal over a specific range of offsets from the carrier. The formula is as follows:

PJBW = √( 2 × ∫[fMIN→fMAX] 10^(S(f)/10) df ) / (2π × fCLK)

fCLK is the operating frequency, fMIN/fMAX define the integration bandwidth, and S(f) is the SSB phase noise. The upper and lower integration limits (fMIN/fMAX) are application specific and depend on the spectral components to which the design is sensitive. The designer's goal is to select the clock generator with the lowest integrated noise, i.e. the lowest phase jitter, over the desired bandwidth. Traditionally, clock generators have been characterized over a 12kHz to 20MHz integration range, which is also a specified requirement for optical communication interfaces such as SONET. While this may suffice for some data converter applications, capturing the relevant noise profile of a high-speed data converter sampling clock typically requires a wider integration spectrum, specifically above 20MHz, where the noise is far from the carrier frequency.
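The integration above can be sketched numerically. The following is a minimal illustration, not from the article: the function name is arbitrary and the flat −160dBc/Hz profile is a hypothetical stand-in for a real measured phase-noise plot (real profiles are better integrated segment-by-segment on log-log axes).

```python
import math

def rms_phase_jitter(offsets_hz, ssb_dbc_hz, f_clk_hz):
    """Integrate an SSB phase-noise profile L(f) (dBc/Hz) between the given
    carrier offsets and convert to RMS phase jitter in seconds:
    PJ = sqrt(2 * integral) / (2 * pi * fCLK)."""
    lin = [10 ** (l / 10.0) for l in ssb_dbc_hz]   # dBc/Hz -> linear density
    area = 0.0
    for i in range(len(offsets_hz) - 1):           # trapezoidal integration
        df = offsets_hz[i + 1] - offsets_hz[i]
        area += 0.5 * (lin[i] + lin[i + 1]) * df
    return math.sqrt(2.0 * area) / (2.0 * math.pi * f_clk_hz)

# Hypothetical 160MHz clock with a flat -160dBc/Hz profile, 12kHz-20MHz band
jitter_s = rms_phase_jitter([12e3, 20e6], [-160.0, -160.0], 160e6)
print(round(jitter_s * 1e15, 1))  # jitter in femtoseconds -> 62.9
```

With a realistic profile that rises close to the carrier, the integrated jitter would be somewhat higher, which is consistent with the ~112fs rms quoted later for the HMC1032LP6GE.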

For a data converter sampling clock, it is this far-from-carrier phase noise that matters most. The limiting value of this noise is often referred to as the phase noise floor, as shown in Figure 2, which presents an actual measurement of the ADI HMC1032LP6GE clock generator. The phase noise floor is particularly important in data converter applications because the converter SNR is extremely sensitive to broadband noise at its clock input. When designers evaluate clock generator options, phase noise floor performance must be a key benchmark.

Figure 2. Phase noise and jitter performance of the HMC1032LP6GE

In Figure 2, operating at ~160MHz, the integrated phase jitter is ~112fs rms over a 12kHz to 20MHz integration bandwidth, and the phase noise floor is ~−168dBc/Hz. It is worth noting that when choosing the most appropriate clock generator for a data converter, the designer must consider not only frequency-domain phase noise measurements but also time-domain clock signal quality measurements such as duty cycle and rise/fall time.

Data Converter Performance

To describe the effect of clock noise on the performance of a data converter, consider the converter as a digital mixer with one slight difference. In a mixer, the phase noise of the LO is added to the signal being mixed. In a data converter, the phase noise of the clock is likewise added to the converted output, but suppressed by the ratio of the signal frequency to the clock frequency. Clock jitter causes sampling-time errors that manifest as SNR degradation. (Time jitter, TJITTER, is the rms error in the sampling time, in seconds.)

In some applications, clock filters may be used to reduce jitter on the clock signal, but this approach has significant drawbacks:

While the filter may remove the wideband noise of the clock signal, the narrowband noise remains unchanged.
The filter output typically has a slow slew rate, similar to a sine wave, which makes the clock signal more susceptible to noise inside the clock path.
The filter removes the flexibility to change the clock frequency to implement multiple sample-rate architectures.

A more practical approach is to maximize the slope of the clock signal with a low noise clock driver with fast slew rate and high output drive capability. This approach can optimize performance for the following reasons:

Eliminating the clock filter reduces design complexity and component count.
The fast rise time suppresses noise inside the ADC clock path.
Both narrowband and wideband noise can be optimized by choosing the best clock source.
Programmable clock generators enable different sampling rates, increasing the flexibility of the solution for different applications.

An ultra-low clock noise floor is critical. Clock jitter noise far from the carrier is sampled by the ADC and folded into the ADC's digital output band. This band is limited by the Nyquist frequency, defined as:

fNYQUIST = fs / 2

Clock jitter is typically dominated by the wideband white noise floor of the ADC clock signal. While the SNR performance of an ADC depends on a variety of factors, the effect of wideband jitter on the clock signal is determined by:

SNRJITTER = −20 × log10(2π × fIN × TJITTER)

As the above equation shows, unlike in a mixer, the SNR limit imposed by clock jitter degrades in proportion to the ADC analog input frequency (fIN).
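To make this dependence on input frequency concrete, here is a minimal sketch of the relation (the function name is illustrative, not from the article):

```python
import math

def snr_jitter_db(f_in_hz, t_jitter_s):
    """Jitter-limited SNR over the full Nyquist band:
    SNR = -20 * log10(2 * pi * fIN * tJITTER)."""
    return -20.0 * math.log10(2.0 * math.pi * f_in_hz * t_jitter_s)

# The same 100fs clock supports far less SNR at a higher input frequency:
print(round(snr_jitter_db(10e6, 100e-15), 1))   # 10MHz input  -> 104.0 dB
print(round(snr_jitter_db(200e6, 100e-15), 1))  # 200MHz input -> 78.0 dB
```

Doubling either the input frequency or the jitter costs about 6dB of SNR, which is why high-IF sampling architectures demand femtosecond-class clocks.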

When driving an ADC, clock noise is limited by the bandwidth of the clock driver path, typically dominated by the ADC clock input capacitance. Broadband clock noise modulates the larger input signal and adds to the ADC output spectrum. Phase noise in the clock path degrades output SNR in proportion to the amplitude and frequency of the input signal. The worst case is a large high-frequency signal present alongside a small desired signal.

In modern radio communication systems, multiple carrier signals are often present at the input, and each target signal is then filtered in the DSP to match its signal bandwidth. In many cases, a large unwanted signal at one frequency mixes with clock noise, reducing the available SNR at other frequencies in the ADC passband. In this case, the target SNR is the SNR within the desired signal bandwidth, and the SNRJITTER values above are actually relative to the amplitude of the largest signal (usually an unwanted or blocking signal).

The output noise in the desired signal band can be estimated as follows:

a. For a given input frequency, calculate the degradation in ADC performance due to clock noise and the large unwanted signal; that is, calculate the SNR over the full Nyquist bandwidth of the ADC.
b. Improve the SNR in the desired signal bandwidth by the ratio of the full bandwidth of the data converter to the desired signal bandwidth (the processing gain).
c. Increase this value by the amount the unwanted signal lies below full scale.

The result of step b simply corrects the SNR equation shown earlier in the following way:

SNRJITTER = −20 × log10(2π × fIN × TJITTER) + 10 × log10(fs / (2 × fBW))

SNRJITTER: SNR contribution of clock jitter in bandwidth fBW, in the presence of a large signal of frequency fIN, with sampling rate fs.

fIN: the input frequency of the full-scale unwanted signal, in Hz.
TJITTER: the rms jitter of the ADC clock, in seconds.
fBW: the bandwidth of the desired output signal, in Hz.
fs: the sampling rate of the data converter, in Hz.
SNRDC: the SNR of the data converter with a near-DC input, in dB.

Finally, in the presence of a full-scale blocking signal, the maximum usable SNR in the signal band of interest is obtained by summing the jitter-contributed and DC-contributed noise powers.

For example, consider a 500MSPS data converter with an ENOB of 12.5 bits (near DC), i.e. an SNR of 75dB, evaluated over a bandwidth equal to half the sampling rate (250MHz). If the bandwidth of the signal of interest is 5MHz, the achievable SNR near DC (5MHz bandwidth, perfect clock) is 75 + 10×log10(250/5) = 92dB.

However, the ADC clock is not perfect. Figure 3 shows the degradation within the desired 5MHz signal bandwidth as a function of the frequency (x-axis) of a large unwanted input signal. The effect of the unwanted signal becomes more severe as jitter increases, and likewise as the input frequency increases. If the amplitude of the unwanted signal decreases, the available SNR increases proportionally.

Figure 3. ADC SNR vs. Clock Jitter and Input Frequency

For example, if a full-scale 5MHz unwanted W-CDMA signal is sampled at a 200MHz input frequency, using a high-quality 500MHz clock (such as the HMC1034LP6GE) operating in integer mode with 70fs jitter, the SNR is about 91dB. Conversely, if the clock jitter increases to 500fs, the same data converter and signal exhibit an SNR of only 81dB, a 10dB drop in performance.

Feeding the same signal into the data converter at a 400MHz input frequency, a 70fs clock produces an SNR of 88dB; with a 500fs clock, the SNR drops to only 75dB.
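These figures can be approximately reproduced by combining the jitter-limited SNR (with processing gain) and the converter's DC SNR as noise powers, following the calculation steps described above. This is a sketch under the stated assumptions (500MSPS, 75dB DC SNR, full-scale blocker, 5MHz band of interest); the function name is illustrative, not from the article.

```python
import math

def usable_snr_db(f_in_hz, t_jitter_s, f_s_hz, f_bw_hz, snr_dc_db):
    """Maximum usable SNR in a band f_bw: combine the jitter-limited SNR
    (plus processing gain fs/(2*f_bw)) with the converter's DC SNR by
    summing the two noise powers."""
    gain = 10.0 * math.log10(f_s_hz / (2.0 * f_bw_hz))   # processing gain
    snr_jit = -20.0 * math.log10(2.0 * math.pi * f_in_hz * t_jitter_s) + gain
    snr_dc = snr_dc_db + gain
    noise = 10 ** (-snr_jit / 10.0) + 10 ** (-snr_dc / 10.0)
    return -10.0 * math.log10(noise)

# 500MSPS converter, 75dB DC SNR, 5MHz band of interest:
print(round(usable_snr_db(200e6, 70e-15, 500e6, 5e6, 75.0)))   # -> 91 dB
print(round(usable_snr_db(200e6, 500e-15, 500e6, 5e6, 75.0)))  # -> 81 dB
print(round(usable_snr_db(400e6, 500e-15, 500e6, 5e6, 75.0)))  # -> 75 dB
```

The 400MHz/70fs case comes out near 89dB with this simple model, close to the 88dB quoted above; the small difference is within the precision of the rounded article figures.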

In summary, choosing the right components for clock generation and data conversion allows you to get the best performance out of a given architecture. Important criteria to consider when selecting a clock generator are phase jitter and phase noise floor, which affect the SNR of the data converter being driven. For the selected clock generator, its low phase noise floor and low integrated phase jitter characteristics help minimize the degradation of SNR performance at higher ADC input frequencies in multicarrier applications.


Chinese pilots battle AI opponents in simulated dogfights


A recent report by Chinese state media said that in simulated dogfights, Chinese Air Force pilots have lost a considerable number of engagements to AI-powered adversaries. This sounds reminiscent of the very public results of last year's DARPA AlphaDogfight Trials, which have since fed into more advanced demonstrations. It also highlights the PLA's growing interest and investment in the development of advanced AI and machine learning technologies in general.

Earlier this week, Chinese state media Global Times reported on the People’s Liberation Army Air Force (PLAAF) fighting “artificial intelligence aircraft” in a simulator, citing another report in the PLA Daily over the weekend.


Xinhua News Agency: The Chinese People’s Liberation Army Air Force pilots were assigned to the Bayi Aerobatic Team.

"The AI demonstrated adept flight control skills and accurate tactical decision-making, making it a valuable adversary for honing our capabilities," Du Jianfeng, identified as the commander of an unspecified PLA Air Force brigade assigned to the PLA's Central Theater Command Air Force, told the PLA Daily, according to the Global Times.

The Global Times report also said that the AI has reportedly been used in simulator training for "years" and that it is able to "learn from pilots because it collects data from each training session". As a result, "At first, it was not difficult to defeat the AI. But by studying the data, every engagement became an opportunity for it to improve," Fang Guoyu, a group leader in Du's brigade, also told the PLA Daily. Fang was further identified as the top performer in a recent real-world air-combat exercise.

"Fang Guoyu [sic] once used a well-thought-out strategy to defeat the AI by a narrow margin, but in the ensuing session the AI used the same strategy to defeat him," the Global Times article continued.

Of course, it's worth noting that this is all according to Chinese state media. Regardless of how long the AI has been in use in Du's brigade, it's unclear how widely the PLAAF will use the technology in simulated training or any other application, and how aggressively it might pursue its continued development.

At the same time, it is notable that the PLA Daily itself, not to mention the Global Times, which is affiliated with the Communist Party's official newspaper People's Daily, chose to highlight an apparently top PLA Air Force pilot being beaten by advanced AI in simulated battles, however accurate the description of this simulated opponent's capabilities may be.

Nonetheless, the types of technologies and capabilities described in the Global Times story are hardly beyond the publicly known state of the art in AI. In fact, the simulated AI-powered adversary the PLA Air Force uses to train its pilots, and Fang Guoyu's specific experience, sound very similar to the publicly broadcast AlphaDogfight Trials mentioned above, an initiative led last year by the U.S. Defense Advanced Research Projects Agency (DARPA).

At the conclusion of the three-day event in August 2020, a U.S. Air Force F-16 fighter pilot from the District of Columbia Air National Guard was defeated five times in a row by an AI agent in one-on-one mock air combat. Notably, pilots and experts subsequently raised questions about the validity of these trial results and their applicability to real-world air combat.

“This (the ability of artificial intelligence and its learning ability) forces pilots to develop more and more innovative tactics and make breakthroughs to win these simulations,” according to the Global Times. “Simulation training can improve training efficiency, save costs, and reduce flight risks. With the rapid development of science and technology, the use of simulation training has become the common goal of the world’s major military powers.”

Broadly speaking, these are potential benefits of integrating artificial intelligence and machine learning into military training regimens. For example, the U.S. military is also increasingly exploring the use of artificial intelligence and machine learning techniques to help improve and save on air combat and other types of training. A particularly notable example is the work of a company called Red 6 in partnership with the U.S. Air Force to develop an augmented reality system that enables pilots of real-world jets to face fully virtual adversaries. You can read more about Red 6 developments that the company hopes will apply to future ground training as well in these past Warzone articles.


A U.S. Air Force pilot wears the Red 6’s augmented reality headset on his helmet

Beyond its use in training, the report illustrates the PLA's growing interest in AI for wider applications and what China has already done in this area. Virtual adversaries on simulators, where algorithms can be tried out first, are likely stepping stones to systems able to operate real-world unmanned platforms at every level of autonomy, including fully autonomous unmanned combat air vehicles (UCAVs), another area of development in which China continues to make great strides. The Global Times article pointed out that artificial intelligence and machine learning technologies can also be applied to manned aircraft to improve efficiency and reduce workload, including aiding decision-making in actual combat.

Again, the PLA is not alone in any of this. The AlphaDogfight Trials were an adjacent effort to a program called Air Combat Evolution (ACE), which more broadly describes autonomous dogfighting capability for unmanned aircraft as "a gateway to non-linear combat autonomy." In March, DARPA announced that AI-controlled simulated F-16s were working in pairs in a virtual training environment, with hopes of demonstrating the technology on a real small drone later this year. The goal is to integrate the technology into a modified full-size jet trainer by 2023.


A DARPA briefing slide showing visually how DARPA envisions the Air Combat Evolution program leading to more advanced air combat autonomous development

The U.S. Air Force is also advancing its Skyborg program, which is developing an artificial intelligence-driven system that it hopes will be able to operate “loyal wingman” drones that work with manned platforms and UCAVs. Some of the technology can also be applied to manned aircraft. An initial version of Skyborg’s “computer brain” underwent its first flight test earlier this year.

Separately, the U.S. Air Force has been preparing a planned demonstration, currently expected in 2024, that could see manned fighter jets engage in real-life dogfights with autonomous drones. This is just some of the work being done around AI and machine learning across the U.S. military and similar developments are taking place in other militaries around the world, as well.

In the United States, interest in these technologies has been particularly high in recent years, due in large part to developments in China. A report released earlier this year by the U.S. government's National Security Commission on Artificial Intelligence put it bluntly: for now, "the United States is not prepared to defend or compete in the age of artificial intelligence."

“China’s plans, resources and progress should be of concern to all Americans,” it added. “It’s an AI peer in many fields and an AI leader in some applications.”

So while it's hard to say exactly how capable the AI that PLAAF pilots are training against in simulators might be, it mirrors developments elsewhere, including in the United States. It also underscores the significant investments the Chinese military has made in this field as it strives to become a world leader in the application of artificial intelligence technology.


Semiconductor front-end equipment market: five giants control, Chinese manufacturers have a long way to go

According to statistics from the market research institute SEMI, in 2020 mainland China became the world's largest regional market for semiconductor equipment for the first time, with sales up 39% year on year to US$18.72 billion. Taiwan ranked second, with equipment sales of US$17.15 billion in 2020; South Korea ranked third, with sales up 61% to US$16.08 billion; Japan ranked fourth, with sales of US$7.58 billion. East Asia has become the focal point of the global semiconductor arms race, accounting for US$59.53 billion in 2020, or 83.6% of total global semiconductor equipment spending (US$71.2 billion) that year.

As geopolitics increasingly disrupts the semiconductor supply chain, competition for semiconductor capacity investment across regional markets intensified in 2021. TSMC and Samsung both raised their 2021 semiconductor capital expenditure plans to more than US$30 billion, racing each other on advanced-process mass production: both expect to mass-produce 3-nanometer processes in 2022, and both will also build fabs in the United States. There is no doubt that 2021 will be another bumper year for semiconductor equipment makers, but because of export restrictions, mainland China cannot purchase equipment such as high-end EUV lithography machines and may find it difficult to hold on to the throne of largest equipment market.

In the view of Taiwan’s Industrial Technology Research Institute (ITRI), different regional markets have different development goals for the semiconductor industry. Mainland China, the world’s largest semiconductor application market, hopes to accelerate its technological catch-up amid threats to supply-chain security and escape its dependence on others in key links such as manufacturing, equipment and materials. The United States, as the leader of the global semiconductor industry chain, will continue tightening export controls on high-end chipmaking equipment to mainland China, while introducing new policies to reverse the decline of its domestic wafer manufacturing capacity in recent years. Taiwan and South Korea, the two regions with the world’s highest density of wafer manufacturing, will likely continue to lead the development of wafer manufacturing processes, and both will also leverage their strong manufacturing capacity to improve their autonomy in upstream materials and equipment.

Development direction of advanced technology

Beyond 28 nanometers, the planar transistor process reached its limit; FinFET (fin field-effect transistor) extended Moore’s Law and pushed process nodes to today’s 10 nanometers and below. But the FinFET route is also approaching its limit. Samsung will be the first to adopt the GAA (gate-all-around) structure at its 3nm process, while TSMC will keep the FinFET structure at its 3nm node and move to the GAA structure at 2nm. ITRI believes that Intel, having run into trouble in wafer manufacturing, may see its 7-nanometer process (note: the three companies define process sizes differently, so the numbers are not directly comparable) slip to 2023, and that its 5nm node will also switch to the GAA architecture.


Source: Taiwan Institute of Industrial Technology

GAA stands for Gate-All-Around, a surround-gate transistor technology also called GAAFET. The concept was proposed very early: Dr. Cor Claeys of Belgium’s imec and his research team described it in a 1990 paper. GAAFET is effectively an improved version of the 3D FinFET. The transistor structure changes again: the channel between source and drain is no longer a fin, but a set of “little bars” passing horizontally through the gate, so that the gate wraps around the channel on all four sides.


Source: Taiwan Institute of Industrial Technology

Compared with FinFET, where the source-drain channel itself forms the fin, in GAAFET it is the gate that now forms the fin-like structure. GAAFET and 3D FinFET therefore share many implementation principles and ideas, which is a great advantage for fabs. Going from three contact surfaces to four, and splitting the channel into several four-sided surfaces, clearly gives the gate even better control over the current.

Compared with the FinFET process, the GAA structure has a larger gate contact area, which improves the transistor’s control over the conduction channel and significantly improves parasitic parameters such as capacitance. It can therefore run at lower voltage, with less leakage current, lower power consumption and lower operating temperature, which helps raise integration density and continue Moore’s Law.

Because the production process required for the new structure is similar to that of fin transistors, the key process steps are almost identical, and existing equipment and accumulated know-how can continue to be used. For TSMC and Samsung, this is undoubtedly the cheapest route for a technology transition. However, GAA further raises the bar on processing accuracy, requiring area-selective deposition and atomic-level processing capabilities. The importance of materials engineering will therefore grow, driving more business opportunities for deposition and etching equipment.

The semiconductor front-end equipment market controlled by the five giants

Although more than 80% of the world’s equipment was sold into East Asia in 2020, apart from Japan, China (mainland plus Taiwan) and South Korea carry little weight in the equipment industry itself. The world’s top five semiconductor front-end equipment makers (front-end meaning wafer fabrication; packaging is the back end), namely Applied Materials (AMAT), ASML, Lam Research, Tokyo Electron (TEL) and KLA, together hold more than 70% of the market. Of the five, only Tokyo Electron is headquartered in East Asia; the rest are American and European companies.


Source: Taiwan Institute of Industrial Technology

Specifically, Applied Materials ranks first, with 2020 revenue of US$17.2 billion, about 70% of it from its semiconductor business. Applied Materials has a very broad equipment portfolio: its PVD (physical vapor deposition) equipment holds 38% of the global market, CMP (chemical mechanical polishing) equipment 70%, etching equipment 15%, and ion implanters 67%.

ASML ranks second, with 2020 revenue of €13.98 billion. ASML is the leading maker of lithography machines and is currently the world’s only supplier of EUV (extreme ultraviolet) lithography systems. Thanks to the pursuit of advanced processes by TSMC, Samsung and Intel, ASML sold 31 EUV lithography machines in 2020, and these alone accounted for 32% of its total revenue. ASML is also working with supply-chain partners to develop and promote finer lithography processing: in 2020 it launched multi-electron-beam inspection systems, limiting beam-to-beam interference to below 2%, for processes at the 5nm node and beyond. ASML has also teamed up with Lam Research and imec to develop dry photoresist technology that improves EUV resolution while using less photoresist, with Lasertec on a new generation of EUV mask inspection technology, and with TSMC on a new generation of EUV mask cleaning technology, both aimed at cutting costs.

Tokyo Electron’s equipment sales in 2020 were US$10.37 billion, ranking third. Tokyo Electron’s coater/developer equipment holds 91% of the global market, and its EUV coater/developers have that segment entirely to themselves; its etching machines hold 25% of the global market, deposition equipment 37%, and cleaning equipment 27%. From 2020 to 2022, Tokyo Electron plans to invest 400 billion yen in R&D, focusing on technologies such as selective deposition, intelligent etching, and supercritical-fluid cleaning.

Lam Research’s revenue in 2020 was US$10.05 billion, ranking fourth. Lam is the global leader in etching equipment; memory manufacturing accounts for 57% of its business and logic processes 43%, making it the only one of the top five equipment makers with memory above 50%. As mentioned earlier, Lam and ASML are developing EUV dry photoresist technology with imec.

KLA’s revenue in 2020 was US$5.81 billion, ranking fifth. KLA is the world leader in wafer inspection equipment, with more than 50% market share. KLA has also invested in stress and deformation metrology and multi-beam inspection, and is developing electron-beam defect measurement technology for structures smaller than 5 nanometers.


Source: Taiwan Institute of Industrial Technology

According to ITRI’s estimates, key global semiconductor equipment will grow strongly over the next few years. Except for DUV (deep ultraviolet) lithography equipment, every category shows a positive growth trend. ALD (atomic layer deposition) grows fastest, with a projected average annual growth rate of 26.3% from 2020 to 2025; EUV equipment ranks second, also with double-digit average annual growth. By value, EUV equipment leads: EUV equipment sales are expected to reach US$12.55 billion in 2025, overtaking etching machines to become the highest-selling segment of semiconductor equipment.
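As a sanity check on the growth figures above, a compound annual growth rate translates into an end-of-period multiple via end = start × (1 + CAGR)^years. A minimal sketch; the base market size below is a hypothetical placeholder, since the article cites only the rate:

```python
# Compound growth from a CAGR: market_end = market_start * (1 + cagr) ** years.
# `base` is a hypothetical placeholder; the article gives only the 26.3% rate.
def project(base: float, cagr: float, years: int) -> float:
    return base * (1 + cagr) ** years

# ALD equipment at a 26.3% CAGR from 2020 to 2025:
growth_factor = project(1.0, 0.263, 5)
print(f"{growth_factor:.2f}x the 2020 market size")  # about 3.21x
```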


Source: Taiwan Institute of Industrial Technology

Chinese equipment manufacturers have a long way to go

Compared with international manufacturers, Chinese equipment makers lag far behind in both sales scale and technology. According to the Electronic Special Equipment Industry Association, sales of domestic semiconductor equipment in 2019 were 16.182 billion yuan, only about half the 2019 revenue of fifth-ranked KLA (US$4.6 billion). In October 2020 the association estimated that total domestic semiconductor equipment sales would reach 21.3 billion yuan in 2020, still less than 60% of KLA’s (US$5.8 billion).


Comparison of major semiconductor equipment companies at home and abroad (as of January 6, 2021)

Source: Debon Securities

Technically, China’s semiconductor equipment industry has essentially not taken part in the R&D phase of advanced processes (3 nanometers and below), and it cannot yet supply a fully domestic line even for quasi-mainstream processes such as 28 nanometers. The products most domestic equipment makers can commercialize today still mainly target mature process lines. Take the much-discussed lithography machine: the main domestic player is Shanghai Micro Electronics Equipment (“SMEE”), whose mainstream products can only meet the requirements of 90-nanometer and 110-nanometer lithography processes.

But in some fields Chinese makers have already joined the advanced-process race. For example, China Micro Semiconductor (AMEC) has been qualified by TSMC in CCP etching and entered its 7nm/5nm production lines. North Huachuang (NAURA) is relatively strong in ICP etching: its etching equipment for 28nm and above has been industrialized, and in advanced processes its silicon etching equipment has broken through 14nm technology and entered the Shanghai IC R&D Center.

According to the “Made in China 2025” targets, semiconductor core components and key materials should have reached a 40% self-sufficiency rate in 2020, rising to 70% localization by 2025. But public bidding information from ten domestic wafer manufacturers, including Yangtze Memory and Huali Microelectronics, covering 2017 through the first quarter of 2021, shows the localization rate is still far from the target. From 2017 to 2019 these ten fabs opened bids for a total of 4,197 sets of equipment, of which 431 were made in China, a localization rate of about 10.3%. From 2020 to the first quarter of 2021 they opened bids for 1,862 sets, of which 315 were domestic, an estimated localization rate of about 17%, up more than 6 percentage points from 2017-2019.
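The localization rates quoted above follow directly from the bid counts; a quick check:

```python
# Domestic share of equipment sets in the top-ten fabs' public bid openings.
def localization_rate(domestic: int, total: int) -> float:
    return domestic / total

print(f"2017-2019:    {localization_rate(431, 4197):.1%}")   # 10.3%
print(f"2020-Q1 2021: {localization_rate(315, 1862):.1%}")   # 16.9%, about 17%
```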


Left: Localization rate of top ten fab equipment in 2017-2019

Right: Localization rate of top ten fab equipment in the first quarter of 2020-2021

Source: Debon Securities

But domestic semiconductor equipment has also shown very positive changes over the past few years. Technically, companies led by China Micro Semiconductor, North Huachuang and Yitang Semiconductor have approached world-class makers in etching, deposition, dry photoresist stripping, cleaning and ion implantation. In market share, some products have passed 20%, mounting a strong challenge to the market position of international semiconductor equipment makers.

The bidding information released by the top ten fabs shows that from 2017 to the first quarter of 2021, the localization rate of dry stripping equipment reached 45.5%; cleaning equipment (30.6%), etching equipment (22.2%) and polishing equipment (21.6%) were above 20%; furnace equipment (14.7%) and coater/developer equipment (10.0%) were above 10%; deposition equipment (8.5%) and front-end inspection equipment (5.2%) were lower, between 5% and 10%; and the biggest gaps were in ion implanters (2.4%), back-end test equipment (1.9%) and lithography machines (1.6%).

After 2020, localization picked up speed. The top ten fabs’ bid-opening data show that from 2020 to the first quarter of 2021, the localization rates of polishing, etching, furnace and coater/developer equipment each rose by double digits of percentage points over their 2017-2019 levels.

From the perspective of the global semiconductor equipment industry, as processes approach physical limits it becomes ever harder to develop each new generation of equipment. Take the EUV lithography machine: the project was launched as early as the 1990s, with nearly 200 research institutions in more than 40 countries taking part (more than 100 of them in Europe). From basic research and engineering development to system integration, the R&D effort consumed more than 100 billion yuan, exceeding the total revenue of China’s equipment industry over the past decade.

In a media interview, Yin Zhiyao, chairman of China Micro Semiconductor, likewise said that semiconductors have entered the era of atomic-level processing, and that manufacturing devices at this precision draws on the knowledge and technology of more than 50 disciplines. He noted that plasma etching now carves holes one-thousandth to one ten-thousandth the diameter of a human hair, with the accuracy, uniformity and repeatability of the hole diameter controlled to within a few ten-thousandths of a hair’s diameter. A single etching machine processes an astronomical number of fine, deep contact holes every year, with almost 100% of the holes fully opened.

Semiconductor equipment must not only achieve extremely fine processing; what matters most is uniformity, stability, repeatability, reliability and cleanliness. Only then can it meet the core requirement of wafer fabrication: chip yield. To guarantee an acceptable final yield (say 90%), every single process step must have an extremely high yield, because an advanced-process chip today requires about 1,000 process steps; if each step yields 99.9%, the final yield after 1,000 steps is only 36.77%.
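The yield arithmetic in the last sentence compounds the per-step yield over all steps; a minimal sketch, assuming independent, identical steps:

```python
# Final yield after N sequential process steps, each with per-step yield p:
# Y = p ** N (assumes independent, identical steps).
def final_yield(per_step_yield: float, steps: int) -> float:
    return per_step_yield ** steps

print(f"{final_yield(0.999, 1000):.2%}")  # 36.77%, as in the text

# Per-step yield needed to reach a 90% final yield over 1,000 steps:
print(f"{0.90 ** (1 / 1000):.4%}")        # about 99.9895%
```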

Chip manufacturing, the flagship of modern intelligent manufacturing and the foundation of the global information industry, rarely gives new equipment makers room for trial and error. Even after enormous effort, a finished prototype completes only a small part of equipment development; the harder part is persuading wafer manufacturers to risk their yield and capacity to help with trial runs. Yin Zhiyao said that even once a prototype is built and customers are willing to cooperate, the machine must pass at least 80-plus rigorous test items before it can finally meet a fab’s requirements.

The history of international equipment makers shows that to hold its ground in the market, the equipment industry cannot afford rash advances; it must make long-term R&D investments and consolidate its technical foundations. As Yin Zhiyao put it, semiconductor equipment requires 50 disciplines working together, no less difficult than the “two bombs, one satellite” program. “The magnetic-levitation molecular pump, one of the key core components of the semiconductor industry, can be made by only two or three companies in the world, and every one of these things requires long-term technical accumulation.”

The semiconductor industry grew to its current scale thanks to globalization, but since the Trump administration took office, the United States has wielded the industry as a weapon in a technology war against China, seriously damaging the mutual-trust mechanisms the global electronic information industry had built into its supply chains. For China, the world’s most important electronic information manufacturing base, building a new semiconductor supply chain insulated from geopolitics is crucial; moreover, whenever a technology achieves a genuine market breakthrough (for example, 20% market share), the corresponding restriction items in the Wassenaar Arrangement lose their meaning. Whether the localization rate of equipment can be effectively raised has thus become a key factor in the healthy development of the global electronic information industry. China’s rapid expansion in wafer manufacturing offers domestic semiconductor equipment makers broad market space and room for trial and error. For them, this is an excellent historical opportunity.


Compliance testing poses specific challenges that demand robust PCIe 5.0 transmitter verification

With 5G and IoT connected devices, and the high bandwidth they demand, expected to rise sharply, data center operators will need to migrate to networks faster than the 100 Gigabit Ethernet (100GE) commonly used today. Migrating to next-generation 400GE networks requires faster memory and higher-speed serial bus communications. Besides upgrading the Ethernet interface to 400GE, servers also need higher-speed serial expansion bus interfaces and memory.

The PCIe (PCI Express) expansion bus is now migrating to the latest standard, PCIe 5.0, also known as PCIe Gen 5, while DDR (double data rate) memory is migrating from DDR4 to DDR5. The PCIe 5.0 specification is a fast-tracked speed enhancement of the PCIe 4.0 standard developed by the PCI-SIG, the standards body that defines all PCIe specifications. With the finalization of the PCIe 5.0 Card Electromechanical (CEM) specification, released in June 2021, the PCIe 5.0 standard was recently completed; the CEM document is the companion to the PCIe 5.0 base (silicon) specification released in 2019.

The evolution of the PCIe standard doubles the transmission speed

The original parallel PCI bus was introduced in 1992 to expand the capabilities of personal computers, allowing the addition of graphics cards, network cards and many other peripherals. PCIe is a high-speed serial bus designed to replace PCI and other legacy interfaces such as PCI-X (PCI eXtended) and AGP (Accelerated Graphics Port). PCIe offers high throughput in a small footprint, with link widths scalable across ×1, ×2, ×4, ×8, and ×16. PCIe uses a point-to-point topology between the root complex (system/host) and endpoints (add-in cards), supporting full-duplex, packet-based communication.



PCIe Duplex Link Communication

The PCIe 1.0 standard came out in 2003, providing 2.5 gigatransfers per second (2.5GT/s). PCIe today spans 2.5GT/s to 32GT/s. PCIe 5.0 doubled PCIe 4.0’s transfer rate from 16GT/s to 32GT/s but added no new features, because the goal at the time was to deliver the extra speed in the shortest possible time.

All PCIe standards released to date use non-return-to-zero (NRZ) signaling. However, the PCI-SIG is currently developing the PCIe 6.0 specification, which will again double the transfer rate, to 64GT/s, and migrate away from NRZ: PCIe 6.0 will use PAM-4 signaling plus low-latency FEC (forward error correction) to preserve data integrity.

All PCIe standards must be backward compatible; that is, a PCIe 5.0 device (32GT/s maximum data rate) must also support 2.5GT/s, 5GT/s, 8GT/s, and 16GT/s.
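The per-generation rates above determine usable link bandwidth once line-coding overhead is included; a short sketch (Gen 1-2 use 8b/10b line coding, Gen 3 onward 128b/130b):

```python
# Raw per-lane rate (GT/s) and effective data bandwidth per PCIe generation.
RATES_GT_S = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0, 5: 32.0}

def lane_bandwidth_gb_s(gen: int) -> float:
    """Effective one-direction bandwidth of a single lane, in GB/s."""
    efficiency = 8 / 10 if gen <= 2 else 128 / 130  # line-coding overhead
    return RATES_GT_S[gen] * efficiency / 8         # 8 bits per byte

for gen, rate in RATES_GT_S.items():
    print(f"Gen {gen}: {rate:>4} GT/s, "
          f"x16 link ~ {16 * lane_bandwidth_gb_s(gen):.1f} GB/s")
```

A Gen 5 ×16 link thus carries roughly 63 GB/s each way before protocol overhead.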


PCIe Specification Timeline


PCIe lanes and link speeds

Specific challenges in PCIe compliance testing

PCI-SIG is the developer of non-proprietary PCI technology standards and related specifications, and PCIe is now the de facto standard for servers. The PCI-SIG defines the PCI specifications to support required I/O functions while remaining backward compatible with previous specifications. To drive industry-wide adoption of PCI technology, the PCI-SIG supports both interoperability and compliance testing, defining the tests that must be performed and passed to achieve compliance.

PCI-SIG lets members run interoperability tests against other members’ products and test suites; participating products either pass or fail. To pass formal compliance testing, a product must pass at least 80% of interoperability tests and all of the standard compliance tests.

PCIe 5.0 brings its own challenges. PCIe 4.0, with its 16GT/s maximum data rate, was itself a speed-boosting revision of the previous generation and proved harder to implement than earlier standards. With PCIe 5.0, both computer PCIe lanes and motherboards face significant challenges in handling 32GT/s data rates: on top of the problems already seen at lower rates, PCIe 5.0 devices can expect serious signal integrity challenges. Tektronix has PCI-SIG-approved test suites for all data rates (Tx, Rx and PLL bandwidth).



Tektronix PCIe Gen 5 Tx Compliance Test Solutions

Tektronix is a major contributor to the PCI-SIG, has contributed significantly to the PCIe 4.0 and 5.0 physical-layer test specifications, and has run extensive pathfinding experiments to define PCIe 6.0 Tx/Rx measurement methods. Tektronix also played a key role in compliance and interoperability testing during PCIe standards development and implementation.

For PCIe 5.0 transmitter testing, proper test equipment and automation software are essential

When developing a PCIe Gen 5 transmitter device, whether at the base (chip) level or the CEM (system and add-in card) level, chip-level verification (usually performed by PHY IP companies) and pre-compliance testing are required before devices are submitted to the PCI-SIG for formal compliance testing. Obtaining the right test equipment and associated automation software is therefore critical.

PCIe compliance testing includes:

Electrical Test – Evaluates platform and add-in card transmitter (Tx) and receiver (Rx) characteristics

Configuration Testing – Evaluating Configuration Space in PCIe Devices

Link Protocol Test – Evaluate the device’s link-level protocol characteristics

Transaction Protocol Test – Evaluate the device’s transaction-level protocol features

Platform BIOS Test – Evaluates the ability of the BIOS to identify and configure PCIe devices

In terms of electrical testing, it is divided into two sets of measurements, one at the basic level and one at the CEM level. These tests are further divided into standard tests and reference tests:



PCIe Basic and CEM Compliance Measurements




Both types of measurements require a high-bandwidth real-time oscilloscope capable of capturing the data waveforms. Post-processing is then used to make the voltage and timing measurements required by the base and CEM specifications. Uncorrelated jitter examines the jitter inherent in the system after removing pattern- and channel-induced intersymbol interference (ISI). In addition to jitter, the oscilloscope makes eye-height and eye-width measurements. The base specification defines a number of “compliance test patterns”; a waveform record containing multiple repetitions of the entire compliance test pattern is recommended for constructing a representative eye diagram.

For base Tx testing of a device, the specification states that measurements are made directly at the transmitter pins. If direct access is not possible, the test points should be as close as possible to the device pins. If the S-parameters are well characterized, any intervening channel loss can be de-embedded, either by physically replicating the channel or by simulation. Starting with the 4.0 specification, another de-embedding technique is described: applying CTLE (continuous-time linear equalization) to uncorrelated jitter measurements during waveform post-processing, which effectively removes ISI up to the pin.



Tx equalizer presets

Any PCIe 5.0 product submitted for PCI-SIG certification must pass compliance testing using the specified Tx equalizer preset settings, across speeds from 2.5GT/s up to 32GT/s. These presets equalize the intersymbol interference caused by frequency-dependent attenuation within the bit stream, improving signal integrity. Each preset is a specific combination of preshoot (pre-cursor) and de-emphasis (post-cursor).
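Preshoot and de-emphasis are commonly modeled with a 3-tap FIR filter at the transmitter. The sketch below uses the usual level definitions (Vb the steady-state level, Va the first unit interval after a transition, Vc the last one before a transition); the coefficients are purely illustrative, not values taken from the specification:

```python
import math

# 3-tap Tx FIR equalizer: out(n) = c_pre*b(n+1) + c_main*b(n) + c_post*b(n-1),
# with c_pre, c_post <= 0 and |c_pre| + c_main + |c_post| = 1.
def tx_eq_db(c_pre: float, c_main: float, c_post: float):
    vb = c_main + c_pre + c_post   # steady state (long run of identical bits)
    va = c_main + c_pre - c_post   # first UI after a transition
    vc = c_main - c_pre + c_post   # last UI before a transition
    de_emphasis = 20 * math.log10(vb / va)
    preshoot = 20 * math.log10(vc / vb)
    return de_emphasis, preshoot

de_emph, pre = tx_eq_db(-0.1, 0.7, -0.2)   # illustrative coefficients
print(f"de-emphasis {de_emph:.1f} dB, preshoot {pre:.1f} dB")
# -6.0 dB de-emphasis with 3.5 dB preshoot
```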

There are various implementation-specific ways to make the DUT transmitter step through the data rates and TxEQ presets. The base specification, however, defines a common method in which a 100MHz clock burst is delivered to lane 0 of the receiver; this can be automated with an arbitrary function generator (AFG).

PCIe links running at up to 32GT/s also pose new verification challenges for the reference clock (Refclk). The base specification had scaled the jitter limit in proportion to the data rate, but Gen 5 lowered the limit disproportionately, to 150fs. This high-frequency jitter measurement requires correctly applying the common-clock transfer function and accounting for worst-case transport delays. The latest version of the specification also promotes these measurements from a base-specification (chip-level) requirement to a CEM-specification (card-level) requirement that must be met in compliance testing.



CEM plug-in PCIe 5.0 conformance test and automatic preset switching

Tektronix PCIe solutions for more confidence in compliance testing

Oscilloscope bandwidth and sample-rate requirements: for base Tx testing, each PCIe 5.0 lane has a fundamental frequency of 16GHz (because two bits are transferred per clock cycle), and the third harmonic reaches 48GHz. Since there is little useful signal information above the third harmonic, PCIe 5.0 base Tx testing requires a real-time oscilloscope with just 50GHz bandwidth. For CEM Tx testing, measurements are made near the end of a worst-case channel, which attenuates high-frequency content, so 33GHz of bandwidth suffices. To support waveform post-processing (SigTest), at least 4 points per unit interval are needed; since CEM allows up to 2x sin(x)/x interpolation, the minimum sample rate is 128GS/s.
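The bandwidth and sample-rate figures in this paragraph follow from 32GT/s NRZ signaling; a quick check:

```python
# PCIe 5.0 NRZ: 32 Gb/s per lane, one bit per symbol.
DATA_RATE = 32e9

nyquist = DATA_RATE / 2           # fundamental of a 0101... pattern: 16 GHz
third_harmonic = 3 * nyquist      # 48 GHz
ui_seconds = 1 / DATA_RATE        # unit interval: 31.25 ps
min_sample_rate = 4 / ui_seconds  # >= 4 points per UI: 128 GS/s

print(f"fundamental      {nyquist / 1e9:.0f} GHz")
print(f"3rd harmonic     {third_harmonic / 1e9:.0f} GHz")
print(f"unit interval    {ui_seconds * 1e12:.2f} ps")
print(f"min sample rate  {min_sample_rate / 1e9:.0f} GS/s")
```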

Automatic conformance testing. In conformance testing, performing analysis manually is time-consuming and error-prone. To save time, it is best to use automated software, which not only reduces the workload, but also speeds up compliance testing. For electrical verification, PCI-SIG offers SigTest offline analysis software, which performs analysis using data acquired by an oscilloscope. The automation software also controls the device under test (DUT), using an arbitrary function generator as the pattern source, allowing the DUT to automatically pass through the various speeds, de-emphasis, and presets required for compliance testing.

A complete round of compliance testing requires acquiring multiple waveforms per lane at different DUT settings, and this set of waveforms multiplies by the number of lanes (up to 16) to be analyzed. The software’s ability to manage and store data for analysis and future reference is therefore an important metric for any compliance test solution. The automation software can also adjust the oscilloscope’s horizontal and vertical settings and acquisitions; beyond configuration and analysis, it can manage the many acquired waveforms.

The automation software can select the data rate, voltage swing, presets, and tests to perform. It can also provide options to embed package parametric models, de-embed cables, test fixtures, or other elements required to reach the target test points specified by the specification. Analysis results from the software can often be compiled into a report in PDF or HTML format, which can include pass/fail test summaries, eye diagrams, setup configurations, and user notes.

Using the Tektronix DPO70000SX Series oscilloscope and AFG31252 arbitrary function generator, the PCI Express Gen 1/2/3/4/5 solution automates transmitter verification and compliance testing at both the base (chip) and CEM (system and add-in card) levels.

TekExpress PCIe 5.0 Tx automation software features:

Auto-step the DUT through different speeds, patterns and Tx EQ presets

Verify that the signal is correct at the transmitter before taking measurements

Perform channel and packet embedding and de-embedding

Support SigTest and SigTest Phoenix software and template files

100 MHz reference clock jitter and signal integrity measurements using the Silicon Labs “PCIe Clock Jitter Tool” and Tektronix DPOJET software

Historically, when a new generation of PCIe devices entered compliance testing, a significant portion of devices failed PHY and link-training compliance at their first interoperability workshop. Before a PCI-SIG workshop, it is therefore critical to have a complete oscilloscope, AFG, BERT (for Rx testing), and automation software solution in place. Tektronix PCIe test and debug Tx, Refclk, and Rx solutions guide you through compliance testing and debugging prior to interoperability testing, helping ensure your designs meet PCI-SIG PCIe standards with confidence.


High voltage/high power wafer testing is done!

ERS Electronic, headquartered in Munich, Germany, has focused on temperature test solutions for more than 50 years. The company has earned a strong reputation in the industry, particularly for chuck systems that use air as the coolant and change temperature quickly and accurately, supporting analytical, parametric, and production tests over a wide temperature range of -65°C to 550°C. Today, the AC3, AirCool PRIME, and AirCool series chucks developed by ERS are used in large-scale wafer probe test stations throughout the semiconductor industry.

The high-voltage/current market is one of the fastest-growing areas in semiconductor applications today. Driven by electrification and renewable energy, a new generation of power electronic devices is pushing the development of MOSFETs and IGBTs and driving the growing adoption of new substrates such as silicon carbide (SiC) and gallium nitride (GaN).

Gallium nitride (GaN) is the basis of one class of high-electron-mobility transistors (HEMTs). Compared with silicon-based MOSFET devices, it offers significant advantages: a large band gap, extremely short switching times, higher power density and breakdown voltage, and better thermal conductivity. Today it is widely used in power electronics such as fast chargers for mobile phones, military radar equipment, high-speed rail transit, and 5G networks.

Like GaN, SiC withstands high voltages and temperatures, and owing to its voltage-handling characteristics it performs better than GaN at 600 V and above. SiC was the first wide-bandgap semiconductor to be successfully commercialized; since its adoption in Tesla vehicles in 2018, hybrid and electric vehicles and charging stations have become the main driving forces behind the rapid growth of the SiC power semiconductor market.

As SiC/GaN device technology matures and costs continue to fall, SiC/GaN devices are expected to accelerate their market penetration. Yole forecasts that the SiC and GaN power electronics markets will grow to US$1.4 billion and US$370 million, respectively, in 2023, with penetration rates of 3.75% and 1%. With strong demand from 5G macro base station construction and national defense, combined with the falling cost of GaN radio-frequency devices, demand is expected to rise rapidly: according to Yole, GaN RF device demand will reach 194.3 million units in 2023, a CAGR of 85.8% from 2019 to 2023.

Challenges to the wafer probe industry

Against a background of rapid technological development, power electronics has long been regarded as a relatively conservative field. The emergence of SiC and GaN not only opens a new chapter in the power market but also brings new challenges to the wafer testing industry. How to maintain temperature stability and accuracy to protect yield while meeting high-power test requirements is a problem every design engineer must consider.

One of the most important pieces of equipment in semiconductor testing is the probe station. During testing, the wafer is transported onto a temperature-controlled chuck so that the dies on the wafer contact the probes in sequence and are tested one by one. After testing, the probe station records the dies whose parameters do not meet requirements so they can be rejected before entering subsequent process steps. In this process, the temperature chuck, in close contact with the wafer, plays a decisive role in the test environment.

Compared with a typical test environment, besides ensuring a wide test temperature range and uniform, stable chuck temperature, the special high-voltage/current environment adds new challenges to chuck design. For example, in a high-current test, a large contact resistance accelerates wafer heating; if the heat cannot be dissipated in time, the wafer risks damage. When designing a temperature chuck, the first consideration is therefore how to minimize contact resistance to ensure accurate measurement of RDS(on). Second, low leakage must be ensured in a high-voltage environment to avoid breakdown. In addition, the chuck needs the flexibility to handle special wafer types, such as thin wafers and Taiko wafers.
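To see why contact resistance dominates the thermal picture at high current, note that dissipation at the wafer-chuck interface scales as I² × Rc. A rough estimate is sketched below; the Rc values are illustrative assumptions, while the 600 A figure matches the maximum chuck current cited in this article.

```python
# Back-of-the-envelope estimate of interface heating at high test current:
# power dissipated in the contact resistance is P = I^2 * Rc.
# The Rc values swept here are illustrative, not ERS specifications.
def contact_dissipation_w(current_a, rc_ohm):
    """Power (W) dissipated in a contact resistance rc_ohm at current_a amps."""
    return current_a ** 2 * rc_ohm

for rc_mohm in (0.05, 0.1, 0.5):
    p = contact_dissipation_w(600, rc_mohm * 1e-3)
    print(f"Rc = {rc_mohm} mOhm -> {p:.0f} W at 600 A")
```

Even a fraction of a milliohm produces tens of watts at 600 A, which is why minimizing and stabilizing Rc is a first-order design goal for the chuck surface.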

ERS Solutions – High Voltage/Current Temperature Chuck

ERS electronic, with more than 15 years of accumulated experience in high-voltage and ultra-low-noise wafer probe testing, has designed a chuck for high-voltage/current test environments that guarantees ultra-low leakage, avoids breakdown, and covers a wide temperature range (-55°C to +300°C). Its arrival solves many problems in wafer testing under high voltage/current.

Guaranteed ultra-low leakage at high voltages

Through repeated experiments on hundreds of insulating materials at different temperatures, and taking production cost into account, ERS electronic engineers selected the most suitable insulating material to minimize contact resistance (Rc) and ensure ultra-low leakage current in high-voltage environments. The chuck currently supports a maximum current of 600 A and voltages from 1.5 kV to 10 kV. The specific leakage test parameters are as follows:


*Table 1: ERS electronic high voltage/current chuck leakage test parameters

Among them, the 3 kV Triaxial and 3 kV ULN versions reach 3 kV through a triaxial connection at test temperatures up to 300°C. For measuring leakage current in high-voltage tests, commonly used instruments include the Keysight and Keithley families of leakage-current meters.


*Figure 2-4: Keysight, Keithley’s series of leakage current detection instruments

10 kV Coaxial (compatible with 3 kV ULN): at a 10 kV test voltage, the coaxial connection can block up to 10 kV. In addition, the top of the chuck is equipped with an additional direct-connect cable to support high-voltage/current biasing for wafer testing, while the rest is protected by grounding.

Minimize contact resistance for accurate RDS(on) measurement

Minimizing contact resistance means minimizing the resistance between the wafer and the chuck. Broadly, contact resistance is the resistance between conductors; actual contact resistance consists of constriction resistance, film resistance, and conductor resistance. Many factors affect it: the contact material, the force perpendicular to the contact surface, the condition of the contact surface, and the magnitude of the applied voltage/current.

Taking the above factors into consideration, and drawing on years of experience in probe station sealing and integration, ERS electronic's chuck engineers strictly control the internal and external details of the high-voltage/current chuck, from electrical performance to appearance and structural design. In production, ERS electronic's advanced coating technology ensures excellent hardness, roughness, and flatness of the chuck surface; it reduces contact resistance while ensuring its consistency across the entire chuck surface, ultimately enabling accurate measurement of RDS(on).

Advanced coating process: guarantees the durability of the chuck


*Figure 5: High Voltage/Current Chuck from ERS electronic

Thanks to the advanced coating process, the surface of ERS electronic's high-voltage/current chuck will not peel, deform, oxidize, or blacken after repeated use.

Unique Replaceable Chuck Top Plate Service:

Ideal Thin Wafer/Taiko Wafer Solution


*Figure 6: Taiko Wafer

With the continuous expansion of chip applications, chip designs are becoming more diverse and customized, and the corresponding test solutions for different test and wafer types vary widely. Through long-term contact with customers, ERS electronic found that some testing needs can be met simply by replacing the top plate of the chuck. For example, to test Taiko wafers it is not necessary to repurchase the chuck; replacing the top plate with one suited to Taiko wafers meets the user's test needs while greatly reducing cost. The on-site chuck top plate replacement service provided by ERS electronic is designed to cope with the continuous upgrading of today's testing needs without affecting production efficiency. The service has been widely praised since launch and has become a unique solution in the current wafer packaging and testing industry.

Seven reasons to choose ERS high voltage/current chuck

With a long-standing “customer-centric” business philosophy and strict control of product quality, ERS electronic's high-voltage/current chuck stands out from its many competitors not only for its range of selectable test voltages/currents and triaxial/coaxial connection options, but above all for its combined high-voltage and high-temperature performance: it ensures high voltage, ultra-low leakage, and breakdown resistance while still meeting customers' high-temperature test requirements (up to 300°C). This has become one of the biggest technical highlights of the ERS electronic high-voltage/current chuck.

In addition, ERS electronic's unique chuck customization service makes it possible to handle thin wafers, Taiko wafers, and other special wafer types. Customers can not only choose the chuck type according to their needs, but also opt for the “chuck top plate replacement only” service to cope with the constantly evolving testing needs of the semiconductor market.

As the third-party technical partner of ERS in Greater China, Shanghai Jingyi Electronic Technology's CEO Mr. Peng believes: “The high-voltage chuck launched by ERS electronic fills a gap in the field of high-voltage/current wafer testing. Combining high-voltage testing, first-class gold-plated surfaces, and replaceable chuck tops, it gives customers more flexibility, while also letting us see endless possibilities for future technology development in the wafer test industry.”


*Figure 7: Seven reasons to choose ERS High Voltage/Current Chuck

This high-voltage/current chuck belongs to the customized chuck service launched by ERS. The same product family also includes a strong-vacuum chuck for warped wafers, a high-temperature-uniformity chuck suitable for temperature and humidity sensors, magnetic-free chucks, and ultra-low-noise chucks.


India’s largest carmaker Maruti Suzuki will continue to close factories for another week

On May 11, it was reported that, due to the continued deterioration of the epidemic in India, India's largest automaker Maruti Suzuki (whose parent company is Japan's Suzuki) will keep its factories closed for another week.

Maruti Suzuki, headquartered in New Delhi, India, said in an exchange filing on Saturday that it would extend the closure of its factories from May 9 to May 16 “taking into account the current COVID-19 situation.” Maruti Suzuki earlier brought forward to May 9 a maintenance shutdown originally planned for June, mainly to help deliver oxygen to hospitals, the report said.

This was corroborated by Japanese media reports on April 30, which said that because the severe epidemic in India left medical oxygen unable to meet rescue needs, Indian hospitals had begun using industrial oxygen for emergencies, and Maruti Suzuki had closed three factories in India because of the resulting industrial oxygen shortage.

Affected by the shutdown, Maruti Suzuki chairman Bhargava previously revealed that some of the company's sales outlets in India have closed, and the shutdown may cut the company's output in half.

It is understood that on Friday (the 7th), Japanese automaker Honda Motor also announced the suspension of its auto manufacturing operations in India, expanding the scope of shutdowns in the country.

The Japanese automaker has brought forward planned maintenance at its plant in India’s northwestern state of Rajasthan, which will remain closed until May 18.

Previously, Honda had suspended its Indian motorcycle manufacturing operations from May 1 to 15 in an effort to curb the spread of the new coronavirus.

It is reported that since the resurgence of the COVID-19 epidemic in India in late March, the number of new cases per day has risen sharply, recently remaining above 300,000 for several consecutive days, nearly half of the world's total. According to the latest data on May 10, India is still reporting more than 370,000 new COVID-19 cases per day, with no clear sign of decline.

In order to support India's fight against the epidemic, the Global Times quoted a May 9 Twitter post by Chinese Ambassador to India Sun Weidong saying that the first batch of anti-epidemic materials donated by the Red Cross Society of China, including 100 oxygen generators and 40 ventilators, had been shipped from Chengdu and arrived in India. According to Chinese customs data, China has supplied India with more than 5,000 ventilators, more than 21,000 oxygen generators, and large quantities of other anti-epidemic materials since April.


25kW SiC DC Fast Charge Design Guide (Part 4): Design Considerations and Simulation for the DC-DC Stage

【Introduction】In this new installment of the series “Development of a 25 kW Fast DC Charging Pile Based on Silicon Carbide” [1-3], we focus on the DC-DC dual-active-bridge phase-shift (DAB-PS) zero-voltage-switching (ZVS) converter, which was introduced and partially described in Part 2.

In this section, we describe some of the DC-DC stage design processes our engineering team follows. Specifically, we explain the key design considerations and trade-offs in developing such a converter, especially around defining the magnetic components, and discuss the power simulations and design decisions made. We will also discuss the concept of flux balancing in transformers and how it is addressed in the 25 kW fast DC charging pile.

1 Design DAB DC-DC stage

The DAB DC-DC converter consists of two full bridges implemented with four SiC MOSFET modules, a transformer, and a resonant inductor. The system uses phase-shift modulation and achieves ZVS at high loads while maximizing efficiency over a wide 200 V to 1000 V output voltage range. Figure 1 again shows the simplified schematic of this circuit stage previously introduced in Part 2.

The converter is designed to provide the highest efficiency when the output voltage is between about 650 V and 800 V. For charging stations for 400 V batteries, the design should be adjusted to provide peak efficiency around the 400 V level.

Table 1 summarizes the main design features of this converter.


Figure 1: A dual active bridge (DAB) DC-DC stage consists of two full bridges with an isolation transformer in between.

Table 1. Overview of required operating points for DC-DC converters.


DAB Magnetics Design Guide

A fundamental step in designing a DAB-PS converter is to select the key parameters of the transformer and resonant inductor. The transformer turns ratio (n1/n2) will significantly affect the efficiency of the converter over the entire operating range, so the development and optimization of a DAB-PS converter is highly dependent on the magnetics.

As will be discussed below, the simulations are used mainly to generate magnetic performance requirements that meet the needs of our application. Magnetic component suppliers then use this information to design and manufacture components that satisfy those requirements while minimizing losses and size.

Transformer turns ratio (n1/n2) and efficiency

When the secondary voltage (V_SEC) equals the primary voltage divided by the n1/n2 turns ratio (Equation 1), the DAB-PS converter reaches peak efficiency.

Equation 1: V_SEC = V_PRIM / (n1/n2)

Therefore, the transformer is tuned so that this peak-performance operating point is reached when V_SEC is at the target output voltage (approximately 650 V to 800 V for this project). The simulations below show how the turns ratio is a major determinant of converter efficiency (for a fixed switching frequency and switching technique), since it affects the transformer's primary (I_PRIM,RMS and I_PRIM,PEAK) and secondary (I_SEC,RMS and I_SEC,PEAK) currents. Simulation helps determine which turns configuration improves overall efficiency and achieves the 98% target.

To get the simulation up and running, some initial values for the transformer turns ratio are required. In this project, initial values were proposed based on experience from previous designs, market benchmarks, and the technical literature, with Equation 1 as a solid foundation.
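As a quick sanity check of Equation 1, the peak-efficiency secondary voltage for a few candidate turns ratios can be computed directly. The ratios below are illustrative starting values, assuming the 800 V DC link used throughout this design.

```python
# Peak-efficiency operating point implied by Equation 1:
# V_SEC,OPTIM = V_PRIM / (n1/n2).
# The candidate ratios are illustrative starting values for the sweep.
V_PRIM = 800.0  # V, DC-link voltage

for n1_over_n2 in (1.0, 1.2, 1.4):
    v_sec_opt = V_PRIM / n1_over_n2
    print(f"n1/n2 = {n1_over_n2}: V_SEC,OPTIM = {v_sec_opt:.1f} V")
```

Ratios slightly above unity place the optimum inside the 650 V to 800 V target window, which is why values in this neighborhood are reasonable starting points.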

Resonant inductance (L_RESONANT)

The resonant inductance value needs to be adjusted according to the leakage inductance of the transformer in the DAB-PS. Theoretically, in some designs, the inherent leakage inductance of the transformer can be used to achieve the necessary resonance to support ZVS. However, in high power applications like this project, this is not the case, so the value of the resonant inductance chosen needs to complement the leakage inductance of the transformer.

Equation 2 defines the relationship between the DAB-PS converter's output power, primary and secondary voltages, switching frequency, phase shift, and total resonant inductance (resonant inductor plus transformer leakage inductance). As is typical in power converters, the higher the f_s value, the less inductance is required.

Equation 2: P = [V_PRIM × V_SEC × (n1/n2) × γ(π − γ)] / [2π² × f_s × (L_RESONANT + L_LEAKAGE)]

where P is the power transferred by the DAB, V_PRIM is the primary voltage, V_SEC is the secondary voltage, γ is the phase shift, f_s is the switching frequency, and L_RESONANT + L_LEAKAGE is the resonant inductance plus the transformer leakage inductance. This formula is based on a simplified linearized model, but is useful for initial estimates.

By applying Equation 2 against the 25 kW DC charger specification, it can be determined that a combined L_RESONANT + L_LEAKAGE value of around 22 µH is a reasonable assumption. Table 2 shows that for the worst case (V_SEC = 200 V), the rated output power of 10 kW can be provided with some margin, since the ideal maximum power transfer is 11.57 kW from a resonance point of view.

Table 2. L_RESONANT + L_LEAKAGE required to meet the output power specification over the entire output voltage range.

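The margin check behind Table 2 can be sketched by evaluating Equation 2 at the ideal optimum phase shift (γ = π/2). The turns ratio below is an assumed illustrative value, so the numbers show the trend rather than reproduce the table exactly.

```python
import math

# Ideal maximum DAB power transfer from the linearized model (Equation 2),
# evaluated at the optimum phase shift gamma = pi/2.
# N_RATIO is an assumed illustrative turns ratio, not a fixed design value.
def dab_power_w(v_prim, v_sec, n_ratio, gamma, f_s, l_total):
    """Power transfer (W) per the simplified linearized DAB model."""
    return (v_prim * v_sec * n_ratio * gamma * (math.pi - gamma)
            / (2 * math.pi ** 2 * f_s * l_total))

F_S = 100e3       # Hz, switching frequency from this design
L_TOTAL = 22e-6   # H, resonant + leakage inductance
N_RATIO = 1.2     # assumed n1/n2

for v_sec in (200, 500, 800, 1000):
    p_max = dab_power_w(800, v_sec, N_RATIO, math.pi / 2, F_S, L_TOTAL)
    print(f"V_SEC = {v_sec} V -> ideal max power ~ {p_max / 1e3:.1f} kW")
```

The low end of the output range (V_SEC = 200 V) is clearly the constraining case, which is why Table 2 evaluates margin there.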

Magnetizing inductance (L_M)

Magnetizing inductance (L_M) plays an important role in optimizing transformer size and also affects overall efficiency. For a given primary voltage, a higher L_M translates into a lower magnetizing current (I_M), reducing the total magnetic flux through the core and the required effective cross-sectional area (A_e) (Equations 3, 4, and 5), which makes the transformer more compact.

Nevertheless, a higher L_M value implies more turns (n1), which in systems operating at high RMS currents (such as this 25 kW EV charger design) requires a larger conductor cross-section to keep conduction losses under control, which in turn increases the transformer size needed to accommodate the winding in the core's available winding area.

Clearly, the magnetizing inductance value is an element of transformer design and optimization, but not a fixed requirement for our converter. The approach our engineers took was to rely on the magnetics manufacturer to provide an optimized design that is as compact and efficient as possible while meeting the application requirements (primarily efficiency, size, and cost). Equations 3 to 5 nevertheless help us understand how magnetizing inductance affects the terms that drive transformer size and losses.

Equation 3: B = φ / A_e

where B is the magnetic flux density, φ is the magnetic flux, and A_e is the effective cross-sectional area (of the core).

Equation 4: B = (µ0 × N × I_M) / (l_a + l_e/µ_r)

where µ0 is the vacuum permeability, µ_r is the relative permeability, l_e is the magnetic path length, l_a is the core air-gap length, N is the number of primary turns, and I_M is the magnetizing current.

Equation 5: L_M = A_L × N²

where A_L is the inductance coefficient (inductance per squared turn) of the core.
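Equation 5 is easy to sanity-check numerically: for a fixed core (fixed A_L), L_M grows with the square of the turn count, which is why a higher L_M target forces more primary turns. The A_L value below is an illustrative assumption, not a figure from this design; with it, 12 turns happens to yield the 720 µH used in the simulation figures.

```python
# Illustration of Equation 5: L_M = A_L * N^2.
# A_L is an assumed core inductance factor, chosen purely for illustration.
A_L = 5e-6  # H per turn^2 (assumption)

def magnetizing_inductance_h(a_l, n_turns):
    """Magnetizing inductance (H) for a core factor a_l and n_turns turns."""
    return a_l * n_turns ** 2

for n in (8, 12, 16):
    lm = magnetizing_inductance_h(A_L, n)
    print(f"N = {n}: L_M = {lm * 1e6:.0f} uH")
```

Doubling the turns quadruples L_M, but as the text notes, every added turn also enlarges the winding window needed at these RMS currents.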

From a control and regulation point of view, it is also important to establish a minimum L_M value: the lower the value, the faster the control loop must run, and the acquisition and control hardware must support that speed.

To summarize, the most important factors defining an acceptable L_M range in this project include: maximum regulation speed, the influence on I_M peak current, the influence on the secondary-side current (which increases as L_M decreases), and the feasibility of a compact magnetic structure.

Switching frequency

The switching frequency of 100 kHz was chosen based on experience gained in previous designs, such as the 11 kW LLC converter [4]. This value is a trade-off between a relatively high switching frequency (which helps reduce magnetics size) and one that is too high (which results in excessive switching losses).

Phase-shift method and options

For simulation purposes, a single phase shift with a fixed 50% duty cycle is used between the complementary bridges. Other phase-shift methods (e.g., extended phase shift, dual phase shift, and triple phase shift) will be evaluated at the actual control implementation level as possible means to improve system performance.

Flux balance

Flux balancing techniques are designed to prevent transformer core saturation caused by so-called flux walking. This phenomenon, also known as the flux staircase effect, is caused by the accumulation of residual flux in the magnetic core over successive switching cycles when the net volt-second product applied to the transformer, which should be exactly zero over one switching cycle, is not balanced. When the product is non-zero, the applied voltage waveform is not pure AC but contains a DC bias component that builds up residual flux.

The imbalance behind the volt-second product can be very subtle and difficult to identify, arising for example from mismatched duty cycles or from the R_DS(on) of the individual half-bridges. In small and medium power systems, a DC blocking capacitor in series with the primary or secondary winding filters out the DC bias current. In a 25 kW charging pile design, however, the requirements on this capacitor can make the component bulky or impractical: the capacitance falls in the range of tens of microfarads, and the DC blocking voltage is around 1000 V.

The most challenging and restrictive requirement, however, is that I_PRIM,RMS and I_SEC,RMS are very high, expected to be somewhere between 45 A and 65 A. A suitable solution would require roughly 15 to 20 ceramic capacitors in parallel, which is impractical for several reasons, including size, cost, layout complexity, and system reliability. An alternative is to use electrolytic or metallized polypropylene capacitors, similar to those in the DC link of the PFC stage, but these take up considerable PCB space and increase BOM cost.
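The parallel-capacitor count quoted above follows from dividing the winding RMS current by a per-capacitor ripple-current rating. The 4 A rating below is an assumption for illustration, not a datasheet value.

```python
import math

# Rough sizing of the rejected DC-blocking-capacitor option: parallel
# ceramics must share the winding RMS current. The per-capacitor ripple
# rating is an illustrative assumption.
def n_parallel_caps(i_rms_a, rating_per_cap_a=4.0):
    """Minimum number of parallel capacitors to carry i_rms_a amps RMS."""
    return math.ceil(i_rms_a / rating_per_cap_a)

for i_rms in (45, 65):
    print(f"{i_rms} A RMS -> {n_parallel_caps(i_rms)} capacitors in parallel")
```

With this assumed rating the counts land in the same range as the 15-to-20 figure above, illustrating why the capacitor bank was judged impractical.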

To achieve a practical, compact, and competitive design, one possible solution is to prevent the flux staircase effect altogether. This can be accomplished in a number of ways, and there is a wealth of literature on the subject. The solution implemented in this project is a flux balancing algorithm that controls and modifies the voltage waveform (duty cycle) applied to the primary and secondary windings of the transformer to keep them balanced, so that the average DC current is zero.

The flux-balance control loop takes measurements of the transformer's primary and secondary currents as inputs, which requires additional current sensing beyond the input and output currents used for normal converter control. On the other hand, flux balancing eliminates the need for blocking capacitors, reducing size and cost and increasing system efficiency. These factors, along with the engineering team's previous experience implementing the technique, are the main reasons this approach was chosen. Part 5 of this series will provide more details on implementing flux-balance control.
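A minimal sketch of the idea follows, assuming a toy first-order plant in which a fixed duty-cycle error produces a proportional DC current; this is an illustrative model, not the project's actual control code.

```python
# Sketch of flux-balance control: measure the DC (average) component of
# the winding current and trim the applied duty cycle with a slow
# integral term so the average current is driven to zero.
def flux_balance_step(duty_trim, i_avg_a, ki=1e-4):
    """One control iteration: integrate the DC-current error into a duty trim."""
    return duty_trim - ki * i_avg_a

# Toy plant (assumption): a net duty-cycle error produces a proportional
# DC current in the winding.
def plant_dc_current(duty_trim, duty_error=0.01, gain_a_per_duty=100.0):
    return (duty_error + duty_trim) * gain_a_per_duty

trim = 0.0
for _ in range(2000):
    i_dc = plant_dc_current(trim)
    trim = flux_balance_step(trim, i_dc)

print(f"residual DC current: {plant_dc_current(trim):.4f} A")
```

The integral trim converges to exactly cancel the plant's built-in duty error, driving the average current (and hence the residual flux build-up) toward zero, which is the essence of the approach described above.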

2 Preparing for Simulation

In addition to discussing the development of the PFC stage, the third part of this series [3] provides a broader overview of why simulation is critical in power electronics design and of the main factors to consider before running a simulation, such as goals, models, and input parameters. Keeping these factors in mind aids successful project development and execution. The key information for the DAB-PS power simulation is presented below.


The primary goal is to verify the system's target efficiency and thereby help select transformer and resonant inductor parameters that maximize efficiency while meeting the rest of the system's requirements. Table 3 outlines the main goals.

Table 3. Summary of the main goals of the simulation.


Simulation model

The SPICE power simulation model developed by the ON Semiconductor engineering team for the DC-DC converter is shown in Figure 2. It is simpler than the power simulation model of the three-phase PFC stage presented in Part 3, which switches three half-bridges and must synchronize with the AC grid currents and voltages. In the DAB-PS converter, the power stage uses four half-bridge cells (the same blocks used in the PFC model).

As for the transformer and resonant inductance, the model contains the Lpri-to-Lsec coupling ratio (K = 1), Lm (magnetizing inductance), Ls (secondary inductance), Lr (resonant inductance), and equivalent series resistances (for the transformer and inductor windings). Note that the core losses of the transformer and inductor are not included; at this stage, a feasible starting point is to estimate that core loss is comparable to conduction loss.

Other components in the model include C_Pri and voltage/current sensors (in SPICE format) to measure the primary and secondary currents for flux balancing. C_Pri represents the snubber capacitor used at the DAB-PS input in parallel with the DC link. Such capacitors should be placed close to the MOSFETs to suppress voltage spikes on the switching node.

In a final product implementation, these capacitors may not be needed, or they may be much smaller in size, since the DC link portion of the PFC already provides filtering. However, for the purpose of this project, the DAB-PS should function properly as a stand-alone system for independent evaluation, so this capacitor is essential. As mentioned earlier, this control model employs a custom digital PWM model operating with a 50% single phase shift.

25kW SiC DC Fast Charge Design Guide (Part 4): Design Considerations and Simulation for the DC-DC Stage

Figure 2: Simulation model of the DAB converter.

Input parameters

Tables 4 and 5 summarize the simulation input parameters. Alternative values of n1/n2, L_M, and V_SEC will be evaluated to determine the optimal configuration. The remaining parameters were held constant across all simulations and were chosen as starting points based on our engineering team's expertise in passive component design, benchmarks of existing solutions, and the literature on the topic.

Table 4. Simulation input parameters. Highlighted in blue are the parameters that will change in the simulation.


Table 5. Configuration for SPICE simulation.


3 Simulation results

This section discusses the results obtained from the simulations. The tests can be divided into two main evaluations: the first revolves around the transformer turns ratio n1/n2 and efficiency, and the second around L_M. The results help achieve the goals presented earlier and answer key design questions. Note that all simulations use the values given in the Input Parameters section unless otherwise stated.

Transformer turns ratio (n1/n2) evaluation

Efficiency and Loss

The first and most representative results of the simulation are shown in Figures 3 and 4. Peak efficiency occurs at secondary operating voltages of 800 V, 666.7 V, and 571 V, respectively, depending on the n1/n2 configuration. Notably, over the 340 V to 830 V V_SEC operating range, a peak efficiency of 98% is achieved for all evaluated turns ratios (excluding inductor and transformer core losses).

However, as V_SEC moves towards the low end (200 V) and high end (1000 V), the differences between the n1/n2 ratios become more pronounced. The farther the actual V_SEC deviates from the optimum point, the worse the efficiency (left and right ends of the graph in Figure 3). Interestingly, while increasing n1/n2 significantly increases total power loss at V_SEC > V_SEC,OPTIM (right end of Figure 4), reducing n1/n2 does not have an equally pronounced effect on power loss at V_SEC < V_SEC,OPTIM (left end of Figure 4).

Although increasing the n1/n2 ratio improves efficiency for VSEC < VSEC,OPTIM (left end of Figure 3), the difference is not as significant as for VSEC > VSEC,OPTIM (right end of Figure 3). Therefore, reducing the n1/n2 ratio may improve overall performance, although this is not always the case; it depends on the minimum efficiency that must be guaranteed across the full VSEC operating range.
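The three peak-efficiency voltages quoted above follow directly from the voltage-matched operating point of a DAB, where the DC-link voltage reflected through the transformer equals the secondary voltage (VSEC,OPTIM = VDC-LINK · n2/n1). The formula itself is a standard DAB result rather than something taken from this guide; a quick sanity check:

```python
# Voltage-matched ("unity gain") operating point of a DAB:
# V_SEC,OPTIM = V_DC-LINK * n2 / n1. At this point the circulating
# current is minimal, which is why efficiency peaks there in Figure 3.
V_DC_LINK = 800.0  # V

for n1_n2 in (1.0, 1.2, 1.4):
    v_sec_optim = V_DC_LINK / n1_n2
    print(f"n1/n2 = {n1_n2:.1f} -> V_SEC,OPTIM = {v_sec_optim:.1f} V")
# -> 800.0 V, 666.7 V, 571.4 V, matching the peak-efficiency points above
```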


Figure 3: DAB efficiency as a function of VSEC for different transformer n1/n2 ratios. Core losses of the resonant inductor and transformer are not included. VDC-LINK = 800 V, LM = 720 µH.


Figure 4: DAB power loss as a function of VSEC for different transformer n1/n2 ratios. Core losses of the resonant inductor and transformer are not included. VDC-LINK = 800 V, LM = 720 µH.

Primary and Secondary Current

A low n1/n2 ratio also brings disadvantages, which is why a sweet spot usually has to be found. The most prominent drawback is that at low VSEC, IPRIM,PEAK and IPRIM,RMS are higher (Figure 5), which means the SiC MOSFETs must conduct higher currents.

At the same time, increasing n1/n2 results in lower ISEC,PEAK and ISEC,RMS at high VSEC (Figure 6). To avoid magnetic saturation, extra care must be taken in the transformer design to handle the relatively high peak currents on the primary side.
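The current tradeoff described above can be seen with a first-order, ideal-transformer estimate. This is an illustrative sketch that ignores ripple, magnetizing, and circulating currents, so it will not reproduce the simulated peak values exactly: the average secondary current at full power is roughly P / VSEC, and it is reflected into the primary winding by n2/n1.

```python
# First-order estimate (ideal transformer, conduction only): average
# secondary current is P / V_SEC; the primary sees it scaled by n2/n1.
P = 25e3  # W, rated power of the 25 kW charger

def winding_currents(v_sec, n1_n2):
    i_sec = P / v_sec        # average secondary-side current
    i_prim = i_sec / n1_n2   # reflected into the primary winding
    return i_prim, i_sec

# At the low end of the output range the secondary current is highest,
# and a low n1/n2 reflects more of it into the primary:
for ratio in (1.0, 1.2, 1.4):
    i_prim, i_sec = winding_currents(200.0, ratio)
    print(f"n1/n2={ratio:.1f}: I_prim ~ {i_prim:.0f} A, I_sec ~ {i_sec:.0f} A")
```

This reproduces the trend of Figure 5 (higher primary current at low VSEC for low n1/n2), not its exact values.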


Figure 5: IPRIM,RMS and IPRIM,PEAK as a function of transformer turns ratio (VDC-LINK = 800 V, LM = 720 µH).


Figure 6: ISEC,RMS and ISEC,PEAK as a function of secondary-side voltage and transformer turns ratio (VDC-LINK = 800 V, LM = 720 µH).

Primary Voltage, Secondary Voltage and Inductor Voltage

Figure 7 depicts the voltage across the transformer windings. These values need to be passed to the transformer manufacturer so it can calculate the required isolation.


Figure 7: VPRIM,PEAK and VSEC,PEAK across the transformer terminals as a function of secondary-side voltage and transformer turns ratio (VDC-LINK = 800 V, LM = 720 µH).

Similarly, Figure 8 shows the voltage across the resonant inductor. The voltage evolution follows a similar pattern, with the voltage across the two terminals increasing as VSEC increases. In all cases, the voltage remains below 1000 V, which is not a problem for common inductors.


Figure 8: Resonant inductor voltage across terminals as a function of secondary-side voltage and transformer turns ratio (VDC-LINK = 800 V, LM = 720 µH).

Magnetizing Current

For a given LM, the transformer magnetizing current shows no significant variation with n1/n2 across the entire VSEC operating voltage range (Figure 9).


Figure 9: IM as a function of secondary-side voltage and transformer turns ratio (VDC-LINK = 800 V, LM = 720 µH).

Magnetizing Inductance (LM) Evaluation

This section describes the effect of different magnetizing inductance values on system performance. Note that three simulation series were run with different magnetizing inductances (720 µH, 300 µH, and 150 µH). In this analysis, the transformer's n1/n2 is fixed at 1.2:1.

In the previous section, a relatively high fixed LM value (720 µH) was used while evaluating the effect of the turns ratio (n1/n2) on efficiency and other variables. As shown in Figure 9, this choice keeps the maximum IM,PEAK below 5 A, which is in line with a common rule of thumb in power transformer design: design the transformer so that IM,PEAK is approximately 5% to 10% of the maximum IPRIM,PEAK (82 Apeak in Figure 5).
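The rule of thumb can be checked with the two numbers already quoted from the figures:

```python
# Rule-of-thumb check: I_M,PEAK should be roughly 5%..10% of the maximum
# primary peak current. Values taken from Figures 5 and 9.
i_prim_peak_max = 82.0  # Apeak, maximum primary current from Figure 5
i_m_peak = 5.0          # A, upper bound of I_M,PEAK in Figure 9 (LM = 720 µH)

low, high = 0.05 * i_prim_peak_max, 0.10 * i_prim_peak_max
print(f"target I_M,PEAK window: {low:.1f} A .. {high:.1f} A")  # 4.1 .. 8.2 A
assert low <= i_m_peak <= high  # 5 A falls inside the window
```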

Figure 10 shows that the actual effect of LM on efficiency is very small, with only a 0.4% difference at very high VSEC. As mentioned in the “DAB Magnetics Design Guidelines” section, the actual value of the magnetizing inductance is not a critical requirement of the project; it is chosen by the magnetics supplier to make the transformer as compact as possible while meeting the remaining requirements.


Figure 10: DAB efficiency and power loss as a function of secondary-side voltage and magnetizing inductance (VDC-LINK = 800 V, n1/n2 = 1.2:1). Core losses of the resonant inductor and transformer are not included.

Another insight from the simulation is that IPRIM,PEAK and IPRIM,RMS remain almost unchanged across the different LM values (Figure 11). This is not the case on the secondary side (Figure 12): across the LM values, ISEC,PEAK and ISEC,RMS jump from 91 Apeak to 109.6 Apeak and from 49 Arms to 58.7 Arms, respectively.

From this observation and further analysis, we can understand how magnetizing inductance affects transformer size. The rise in ISEC,RMS (58.7 Arms at LM = 150 µH vs. 49 Arms at LM = 720 µH) increases the secondary winding conduction loss by a factor of (58.7/49)² ≈ 1.435, which would require increasing the wire cross-sectional area by the same factor to keep the winding losses unchanged. However, n2 (the number of secondary turns) at LM = 150 µH is reduced to 1/2.19, so keeping the same winding cross-sectional area actually reduces the copper losses to 1/1.52. On top of that, n1 (the number of primary turns) is also reduced, further reducing copper losses.
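The arithmetic behind these factors can be reproduced from the quoted currents alone; the only added assumption is the standard relation LM ∝ N², which yields the 1/2.19 turns reduction:

```python
# Winding conduction loss scales as I_RMS^2 * R, and R scales with the
# number of turns (wire length) at a fixed wire cross-section.
i_720, i_150 = 49.0, 58.7          # Arms, from Figure 12
turns_factor = (720 / 150) ** 0.5  # LM ∝ N², so N shrinks by √(720/150) ≈ 2.19

loss_vs_current = (i_150 / i_720) ** 2     # ≈ 1.435, from the current rise alone
net_loss = loss_vs_current / turns_factor  # fewer turns → lower R

print(f"loss factor from current alone ~ {loss_vs_current:.3f}")  # ~1.435
print(f"turns reduced to 1/{turns_factor:.2f}")                   # 1/2.19
print(f"net copper loss factor ~ {net_loss:.3f}")  # ~0.655, i.e. the text's ~1/1.52
```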

Still, this improvement may come at the cost of a larger core. As LM decreases, IM,PEAK increases by a factor of 4.8, from 4.1 A (LM = 720 µH) to 19.9 A (LM = 150 µH), as shown in Figure 13, while n1 (and n2) are only reduced to 1/2.19 (as described above). Applying Equation 3, as the product N · IM increases, the magnetic flux density (B) increases, which forces an increase in core size (the core cross-sectional area Ae) in order to keep the magnetic flux density (B) at a reasonable level.
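Since Equation 3 itself is not reproduced in this part, the following sketch only assumes what the text states, namely that B scales with the magnetizing ampere-turns N · IM at a fixed core cross-section:

```python
# How much must the core grow? Using only numbers quoted above:
i_m_720, i_m_150 = 4.1, 19.9       # A, I_M,PEAK from Figure 13
turns_factor = (720 / 150) ** 0.5  # N reduced to 1/2.19 when LM drops

# I_M grows ~4.8x while N only shrinks ~2.19x, so the ampere-turn product
# N * I_M (and hence B, at fixed core area) grows by their quotient:
ampere_turn_growth = (i_m_150 / i_m_720) / turns_factor
print(f"N*I_M grows by ~ {ampere_turn_growth:.1f}x")  # ~2.2x
# Keeping B at the same level therefore requires roughly a 2.2x larger
# core cross-sectional area Ae.
```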

This example illustrates how these elements are related and why tradeoffs are often made. Finding the sweet spot between transformer size and LM, however, usually depends on the skill and capability of the magnetics designer (as mentioned earlier).


Figure 11: Variation of DAB IPRIM,PEAK and IPRIM,RMS as a function of secondary-side voltage and magnetizing inductance (VDC-LINK = 800 V, n1/n2 = 1.2:1).


Figure 12: Variation of DAB ISEC,PEAK and ISEC,RMS as a function of secondary-side voltage and magnetizing inductance (VDC-LINK = 800 V, n1/n2 = 1.2:1).


Figure 13: Variation of DAB IM,PEAK as a function of secondary-side voltage and magnetizing inductance (VDC-LINK = 800 V, n1/n2 = 1.2:1).

4 Conclusions and Design Tradeoffs

The simulations presented in the previous sections were used to verify the initial goals of the DAB converter and to guide design decisions, especially those involving the transformer and resonant inductor. Tables 6 and 7 show the final parameter values selected for the system. These values are passed on to the magnetics manufacturer so it can develop optimized magnetics.

The transformer turns ratio n1/n2 has been set to 1.2:1.0, as this configuration shows the best performance over the entire operating range: it exhibits a high peak efficiency (99.4%) at VSEC = 800 V and 99% at VSEC = 900 V, while showing only a small efficiency drop near the low end (200 V) and high end (1000 V) (Figure 3), performing better than the other turns ratios (1.4:1.0 and 1.0:1.0).

The requirements for LM are more flexible, with values ranging from approximately 150 µH to 300 µH. This value is a compromise between the many factors mentioned in the “DAB Magnetics Design Guidelines” section. A minimum LM of 150 µH should be ensured so that IM stays at 20 A or below, while the range up to 300 µH gives the magnetics manufacturer freedom to choose the LM value that yields the most compact and efficient overall transformer design.

A value of 10 µH was chosen as an estimate for the resonant inductance, based on the recommendations in the “DAB Magnetics Design Guidelines” section.

Last but not least, the equivalent series resistance (ESR) values of the transformer and inductor have been proposed as maximum reasonable estimates consistent with the other defined parameters. It goes without saying that the lower the resistance in the actual magnetic design, the better; this is an optimization where magnetic component suppliers can add value.

Table 6. Design parameters selected for the transformer. These are used to specify transformer requirements for transformer manufacturers.


Table 7. Design parameters selected for the resonant inductor. These are used to specify the inductor requirements for the magnetics manufacturer.


The next step in the development process is to share the requirements with magnetics manufacturers and receive design proposals for the magnetic components. Once samples of the magnetic components are obtained, their actual parameters can be measured and new simulations run with the refined parameters in the SPICE model. This second analysis, performed before the actual converter hardware is available, provides more accurate performance and loss results.

For example, core losses can be added to the simulation, since magnetics manufacturers usually provide actual values. While control will be discussed in the next article in the series, the actual measured magnetic parameters will also help enhance the control model and advance the development of the control algorithms and control loops before hardware is available. This speeds up the development process, as using high-level models may simplify debugging and tuning of the hardware.

Stay tuned for the next article in the series, Part 5, which will discuss control algorithms and implementation guidelines for control loops.
