Researchers Use an EUV Flashlight and Free-Space Diffraction, Plus a Clever Algorithm, to Inspect Chips
When you're playing with extreme ultraviolet light, lenses aren't going to cut it — but a TensorFlow-accelerated algorithm will.
Researchers from the Delft University of Technology and Utrecht University have come up with a way to identify defects in computer chips built using extreme ultraviolet (EUV) lithography — by creating an EUV camera feeding into a TensorFlow machine learning system.
"Immediately following lithography, the structures formed in the photoresist layer on the wafer surface must be measured to identify defects," the research team explains of part of the chip manufacturing process they aim to address. "This is crucial to improve the production yield since defective chips can still be reworked by stripping the photoresist layer at this stage. Otherwise, such defective chips will continue to be processed through many unnecessary steps, causing an enormous waste of resources."
Moore's Law, Intel co-founder Gordon Moore's observation that the number of transistors on a leading-edge semiconductor doubles roughly every two years, has become a must-chase goal for the industry. The only reason we're not using processors the size of football fields is that the growth in transistor count is matched by a corresponding shrinkage in transistor size. Modern chips use features so small, measured in mere nanometers, that patterning them requires short-wavelength extreme ultraviolet light, a process known as EUV lithography.
That poses a problem: to inspect the wafer with the same precision, you need to use the same EUV light, but building an EUV optical system is prohibitively expensive, as extreme ultraviolet radiation is absorbed by most materials. As a result, you can't just take an EUV photo using a lens and an image sensor; you need specially designed, difficult-to-make, and extremely expensive mirrors. Or you can do away with an imaging system altogether, as in the team's work, capturing EUV light through diffractive imaging instead.
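Diffractive, or lensless, imaging works because the far-field pattern reaching the detector is well approximated by the squared magnitude of the Fourier transform of the light leaving the sample. The sensor records intensity only, so the phase of the light is lost, which is exactly why a computational reconstruction step is needed afterwards. A minimal NumPy sketch of that forward picture (illustrative only, not the team's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# A small complex "exit wave" standing in for light leaving the wafer surface.
obj = rng.standard_normal((32, 32)) + 1j * rng.standard_normal((32, 32))

def far_field_intensity(wave):
    """Squared magnitude of the 2-D Fourier transform: the far-field pattern."""
    return np.abs(np.fft.fft2(wave)) ** 2

pattern = far_field_intensity(obj)

# The detector can't tell these two waves apart -- a global phase shift
# leaves the recorded intensity unchanged, so phase information is lost:
shifted = obj * np.exp(1j * 0.7)
same_pattern = far_field_intensity(shifted)
```

The fact that physically different waves can produce identical recorded patterns is what makes the reconstruction an "inverse problem" rather than a simple readout.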
That introduces a new problem, however: while the team was able to use a high harmonic generation (HHG) device as a tabletop EUV source and an EUV-sensitive image sensor to capture the diffracted far-field light in free space, the resulting data is far from what you could call an image. The final piece of the puzzle is heavy computation: reconstruction algorithms accelerated using TensorFlow running on a high-performance graphics processor, an approach already in heavy use amid the current artificial intelligence (AI) boom. "The reconstruction is done iteratively by 'solving the inverse problem,'" the team explains.
"While the forward problem is to build a model to simulate the process of ptychography [computational microscopic imaging based on diffraction] based on model parameters," the researchers continue, "the related inverse problem is to retrieve model parameters from experimental data, which are the input and output pairs of the model. To update model parameters, we also need a loss (error) function to compare the predicted and measured diffraction patterns and comput[e] the gradient of the model parameters."
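In the spirit of that description, here's a toy sketch of solving an inverse problem by gradient descent: simulate a measurement with a forward model, compare prediction against measurement with a loss function, and follow the gradient back to the model parameters. It uses NumPy rather than TensorFlow, and a simple linear forward model stands in for the far more complex ptychography model; every name and number here is illustrative, not taken from the team's work.

```python
import numpy as np

rng = np.random.default_rng(1)

A = rng.standard_normal((40, 10))   # stand-in forward model (illustrative)
x_true = rng.standard_normal(10)    # the "model parameters" we want to recover
y_meas = A @ x_true                 # simulated measurement data

x = np.zeros(10)                    # initial guess at the parameters
lr = 0.005                          # gradient-descent step size
for _ in range(2000):
    residual = A @ x - y_meas       # predicted minus measured
    # loss = sum(residual**2); its gradient with respect to x is:
    grad = 2 * A.T @ residual
    x -= lr * grad                  # iterative update, as in the team's scheme

recon_error = np.linalg.norm(x - x_true)
```

In the real system, TensorFlow's automatic differentiation computes the gradient through the full ptychography forward model on a GPU, rather than it being written out by hand as above.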
The team's work has been published in the journal Light: Science & Applications under open-access terms.