As the race toward 2nm nodes and advanced 3D architectures intensifies, AI-driven discovery of new materials will be key to achieving a competitive advantage.
www.eetimes.com, Jul. 01, 2025 –
From mobile processors to memory chips and sensors, each new generation of semiconductor devices demands materials that are thinner, faster, and more thermally efficient.
As fabrication processes move toward 2nm nodes, 3D integration, 2D materials for ultra-thin channels, high-k dielectrics, and heterogeneous packaging, the need for novel materials used in etching, deposition, and thermal management has never been greater. But discovering and validating these materials is not just a manufacturing challenge; it’s a simulation one.
The simulation and validation of new compounds are increasingly constrained by the limits of traditional modeling tools such as Density Functional Theory (DFT). DFT is prized for its accuracy, but it is notoriously slow and compute-intensive.
Running a single calculation for a moderately complex material can take hours or even days, even on a high-performance computing cluster. Throughput is equally limited: DFT simulations do not scale well when evaluating tens of thousands, or even millions, of candidate compounds or structures.
Finally, DFT is hard to parallelize at scale: unlike AI or big-data workloads, many atomistic simulations carry sequential dependencies that prevent them from being distributed efficiently.
In short, the pace of discovery with DFT cannot keep up with industry needs, especially in sectors such as semiconductors, where rapid product cycles demand continuous material innovation.
The gap between simulation time and innovation cycles means that by the time researchers fully validate a new material, the technology node it targets may have already become obsolete.
To overcome these limitations, new AI-driven approaches are emerging. One of the most promising is the use of neural network potentials (NNPs). Trained on large-scale atomic datasets, NNPs now offer DFT-level accuracy with simulation speeds orders of magnitude faster.
Among these, the PreFerred Potential (PFP) model—developed and refined by Preferred Computational Chemistry, Inc. (PFCC)—is setting a new standard. PFP is a universal NNP that supports interactions across 96 elements, enabling researchers to simulate a broad range of material systems.
Unlike DFT, which recalculates interatomic forces from first principles every time, PFP leverages learned patterns to infer outcomes, delivering DFT-level accuracy at speeds up to 20 million times faster.
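The idea of inferring from learned patterns rather than recomputing from first principles can be illustrated with a deliberately simple toy (this is not PFP itself): a surrogate potential is fit to a handful of expensive reference calculations, then evaluated cheaply at thousands of new geometries. Here a Lennard-Jones dimer curve stands in for the expensive first-principles reference, and a polynomial fit stands in for the neural network.

```python
# Toy illustration of the surrogate-potential idea (not PFP): learn from a few
# expensive reference evaluations, then infer cheaply at new geometries.
import numpy as np

def reference_energy(r):
    """Stand-in for an expensive first-principles calculation:
    a Lennard-Jones dimer with Ar-like parameters (energies in eV)."""
    return 4 * 0.0103 * ((2.57 / r) ** 12 - (2.57 / r) ** 6)

# "Train": sample the expensive reference at a handful of bond lengths.
r_train = np.linspace(3.0, 6.0, 30)
e_train = reference_energy(r_train)
coeffs = np.polyfit(r_train, e_train, deg=8)   # the learned surrogate

# "Infer": evaluate the surrogate at thousands of new geometries at once,
# with no further reference calculations needed.
r_new = np.linspace(3.2, 5.8, 5000)
e_pred = np.polyval(coeffs, r_new)

max_err = np.max(np.abs(e_pred - reference_energy(r_new)))
print(f"max surrogate error: {max_err:.2e} eV")
```

A real NNP replaces the polynomial with a deep network trained on large DFT datasets spanning many elements and structures, but the workflow shape is the same: pay the reference cost once during training, then amortize it over every subsequent evaluation.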
PFP also enables large-scale screening of material candidates across vast chemical and structural design spaces, a feat that is not feasible with DFT due to time and cost constraints.
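A screening workflow of this kind can be sketched as follows. This is a toy, not PFCC's actual pipeline: the scoring function below is a made-up placeholder standing in for an NNP energy evaluation, and the design space is a trivially small list of hypothetical binary compositions.

```python
# Sketch of NNP-enabled screening: rank many candidates with a fast surrogate,
# then shortlist the best few for expensive first-principles validation.
import itertools

def surrogate_score(composition):
    """Placeholder 'formation energy' (lower = better). A real workflow would
    relax each structure with the NNP and return its computed energy."""
    return sum(ord(ch) * n for el, n in composition.items() for ch in el) % 100 / 10.0

elements = ["Hf", "Zr", "O", "N", "Si"]
# Enumerate a toy design space of ordered binary compositions (A1B2).
candidates = [{a: 1, b: 2} for a, b in itertools.permutations(elements, 2)]

# Screen the whole space with the cheap surrogate; only the shortlist
# would go on to DFT validation.
shortlist = sorted(candidates, key=surrogate_score)[:5]
print(f"screened {len(candidates)} candidates; shortlisted {len(shortlist)}")
```

The point of the funnel structure is economic: the cheap surrogate touches every candidate in the space, while the expensive first-principles method is reserved for the handful that survive screening.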
Other benefits include broad element coverage, which lets researchers explore a wider array of materials; the ability to scale to large systems; and lower computational costs. Moreover, because PFP is universal, it requires no prior fine-tuning for a given material.
These advantages make PFP a game-changer for the industry as it pushes towards novel materials.
While those benefits are significant, they only tell part of the story: true success with PFP stems from how effectively it is applied in real-world R&D workflows.
Helping semiconductor leaders move forward more quickly is a key part of PFCC's role. By integrating PFP into their discovery workflows, customers can reduce simulation costs, explore broader material landscapes, and maintain the agility needed to keep pace with Moore's Law.
For example, researchers have used PFP to rapidly identify new dielectric materials with enhanced breakdown voltages, simulate multi-layer thin films for advanced packaging architectures, and accelerate catalyst screening for etching chemistries at sub-3nm nodes.