Andy Nightingale, VP of Product Marketing at Arteris – Interview Series

unite.ai, Apr. 22, 2025 – 

Andy Nightingale, VP of Product Marketing at Arteris, is a seasoned global business leader with a diverse background in engineering and product marketing. He's a Chartered Member of the British Computer Society and the Chartered Institute of Marketing, and has over 35 years of experience in the high-tech industry.

Throughout his career, Andy has held a range of roles, including engineering and product management positions at Arm, where he spent 23 years. In his current role as VP of Product Marketing at Arteris, Andy oversees the Magillem system-on-chip deployment tooling and the FlexNoC and Ncore network-on-chip products.

Arteris is a catalyst for system-on-chip (SoC) innovation as the leading provider of semiconductor system IP for the acceleration of SoC development. Arteris Network-on-Chip (NoC) interconnect intellectual property (IP) and SoC integration technology enable higher product performance with lower power consumption and faster time to market, delivering proven flexibility and better economics for system and semiconductor companies, so innovative brands are free to dream up what comes next.

With your extensive experience at Arm and now leading product management at Arteris, how has your perspective on the evolution of semiconductor IP and interconnect technologies changed over the years? What key trends excite you the most today?

It’s been an extraordinary journey—from my early days writing test benches for ASICs at Arm to helping shape product strategy at Arteris, where we're at the forefront of interconnect IP innovation. Back in 1999, system complexity was accelerating rapidly, but the focus was still primarily on processor performance and basic SoC integration. Verification methodologies were evolving, but the interconnect was often seen as fixed infrastructure—necessary but not strategic.

Fast-forward to today, and interconnect IP has become a critical enabler of SoC scalability, power efficiency, and AI/ML performance. The rise of chiplets, domain-specific accelerators, and multi-die architectures has placed immense pressure on interconnect technologies to become more adaptive, intelligent, and both physically and software-aware.

One of the most exciting trends I see is the convergence of AI and interconnect design. At Arteris, we’re exploring how machine learning can optimize NoC topologies, intelligently route data traffic, and even anticipate congestion to improve real-time performance. This is not just about speed—it's about making systems smarter and more responsive.

What excites me is how semiconductor IP is becoming more accessible to AI innovators. With high-level SoC configuration IP and abstraction layers, startups in automotive, robotics, and edge AI can now leverage advanced interconnect architectures without needing a deep background in RTL design. That democratization of capability is enormous.

Another key shift is the role of virtual prototyping and system-level modeling. Having worked on ESL (Electronic System Level) tools early in my career, I find it rewarding to see those methodologies now enabling early AI workload evaluation, performance prediction, and architectural trade-off analysis long before silicon is taped out.

Ultimately, the future of AI depends on how efficiently we move data—not just how fast we process it. That’s why I believe the evolution of interconnect IP is central to the next generation of intelligent systems.

Arteris' FlexGen leverages AI-driven automation and machine learning to automate NoC (Network-on-Chip) topology generation. How do you see AI’s role evolving in chip design over the next five years?

AI is fundamentally transforming chip design, and over the next five years, its role will only deepen—from productivity aid to intelligent design partner. At Arteris, we’re already living that future with FlexGen, where AI, formal methods, and machine learning are central to automating Network-on-Chip (NoC) topology optimization and SoC integration workflows.

What sets FlexGen apart is its blend of ML algorithms, combined to initialize floorplans from images, generate topologies, configure clocks, reduce clock domain crossings, and optimize the connectivity topology together with its placement and routing bandwidth, streamlining communication between IP blocks. Moreover, this is all done deterministically, meaning results can be replicated and incremental adjustments made, enabling predictable, best-in-class results for use cases ranging from AI assistance for an expert SoC designer to creating the right NoC for a novice.
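To make the determinism point above concrete, here is a minimal, purely hypothetical Python sketch. It is not FlexGen or any Arteris algorithm; it simply connects IP blocks on a toy floorplan with a minimum spanning tree, using fixed tie-breaking so the same input always reproduces the same topology, which is the replicability property described above.

```python
# Purely illustrative sketch: NOT FlexGen or any Arteris algorithm.
# Demonstrates deterministic topology generation: identical block placements
# always yield identical interconnect links, so results are replicable.

def manhattan(a, b):
    """Routing-style distance between two block centers (x, y)."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def generate_topology(blocks):
    """Connect IP blocks with a minimum spanning tree (Prim's algorithm).

    blocks: dict mapping block name -> (x, y) floorplan coordinate.
    Returns a list of (block_a, block_b, distance) links.
    Tie-breaking on (distance, name) keeps the output deterministic.
    """
    names = sorted(blocks)              # fixed iteration order
    connected, links = {names[0]}, []
    while len(connected) < len(names):
        candidates = [
            (manhattan(blocks[u], blocks[v]), u, v)
            for u in sorted(connected)
            for v in names if v not in connected
        ]
        dist, u, v = min(candidates)    # deterministic tie-break
        connected.add(v)
        links.append((u, v, dist))
    return links

if __name__ == "__main__":
    # Hypothetical floorplan coordinates, chosen only for illustration.
    floorplan = {"cpu": (0, 0), "npu": (4, 1), "ddr": (2, 3), "io": (5, 4)}
    for a, b, d in generate_topology(floorplan):
        print(f"{a} <-> {b}  (hops: {d})")
```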

Over the next five years, AI’s role in chip design will shift from assisting human designers to co-designing and co-optimizing with them—learning from every iteration, navigating design complexity in real time, and ultimately accelerating the delivery of AI-ready chips. We see AI not just making chips faster but making faster chips smarter.
