
Austin-based photonics startup Neurophos has raised $110M in a Series A funding round to accelerate development of its optical processing units designed for high-performance, energy-efficient AI inference.
The company is building tiny optical processors designed to deliver significantly higher speed and lower power consumption than traditional silicon-based AI chips.
The round was led by Gates Frontier, with participation from Microsoft M12, Carbon Direct, Aramco Ventures, Bosch Ventures, Tectonic Ventures, and Space Capital.
From metamaterials research to AI hardware
Neurophos emerged from research originally conducted at Duke University, where researchers explored metamaterials for manipulating electromagnetic waves. Building on this foundation, the company has developed a metasurface-based optical modulator that can function as a tensor core, performing matrix-vector multiplication at the heart of modern AI workloads.
Instead of relying on transistors and electrical gates, Neurophos’ approach uses light to perform computation. Thousands of these modulators can be integrated onto a single chip, forming what the company calls an optical processing unit designed specifically for AI inference.
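To make the core operation concrete, here is a minimal NumPy sketch of what an optical tensor core computes. The modulator model and the sizes used are illustrative assumptions, not Neurophos's actual design: each modulator's transmission stands in for one weight, and the matrix-vector product that light performs in a single pass is the same arithmetic an electronic chip does one multiply-accumulate at a time.

```python
import numpy as np

# Toy model of an optical tensor core: each metasurface modulator's
# transmission encodes one weight W[i, j]; shining the input vector x
# through the array yields the full matrix-vector product y = W @ x
# in a single optical pass.
rng = np.random.default_rng(42)
W = rng.uniform(0.0, 1.0, size=(4, 8))   # modulator transmissions (weights)
x = rng.uniform(0.0, 1.0, size=8)        # input light intensities

# Electronically, the same result takes M * N sequential multiply-accumulates:
y_electronic = np.array([sum(W[i, j] * x[j] for j in range(8)) for i in range(4)])

# Optically, all of those multiply-accumulates happen at once as the
# light traverses the modulator array; mathematically it is simply:
y_optical = W @ x

assert np.allclose(y_electronic, y_optical)
```

The inner loop that an electronic chip executes step by step is, in the optical version, performed by the physics of light passing through the modulator array, which is where the speed and energy claims originate.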
Tackling energy efficiency in AI compute
One of the biggest constraints facing AI infrastructure is power consumption. As models scale, traditional GPUs and TPUs consume increasing amounts of energy and generate substantial heat. Neurophos argues that optical computing offers a structural advantage, as light-based computation produces less heat and enables faster data movement.
The company claims its metasurface modulators are orders of magnitude smaller than conventional optical components, allowing dense integration and reducing the need for frequent digital-to-analog conversions, a common bottleneck in photonic systems.
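As a rough illustration of why conversion frequency matters, the toy model below (an assumption-laden sketch, not a description of Neurophos's architecture) compares a hybrid pipeline that crosses the digital/analog boundary after every layer with a densely integrated analog pipeline that converts only at the ends.

```python
import numpy as np

def convert(x, bits=8):
    # Model one DAC or ADC stage as rounding to a fixed-precision grid.
    scale = (2 ** (bits - 1) - 1) / max(np.max(np.abs(x)), 1e-12)
    return np.round(x * scale) / scale

rng = np.random.default_rng(0)
layers = [rng.standard_normal((64, 64)) / 8 for _ in range(6)]
x = rng.standard_normal(64)

# Full-precision reference result.
ref = x.copy()
for W in layers:
    ref = W @ ref

# Hybrid pipeline: DAC into the analog domain and ADC back out at
# every layer -> 2 conversion stages per layer.
hybrid, hybrid_conversions = x.copy(), 0
for W in layers:
    hybrid = convert(W @ convert(hybrid))
    hybrid_conversions += 2

# Densely integrated pipeline: stay in the analog domain across all
# layers, converting once on the way in and once on the way out.
fused, fused_conversions = convert(x), 2
for W in layers:
    fused = W @ fused
fused = convert(fused)

print(f"hybrid: {hybrid_conversions} conversions, "
      f"fused: {fused_conversions} conversions")
```

The conversion counts, not the exact error figures, are the point: every boundary crossing costs energy and precision, so the denser the optical integration, the more of the computation stays on the cheap side of that boundary.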
Performance claims and early market interest
According to Neurophos, its optical processor is designed to operate at extremely high frequencies, delivering substantially higher peak operations per second than current-generation AI GPUs while consuming less power. The company says it has already signed multiple early customers and reports strong interest from hyperscalers and enterprise AI infrastructure teams.
While the AI chip market remains dominated by incumbent players, Neurophos positions its technology as a fundamentally different approach rather than an incremental improvement on existing silicon architectures.
Path to production and use of funds
Neurophos expects its first chips to reach the market by mid-2028. In the meantime, the new funding will be used to build its first integrated photonic compute systems, including data center-ready modules, a full software stack, and early-access hardware for developers.
The company is also expanding its headquarters in Austin and opening a new engineering site in San Francisco. Neurophos says its chips are designed to be manufactured using standard silicon foundry processes, aiming to avoid the mass-production challenges that have historically limited optical computing.
As AI inference workloads continue to grow, Neurophos is betting that optical processors can deliver the step-change in efficiency and performance required to support the next generation of AI systems.