About Neurophos:
We are developing an ultra-high-performance, energy-efficient photonic AI inference system. We’re transforming AI computation with the first-ever metamaterial-based optical processing unit (OPU).
As AI adoption accelerates, data centers face significant power and scalability challenges. Traditional solutions are struggling to keep up, leading to rapidly rising energy consumption and costs. We’re solving both problems with an OPU that integrates over one million micron-scale optical processing components on a single chip. This architecture will deliver up to 100 times the energy efficiency of existing solutions while significantly improving large-scale AI inference performance.
We’ve assembled a world-class team of industry veterans and recently raised a $110M Series A led by Gates Frontier. Participating investors include M12 (Microsoft’s venture fund), Carbon Direct Capital, Aramco Ventures, Bosch Ventures, Tectonic Ventures, Space Capital, and others. We have also been recognized on the EE Times Silicon 100 list for several consecutive years.
Join us and shape the future of optical computing!
Location: San Francisco Bay Area or Austin, TX. Full-time onsite position.
Position Overview:
We are seeking experienced hardware modeling engineers to develop the functional and performance models that define the next generation of Neurophos chips. You will implement models of novel compute blocks, including optical GEMM engines, SRAM vector processors, and dataflow architectures, within our YinYang event-driven framework. This role offers the opportunity to work on cutting-edge hardware that doesn't exist anywhere else while shaping our modeling methodology from the ground up.
Key Responsibilities:
Implement functional models (fmod) of optical compute engines, vector processors, and memory systems
Develop performance models (pmod) with discrete-event timing and power estimation
Work within the YinYang (libyy) event-driven framework to build modular, reusable components
Design clean abstractions and interfaces between hardware blocks
Integrate with Verilator/SystemVerilog for RTL co-simulation and validation
Build trace infrastructure for both coupled and independent simulation modes
Validate models against RTL and contribute to architectural validation efforts
Collaborate with architects, RTL designers, and software engineers
Optimize simulation performance while maintaining modeling fidelity
Qualifications:
BS, MS, or PhD in Computer Engineering, Electrical Engineering, or Computer Science
5-7+ years of experience in hardware modeling, functional simulation, or performance modeling
Strong C++ programming skills (modern C++17/20/23 preferred)
Experience with hardware modeling frameworks, transaction-level modeling, or event-driven simulation
Understanding of computer architecture fundamentals (pipelines, memory systems, accelerators)
Ability to balance modeling fidelity with simulation speed based on analysis objectives
Strong debugging and validation skills for complex hardware models
Effective communication and collaboration across hardware/software teams
Python proficiency for scripting, analysis, and automation
Preferred Skills:
Experience with SystemC, TLM 2.x, or custom event-driven simulation frameworks
Background in accelerator modeling (GPU, TPU, NPU, DSP)
Familiarity with Verilator, SystemVerilog, or RTL co-simulation
Knowledge of memory system modeling (HBM, DRAM, caches)
Understanding of ML workloads and framework internals (PyTorch, TensorFlow)
Experience with performance analysis, profiling, and bottleneck identification
Exposure to power modeling frameworks (McPAT, Cacti)
Background in optical computing, photonics, or analog computing
Experience with trace-driven simulation methodologies
What We Offer:
A pivotal role in an innovative startup redefining the future of AI hardware.
A collaborative and intellectually stimulating work environment.
Competitive compensation, including salary and equity options.
Opportunities for career growth and future team leadership.
Access to cutting-edge technology and state-of-the-art facilities.
Opportunity to publish research and contribute to the field of efficient AI inference.
This is a rare opportunity to work on a game-changing technology at the intersection of photonics and AI. As part of our elite team, you’ll contribute to a platform that redefines computational performance and accelerates the future of artificial intelligence. Be a key player in bringing this transformative innovation to the world.