About TylSemi, Inc.
The Opportunity
The AI infrastructure market is exploding. Every hyperscaler, every cloud provider, every AI company is building custom silicon. But they all face the same problem: how do you connect hundreds of chips, deliver clean power at scale, and move terabits of data without melting the package?
That's what we solve. TylSemi builds the chiplet infrastructure IP — the IO, power delivery, and interconnect building blocks — that makes AI/HPC systems actually work at scale.
This isn't a nice-to-have. It's the critical path.
Why Now
The Market Window
The semiconductor industry is going through its biggest architectural shift in 40 years:
• Moore's Law is dead. Nodes at 2nm and beyond deliver only marginal performance gains. The future is chiplets, not monolithic dies.
• Custom silicon is now mainstream. Google, Microsoft, Amazon, Meta, OpenAI — they're all designing their own ASICs. The $50B custom silicon market is growing 30% annually.
• IO and power are the bottleneck. Die-to-die IO, power delivery, and interconnect are now the hardest problems in chiplet-based systems — and solving them creates a category of its own.
Translation: We're entering the market at exactly the moment when every major AI/HPC player needs what we're building, and their alternatives are disappearing.
Culture & Team: How We Work
No Politics, No Bureaucracy
There are no layers, no approval chains, no corporate theater.
• If you have an idea, we test it. If it works, we ship it.
• No endless meetings, no PowerPoint presentations to convince middle management.
Remote-Friendly, Global Team
• US team: Bay Area preferred, but we hire the best people regardless of location
• India team: Building a world-class design center in Bangalore
Move Fast, Ship Real Products
We're not a research project. We have paying customers, committed capital, and aggressive timelines.
This is a company, not a lifestyle business. We're building to win.
What We Value
• Ownership mindset. You're not here to execute someone else's roadmap. You're here to define it.
• Bias for action. We move fast. Analysis paralysis doesn't fly here.
• Deep technical expertise. This is hard engineering. We need people who've shipped real silicon and debugged real hardware.
• Low ego, high standards. We don't care about titles or politics. We care about results.
The Ask
If you're reading this, you're probably comfortable. You have a good job at a stable company with all the benefits.
We're asking you to walk away from that and bet on us.
Here's why you should:
• The market is real. AI infrastructure spending is $200B+ annually and growing 40% YoY. Every hyperscaler needs what we're building.
• The team has done this before. We've built and exited semiconductor companies at scale. This isn't our first rodeo.
• The traction is de-risked. We have LOIs, strategic investors, and a clear path to revenue.
• The work is consequential. You're not optimizing someone's ad click-through rate. You're building the silicon infrastructure that powers AI.
This is the bet. Join us and build something that matters.
Or stay comfortable. No judgment.
But if you're the kind of person who wants to take the shot, we'd love to talk.
READY TO JOIN?
Role Overview
We are looking for a hands-on and highly strategic IT & Infrastructure Admin to build and manage the end-to-end compute, storage, network, and EDA infrastructure required for designing complex SoCs across digital and analog domains.
This role goes beyond traditional IT—it requires deep ownership of EDA environments, compute strategy (cloud vs on-prem), cost optimization, and AI infrastructure enablement, ensuring high performance, scalability, and reliability for engineering teams.
Key Responsibilities
EDA & Engineering Infrastructure
• Own setup, deployment, and management of EDA tools and environments for:
  ◦ Digital design and verification
  ◦ Analog and custom design flows
• Manage tool installations, upgrades, and compatibility across flows
• Drive EDA license management, including:
  ◦ Forecasting demand across teams and projects
  ◦ Optimizing utilization and cost
  ◦ Vendor coordination and negotiations
• Ensure high availability and performance of compute farms and storage systems
Compute & Platform Strategy
• Define and execute strategy for cloud vs on-prem infrastructure:
  ◦ Evaluate AWS (or other cloud platforms) vs owned/rented servers
  ◦ Build cost models and ROI analysis for different scaling scenarios
• Design scalable infrastructure for:
  ◦ Large regressions (DV workloads)
  ◦ RTL synthesis and physical design
  ◦ Analog simulations (compute-intensive workloads)
• Optimize job scheduling, workload distribution, and resource utilization
Network & Systems Management
• Design and manage high-performance network infrastructure:
  ◦ Low-latency, high-throughput connectivity for EDA workloads
  ◦ Secure remote access for distributed teams
• Manage:
  ◦ Servers, storage (NAS/SAN), and backup systems
  ◦ OS environments (primarily Linux-based)
  ◦ Data security, access control, and disaster recovery
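Much of this systems work is routinely automated with small scripts. As an illustrative sketch only — the mount points and the 90% threshold below are hypothetical placeholders, not our actual configuration — a short Python check can flag compute-farm filesystems that are unmounted or nearly full:

```python
import os
import shutil

# Hypothetical NFS/NAS mount points used by the compute farm.
MOUNTS = ["/tools/eda", "/projects", "/scratch"]
USAGE_ALERT = 0.90  # alert when a filesystem is more than 90% full

def check_mounts(mounts, alert_at=USAGE_ALERT):
    """Return (mount, problem) tuples for any unhealthy mounts."""
    problems = []
    for mount in mounts:
        if not os.path.ismount(mount):
            problems.append((mount, "not mounted"))
            continue
        usage = shutil.disk_usage(mount)
        fraction = usage.used / usage.total
        if fraction > alert_at:
            problems.append((mount, f"{fraction:.0%} full"))
    return problems

if __name__ == "__main__":
    for mount, problem in check_mounts(MOUNTS):
        print(f"ALERT {mount}: {problem}")
```

In practice a check like this would feed a monitoring system rather than print to stdout; the point is that storage and backup health gets verified continuously, not on complaint.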
AI Infrastructure & Enablement
• Support deployment and scaling of AI/ML infrastructure for engineering workflows
• Work with AI and engineering teams to:
  ◦ Enable AI agent workflows
  ◦ Optimize compute usage (GPU/CPU allocation)
• Define and enforce AI usage guardrails, including:
  ◦ Data security and IP protection
  ◦ Safe usage policies for internal and external AI tools
• Manage token usage, cost tracking, and access control for AI platforms
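Token-level cost tracking can start very simply. The sketch below is hypothetical — the model names and per-1K-token prices are placeholders, not real vendor pricing — but it shows the shape of the bookkeeping: accumulate spend per user and flag anyone over a monthly budget:

```python
from collections import defaultdict

# Placeholder per-1K-token prices; real pricing varies by vendor and model.
PRICE_PER_1K = {"model-a": 0.01, "model-b": 0.03}

class TokenLedger:
    """Track per-user AI token spend against a monthly budget (USD)."""

    def __init__(self, monthly_budget_usd):
        self.budget = monthly_budget_usd
        self.spend = defaultdict(float)

    def record(self, user, model, tokens):
        """Record one API call's token usage; return its dollar cost."""
        cost = tokens / 1000 * PRICE_PER_1K[model]
        self.spend[user] += cost
        return cost

    def over_budget(self):
        """Users whose month-to-date spend exceeds the budget."""
        return [u for u, s in self.spend.items() if s > self.budget]

ledger = TokenLedger(monthly_budget_usd=50.0)
ledger.record("alice", "model-a", 200_000)    # ~$2, well under budget
ledger.record("bob", "model-b", 2_000_000)    # ~$60, over the $50 budget
```

A real deployment would pull usage from the AI platform's billing or usage API and tie access control to the same ledger; the per-user budget is the guardrail.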
Planning, Forecasting & Cost Optimization
• Develop and maintain forecasts for:
  ◦ Compute infrastructure (cloud + on-prem)
  ◦ EDA licenses
  ◦ Storage and network capacity
• Continuously optimize for cost vs performance vs scalability trade-offs
• Provide leadership with data-driven recommendations on infrastructure investments
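The cloud-vs-on-prem question often reduces to a breakeven-utilization calculation. The numbers below are illustrative placeholders (not quotes from any vendor), but the structure of the model is the point: amortized capex plus overhead on one side, pay-per-use on the other:

```python
def monthly_cloud_cost(hourly_rate, hours_used):
    """Pay-per-use cloud cost for one server-equivalent per month."""
    return hourly_rate * hours_used

def monthly_onprem_cost(capex, amortization_months, opex_per_month):
    """Amortized hardware cost plus power/cooling/admin overhead."""
    return capex / amortization_months + opex_per_month

def breakeven_hours(hourly_rate, capex, amortization_months, opex_per_month):
    """Monthly usage hours above which owning beats renting."""
    onprem = monthly_onprem_cost(capex, amortization_months, opex_per_month)
    return onprem / hourly_rate

# Illustrative inputs: a $3/hr cloud instance vs a $30k server
# amortized over 36 months with $200/month overhead.
hours = breakeven_hours(hourly_rate=3.0, capex=30_000,
                        amortization_months=36, opex_per_month=200)
# ~344 hours/month with these numbers: above that, on-prem wins on cost.
```

A real model would also account for burst capacity, data-egress fees, and utilization variance across regression peaks, but even this simple version turns the strategy debate into a number leadership can act on.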
Required Qualifications
• Bachelor’s degree in Computer Science, Electrical Engineering, or a related field
• 10+ years of experience in IT infrastructure / systems engineering, preferably in semiconductor or EDA environments
• Strong experience with:
  ◦ EDA tool environments (Synopsys, Cadence, Siemens/Mentor)
  ◦ Linux system administration
  ◦ Compute cluster management and job schedulers (LSF, Slurm, etc.)
• Experience managing large-scale compute and storage systems
• Strong understanding of networking fundamentals (high-performance networks preferred)
• Experience with cloud platforms (AWS preferred)
Preferred Qualifications
• Experience supporting SoC design teams (RTL, DV, Analog)
• Familiarity with analog simulation environments and their compute demands
• Experience with hybrid cloud architectures
• Exposure to GPU infrastructure and AI/ML workloads
• Scripting skills (Python, Bash, etc.) for automation
• Experience with security and compliance in IP-sensitive environments
Key Attributes
• Strong ownership and end-to-end accountability mindset
• Ability to balance technical depth with strategic decision-making
• Bias toward automation, scalability, and efficiency
• Strong problem-solving and operational excellence
• Comfortable working in a fast-paced startup environment
Success Metrics
• Reliable, scalable infrastructure supporting high engineering productivity
• Optimized EDA license utilization and cost efficiency
• Effective cloud vs on-prem strategy with measurable ROI
• Minimal downtime and high system availability
• Secure and efficient AI infrastructure adoption
• Ability to scale infrastructure seamlessly with company growth
