About Aldea
Headquartered in Miami, Aldea is a next-generation AI company focused on voice-based clinical and expert applications. Our flagship product, Advisor, uses proprietary AI to scale the impact of world-class minds across personal development, finance, parenting, relationships, and more, with faster, more cost-effective performance than traditional models.
As a multidisciplinary team of builders, researchers, and product thinkers, we value clear thinking, sharp writing, and strong intuition for what people need.
Aldea is a multi-modal foundational AI company reimagining the scaling laws of intelligence. We believe today's architectures create unnecessary bottlenecks for the evolution of software. Our mission is to build the next generation of foundational models that power a more expressive, contextual, and intelligent human–machine interface.
This is a rare opportunity to join an early-stage startup that will help define a new category.
The Role
We are hiring a Foundational AI Research Scientist (LLMs) to pioneer next-generation large-language-model architectures. Your work will focus on designing, prototyping, and empirically validating efficient transformer variants and attention mechanisms that can scale to production-grade systems.
You'll explore cutting-edge ideas in efficient sequence modeling, architecture design, and distributed training, building the foundations for Aldea's next-generation language models. This role is ideal for researchers who combine deep theoretical grounding with hands-on systems experience.
What You'll Do
- Research and prototype sub-quadratic attention architectures to unlock efficient scaling of large language models.
- Design and evaluate efficient attention mechanisms including state-space models (e.g., Mamba), linear attention variants, and sparse attention patterns.
- Lead pre-training initiatives across a range of model scales from 1B to 100B+ parameters.
- Conduct rigorous experiments measuring the efficiency, performance, and scaling characteristics of novel architectures.
- Collaborate closely with product and engineering teams to integrate models into production systems.
- Stay at the forefront of foundational research and help shape Aldea's long-term model roadmap.
Minimum Qualifications
- Ph.D. in Computer Science, Engineering, or a related field.
- 3+ years of relevant industry experience.
- Deep understanding of modern sequence modeling architectures including State Space Models (SSMs), Sparse Attention mechanisms, Mixture of Experts (MoE), and Linear Attention variants.
- Hands-on experience pre-training large language models across a range of scales (1B+ parameters).
- Expertise in PyTorch, Transformers, and large-scale deep-learning frameworks.
- Proven ability to design and evaluate complex research experiments.
- Demonstrated research impact through patents, deployed systems, or core-model contributions.
Nice to Have
- Experience with distributed training frameworks and multi-node optimization.
- Knowledge of GPU acceleration, CUDA kernels, or Triton optimization.
- Publication record in top-tier ML venues (NeurIPS, ICML, ICLR) focused on architecture research.
- Experience with model scaling laws and efficiency-performance tradeoffs.
- Background in hybrid architectures combining attention with alternative sequence modeling approaches.
- Familiarity with training stability techniques for large-scale pre-training runs.
Compensation & Benefits
- Competitive base salary
- Performance-based bonus aligned with research and model milestones
- Equity participation
- Flexible Paid Time Off
- Comprehensive health, dental, and vision coverage
