Postdoctoral Fellow/Research Scientist – AI Foundation: Models, Algorithms & Safety

Zhongguancun Academy - Zhongguancun Institute of Artificial Intelligence


Job description

We are seeking exceptional, visionary, and deeply motivated Postdoctoral Fellows and Research Scientists to join our AI foundation research team.

  1. Next-Generation AI Architectures & Capabilities

The era of monolithic, autoregressive transformers, while powerful, is just one chapter in the story of AI. We are looking to write the next one. This research thrust focuses on moving beyond current limitations to build models that are more efficient, inherently multimodal, and capable of processing information at an unprecedented scale. Key research questions include (but are not limited to):

  • Non-Autoregressive and Diffusion-Based Generative Models: How can we fundamentally redesign generation processes for superior efficiency, controllability, and quality? We are exploring diffusion models, flow-matching, and other parallel decoding techniques for text, images, audio, and complex structured data.
  • Natively Multimodal Intelligence: The world is not text; it is a rich tapestry of sensory inputs. We aim to move beyond late-fusion and simple embedding alignment to develop architectures that can process and reason across modalities (vision, language, audio, sensor data) from the ground up. How do we build a truly unified representation space?
  • Conquering Ultra-Long Context: The ability to process and reason over vast contexts is a cornerstone of higher intelligence. We are investigating novel architectural solutions—from state-space models and advanced attention mechanisms to new memory paradigms—to enable models to understand and synthesize information from millions of tokens, entire codebases, or extensive scientific literature.
  2. Foundational Algorithms for AGI

AGI will not emerge from scaling existing models alone; it requires a new algorithmic foundation for learning, reasoning, and adaptation. This research area is dedicated to discovering and refining the core mechanisms that will enable machines to learn continuously, make robust decisions in complex environments, and evolve autonomously. Key research directions include:

  • Advanced Reinforcement Learning: From sample-efficient offline RL to multi-agent coordination and hierarchical RL, we are developing algorithms that can learn complex behaviors and strategies in dynamic, uncertain worlds.
  • Multi-Objective & Black-Box Optimization: Real-world problems rarely have a single, simple objective. We research methods to navigate complex trade-offs (e.g., performance vs. safety vs. efficiency) and to optimize systems where gradients are unavailable or intractable, which is crucial for interacting with real-world systems.
  • Unsupervised and Self-Supervised Learning at Scale: The future of AI is label-free. We are pioneering next-generation self-supervised methods that can learn rich, transferable representations from vast, unlabeled multimodal data, forming the bedrock of our foundational models.
  • Continual Learning and Autonomous Evolution: We aim to create systems that never stop learning. This involves overcoming catastrophic forgetting, enabling positive knowledge transfer over a lifetime of experience, and exploring mechanisms for models to self-improve and evolve their own architectures and learning algorithms.
  3. Interpretable, Trustworthy, and Safe AI

As AI systems become more powerful and autonomous, ensuring their safety, transparency, and alignment with human values is not an option—it is a necessity. This research pillar is dedicated to the science of building trustworthy AI. We believe safety and interpretability should be designed in, not bolted on. Key research problems include:

  • Mechanistic Interpretability: Moving beyond post-hoc explanations to truly understand the internal computations and circuits within large models. Can we reverse-engineer the “algorithms” that a neural network has learned?
  • Value Alignment & AI Ethics: How do we formally define and embed complex human values and ethical principles into AI systems? We are researching novel techniques, from preference learning to constitutional AI, to ensure that model behavior remains beneficial and predictable.
  • Uncertainty Quantification and Calibration: A reliable system knows what it doesn’t know. We are developing rigorous methods for models to estimate the uncertainty in their predictions and decisions, a critical capability for high-stakes applications like science and medicine.
  • AI Security and Robustness: Proactively securing our models against a growing landscape of threats, including advanced adversarial attacks, data poisoning, and privacy breaches. This involves building inherently robust systems and developing new defense mechanisms.
  4. Fundamental Theory of Deep Learning

Empirical success must be supported by a solid theoretical foundation. This pillar seeks to answer the “why” behind deep learning’s effectiveness and to guide the development of future algorithms through rigorous mathematical analysis. We are committed to building a bridge between theory and practice. Core topics of investigation are:

  • Expressivity, Optimization, and Generalization: What are the fundamental limits of what neural networks can represent? Why do over-parameterized models trained with simple optimizers generalize so well? We explore these questions through the lenses of statistical learning theory, optimization theory, and information theory.
  • Robustness and Stability Analysis: Developing a formal understanding of model stability and robustness to perturbations in the input data or model parameters. This includes leveraging tools from dynamical systems, control theory, and functional analysis.
  • Game Theory and Multi-Agent Dynamics: Analyzing and predicting the emergent behaviors of interacting AI agents using the tools of game theory and mechanism design. This is essential for understanding AI ecosystems and ensuring cooperative outcomes.
  • Bandit Theory and Online Learning: Developing the theoretical underpinnings for efficient exploration-exploitation trade-offs in online decision-making settings, which is crucial for adaptive and interactive AI systems.

Key Responsibilities

  • Define the Frontier: Maintain a forward-looking academic vision and keen technical judgment. Identify, define, and champion ambitious, long-term research agendas with the potential for groundbreaking impact.
  • Publish with Impact: Conduct and lead state-of-the-art research that results in publications in the most prestigious and competitive academic venues (e.g., NeurIPS, ICML, ICLR, CVPR, ACL, Nature, Science). Your work should not only be published but should also influence the direction of the academic community.
  • Drive Open Science: Make a tangible impact on the open-source community by releasing high-quality, well-documented code, novel datasets, and pre-trained models. Champion the principles of reproducible and open research.
  • Mentor and Cultivate Talent: Guide and mentor junior researchers, PhD student interns, and visiting scholars. Foster their intellectual growth, co-author publications, and help cultivate the next generation of leading AI researchers.
  • Collaborate and Communicate: Engage in deep technical discussions and collaborations with a diverse, interdisciplinary team of world-class researchers and engineers. Clearly articulate complex research ideas and contribute to a shared intellectual environment that propels projects forward.

Minimum Qualifications

  • A Ph.D. in Computer Science, Artificial Intelligence, Machine Learning, Statistics, Mathematics, Electrical Engineering, or a related technical field.
  • A strong publication record, with first-author papers in top-tier, peer-reviewed AI/ML conferences or journals (e.g., NeurIPS, ICML, ICLR, CVPR, ACL, TPAMI).
  • Excellent communication and interpersonal skills, with the ability to articulate complex technical concepts clearly and collaborate effectively in a team environment.
  • Demonstrated proficiency in scientific programming (e.g., Python) and deep learning frameworks (e.g., PyTorch).

Preferred Qualifications

  • Demonstrated ability to initiate, lead, and complete a research agenda from ideation to high-impact publication.
  • Postdoctoral research experience or multiple years of research experience in an academic or industrial lab setting.
  • A track record of highly cited work, indicating significant influence and impact within the research community.
  • Experience as a reviewer, area chair (AC), or program committee (PC) member for top-tier AI/ML conferences and journals, or as an organizer of workshops or tutorials.
  • Recipient of prestigious honors, such as best paper awards at major conferences, dissertation awards, or notable academic fellowships.
  • Significant contributions to well-known open-source projects or a personal portfolio of impactful open-source research code.
  • Experience with large-scale distributed training and high-performance computing (HPC) environments.


Apply now
