Learn about neural architecture search, including what it is, how to use it, and which steps you can take to build the foundational knowledge needed to master this machine learning technique.
Neural architecture search (NAS) is a machine learning method that automatically finds the optimal neural network architecture for your task. Here are some important things to know:
• NAS-Bench-101 includes over five million trained NAS models, each with recorded training, validation, and test results, to help you compare NAS model designs [1].
• NAS methods divide into three main components: the search space, the search strategy, and the performance estimation strategy.
• You can use artificial intelligence tools such as Vertex AI Neural Architecture Search and Microsoft Archai to speed and scale the development of your NAS models.
Explore neural architecture search design principles to help you build efficient, accurate, and reproducible search algorithms. If you’re ready to learn more, enroll in the Deep Learning Specialization from DeepLearning.AI. You’ll have the opportunity to build and train deep neural networks, analyze bias and variance in deep learning applications, apply standard techniques and optimization algorithms, and implement neural networks in TensorFlow.
NAS is a technique within automated machine learning (AutoML) that focuses on using algorithms to automatically design the structure of neural networks. Instead of manually designing your neural network architecture, NAS uses a data-driven algorithm to optimize the selection of nodes and connections between them based on your specifications for model size, latency, accuracy, and computational demand.
As neural networks grow larger and more complex, manually selecting architectural variables has become nearly infeasible. Researchers once relied on subject matter expertise, and the design process required a time-intensive experimentation phase to weigh multiple architecture design and specification options. Because of this, researchers and developers now turn to NAS to automate complex workflows, run trial-and-error processes at scale, and find high-performing neural network designs far more efficiently than manual design alone would allow.
A neural network architecture encompasses the components and design of a neural network, including the input data, variable weights, transfer function, activation function, biases, and output function. The architecture of your neural network influences the way your model learns to recognize patterns, including how it transforms data from your inputs to your outputs. Depending on your task, you can choose between several specialized architectures, such as:
• Convolutional neural networks (CNNs): Excel at image recognition and computer vision
• Recurrent neural networks (RNNs): Excel at speech recognition and time series forecasting
• Graph neural networks (GNNs): Excel in data mining, recommender systems, and bioinformatics
• Transformers: Excel in natural language processing and power generative pretrained transformer (GPT) models
Learn more: 4 Types of Neural Network Architecture
When you use NAS, the algorithm generally goes through two phases to determine the optimal network architecture for your use case. First comes the search phase, in which the algorithm explores candidate architectures within a search space. Following this is the evaluation phase, in which the top-performing architectures are assessed and validated with test data before final selection.
You can break these phases into three main components: the search space, the search strategy, and the performance estimation strategy. Consider each in more detail.
The search space is the set of neural architectures available for your NAS algorithm to consider. The definition of your search space depends on your priorities. For example, a larger search space may require more financial investment, but you’re more likely to discover novel architectures. With a smaller search space, you can limit computational costs, but you may not find the absolute best option for your use case.
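As a toy illustration, you can write a search space as a set of discrete choices per design variable, from which a search strategy samples candidates. The field names and value ranges below are hypothetical, not taken from any specific NAS system:

```python
import random

# A toy search space for a small CNN: each field lists the choices the
# NAS algorithm may consider. Names and ranges are illustrative only.
SEARCH_SPACE = {
    "num_layers": [2, 4, 6, 8],
    "filters": [16, 32, 64, 128],
    "kernel_size": [3, 5, 7],
    "activation": ["relu", "gelu", "swish"],
}

def sample_architecture(space, rng=random):
    """Draw one candidate architecture uniformly at random."""
    return {name: rng.choice(options) for name, options in space.items()}

candidate = sample_architecture(SEARCH_SPACE)
```

Enlarging any of these option lists grows the space multiplicatively, which is the trade-off described above: more room to discover novel designs, at a higher computational cost.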
Your search strategy determines how your NAS algorithm explores the search space. Common approaches include:
• Reinforcement learning: Uses controllers (often RNNs) to generate potential architectures and receive feedback on performance. Over time, it learns which design choices lead to stronger models.
• Gradient-based methods: Treat the search space as a continuous function that can be optimized using gradient descent. This allows the algorithm to efficiently and iteratively tune models, reducing memory consumption and related computational requirements.
• Evolutionary algorithms: Simulate natural selection by creating a population of model architectures that evolve through mutation and selection. With each “generation,” the algorithm retains high-performing models, refining them over time to create the optimal design.
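As a minimal sketch of the evolutionary approach, the loop below keeps the better half of a population each generation and refills it with mutated copies of the survivors. The fitness function is a made-up stand-in: a real NAS system would train and validate each candidate instead.

```python
import random

def fitness(arch):
    # Toy stand-in for "train the model, measure validation accuracy."
    # Here, deeper networks with mid-sized filter counts score higher.
    return arch["num_layers"] * 0.1 - abs(arch["filters"] - 64) * 0.001

def mutate(arch, space, rng):
    """Copy an architecture and re-sample one randomly chosen field."""
    child = dict(arch)
    field = rng.choice(list(space))
    child[field] = rng.choice(space[field])
    return child

def evolve(space, generations=20, population_size=10, rng=random):
    # Start from a random population, then repeatedly keep the better
    # half and refill with mutated copies of the survivors.
    population = [
        {k: rng.choice(v) for k, v in space.items()}
        for _ in range(population_size)
    ]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]
        population = survivors + [
            mutate(rng.choice(survivors), space, rng) for _ in survivors
        ]
    return max(population, key=fitness)

SPACE = {"num_layers": [2, 4, 6, 8], "filters": [16, 32, 64, 128]}
best = evolve(SPACE)
```

Because selection is elitist (the best individual always survives), fitness never regresses across generations, which is the refinement behavior described above.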
The performance estimation phase involves evaluating potential architectures for their effectiveness. In an ideal scenario, your algorithm could perform full training and validation on each option. However, because of the computational demand this requires, several more efficient methods are common in real-world applications. Modern NAS methods use techniques like early stopping, surrogate modeling, and weight-sharing to estimate performance efficiently.
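A minimal sketch of early stopping as a performance estimator: rather than training each candidate to convergence, train for only a few epochs and use the partial validation accuracy as a cheap proxy for the final ranking. The simulated learning curve below stands in for real training:

```python
import random

def training_curve(arch, epochs, rng):
    """Toy stand-in for real training: returns a simulated validation
    accuracy per epoch. A real system would train the network."""
    ceiling = 0.5 + 0.04 * arch["num_layers"]  # bigger models plateau higher
    return [ceiling * (1 - 0.5 ** (e + 1)) + rng.gauss(0, 0.005)
            for e in range(epochs)]

def estimate_performance(arch, budget=3, rng=None):
    """Early stopping: run only `budget` epochs and take the last
    partial-accuracy reading as a proxy for final accuracy."""
    rng = rng or random.Random(0)
    return training_curve(arch, budget, rng)[-1]
```

The estimate is noisier than full training, but if it preserves the relative ranking of candidates, the search strategy can still pick strong architectures at a fraction of the cost.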
When choosing an NAS method, it’s important to balance the resources you have available with performance goals. NAS methods like reinforcement learning and evolutionary algorithms tend to have strong performance, but remain computationally expensive, as they require thousands of GPU-hours to train and evaluate large numbers of potential architectures. More efficient options, like gradient-based NAS, approximate training outcomes and reduce the total number of models the algorithm needs to evaluate, significantly reducing computational costs.
In recent years, hybrid approaches have become more popular for NAS search strategies. These combination approaches can capitalize on the strength of multiple methods, helping to effectively design neural networks that can solve complex problems.
For example, reinforcement and evolutionary learning hybrids (EvoRL) utilize reinforcement learning combined with evolutionary selection methods. Reinforcement learning helps guide exploration, and evolutionary selection methods preserve and refine high-performing architectures using crossover and mutation methods. Another hybrid method combines gradient-descent optimization with evolutionary methods (EST-NAS), using gradient-based capabilities to identify promising architectures, then capitalizing on evolutionary approaches to explore alternatives and enhance efficiency at a lower computational cost.
You can use several AI tools for NAS, including Vertex AI Neural Architecture Search and Microsoft Archai.
Vertex AI Neural Architecture Search is an optimization tool from Google Cloud that automates the design and fine-tuning of machine learning models. It features built-in search and performance estimation strategies designed to help you reliably discover optimal architectures, even at an enterprise-level scale. You can set constraints such as latency, memory, and power requirements, and the algorithm will find the most accurate model within these bounds.
Archai is a NAS framework by Microsoft that specializes in efficient model design with a focus on reproducibility. It provides methods for you to specify preferences across the performance, latency, and hardware constraints of your model. Additionally, you can take advantage of several methods by combining techniques into hybrid models.
You can start building foundational knowledge needed to employ NAS methods by exploring the fundamentals of neural network architectures, including CNN, RNN, GNN, and transformer-based designs. Following this, exploring search space options, such as deep neural networks (DNNs), cell-based spaces, and topology-based spaces, can help you learn to identify the best structure for your NAS model. This, when combined with learning about performance estimation strategies, such as weight sharing and surrogate models, can provide a strong foundation for experimenting with different design strategies.
Once you have foundational knowledge, you can explore tools such as NAS-Bench-101, which is a benchmark data set for comparing NAS algorithms. You can use this as a starting point to test and evaluate new approaches in a controlled environment. NAS-Bench-101 includes over five million trained models, each with recorded training, validation, and test results [1]. By using this, you can test new NAS methods and experiment with optimization algorithms without having to start from scratch, making it a powerful way to build your understanding and expertise within NAS.
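The idea behind a tabular benchmark like NAS-Bench-101 can be illustrated with a toy lookup table: because results were precomputed once, evaluating a search method costs a dictionary lookup rather than GPU-hours. The architectures and numbers below are invented for illustration; the real benchmark has its own API and data files.

```python
import random

# Toy tabular benchmark in the spirit of NAS-Bench-101: metrics were
# "precomputed" once, so evaluating an architecture is a dictionary
# lookup instead of hours of training. All entries here are invented.
BENCHMARK = {
    ("conv3x3", "conv3x3", "maxpool"): {"val_acc": 0.912, "train_time_s": 840},
    ("conv3x3", "conv1x1", "maxpool"): {"val_acc": 0.897, "train_time_s": 610},
    ("conv1x1", "conv1x1", "maxpool"): {"val_acc": 0.871, "train_time_s": 430},
}

def query(ops):
    """Return the precomputed metrics for an architecture spec."""
    return BENCHMARK[tuple(ops)]

def random_search(candidates, trials):
    """Test a simple search method against the benchmark: sample some
    candidates and keep the one with the best recorded accuracy."""
    sampled = random.sample(list(candidates), trials)
    best = max(sampled, key=lambda ops: query(ops)["val_acc"])
    return best, query(best)
```

This is why tabular benchmarks make NAS research reproducible: every method is scored against the same fixed table, so comparisons don't depend on anyone retraining millions of models.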
Explore how to design different types of deep learning algorithms and expand your AI skills with a subscription to our LinkedIn newsletter, Career Chat. Then, check out the following resources to keep learning:
Take the quiz: AI Career Quiz: Is It Right for You? Find Your Role
Watch on YouTube: Master Deep Learning Fundamentals with Andrew Ng on Coursera
Hear from experts: Bots & Blueprints: 6 Questions with a Software Architect and AI Developer
Whether you want to develop a new skill, get comfortable with an in-demand technology, or advance your abilities, keep growing with a Coursera Plus subscription. You’ll get access to over 10,000 flexible courses.
1. arXiv. “NAS-Bench-101: Towards Reproducible Neural Architecture Search,” https://arxiv.org/abs/1902.09635. Accessed October 28, 2025.
Editorial Team
This content has been made available for informational purposes only. Learners are advised to conduct additional research to ensure that courses and other credentials pursued meet their personal, professional, and financial goals.