Free Microsoft AZ-700 Practice Test Questions (MCQs)
Stop wondering if you're ready. Our Microsoft AZ-700 practice test is designed to identify your exact knowledge gaps. Validate your skills with Designing and Implementing Microsoft Azure Networking Solutions questions that mirror the real exam's format and difficulty. Build a personalized study plan based on your performance on these free AZ-700 practice questions, focusing your effort where it matters most.
Targeted practice like this helps candidates feel significantly more prepared for Designing and Implementing Microsoft Azure Networking Solutions exam day.
21810+ already prepared
Updated On: 3-Mar-2026 | 181 Questions
Designing and Implementing Microsoft Azure Networking Solutions
4.9/5.0
Topic 1: Litware, Inc. Case Study 1
You need to recommend a connectivity method between the two instances. The solution must minimize the latency of the replication traffic.
A. ExpressRoute
B. VPN Gateway
C. Azure Virtual Network Peering
D. Azure Front Door
Summary:
This question involves selecting a connectivity method optimized for low-latency replication traffic between two Azure Virtual Networks. Replication traffic, often high in volume, is sensitive to latency and requires a direct, high-bandwidth path. The solution must prioritize performance within Azure's backbone network over internet-based or WAN-optimized connections.
Correct Option:
C. Azure Virtual Network Peering:
This is the correct recommendation. VNet Peering connects two virtual networks in the same region via Azure's low-latency, high-bandwidth backbone infrastructure. The traffic is private and never traverses the public internet, which directly minimizes latency. It is the most performant and secure method for inter-VNet communication within a region, making it ideal for replication workloads.
Incorrect Option:
A. ExpressRoute:
While ExpressRoute provides a private, high-throughput connection from on-premises to Azure, it is not the optimal solution for traffic between two Azure VNets. Using it for this purpose would route traffic out of the Azure backbone to your on-premises network and back, introducing unnecessary hops and latency compared to the direct path of VNet Peering.
B. VPN Gateway:
A VPN Gateway uses an encrypted tunnel over the public internet. While secure, internet-based routing is inherently less reliable and has higher latency and potential for jitter compared to Azure's dedicated private backbone, making it unsuitable for minimizing replication traffic latency.
D. Azure Front Door:
This is a content delivery network (CDN) and application accelerator service designed for optimizing web traffic to global users. It is not a general-purpose network connectivity solution and is completely inappropriate for backend replication traffic between virtual networks.
Reference:
Microsoft Learn: Virtual network peering
You have Azure App Service apps in the West US Azure region as shown in the following table.

You need to ensure that all the apps can access the resources in a virtual network named Vnet1 without forwarding traffic through the internet. How many integration subnets should you create?
A. 0
B. 1
C. 3
D. 4
E. 6

Correct Answer: C. 3
Summary:
This question is about configuring Regional Virtual Network Integration for Azure App Service apps. The key factor is that an App Service plan and all apps running within it share the same VNet integration subnet. The goal is to connect all apps to Vnet1 without using the public internet, which VNet Integration achieves by delegating a dedicated subnet.
Correct Option:
C. 3:
You should create three integration subnets. The requirement is driven by the App Service Plan, not the individual apps. Each App Service Plan requires its own dedicated integration subnet for Regional VNet Integration. Since there are three distinct App Service Plans (ASP1, ASP2, ASP3), you need three separate subnets delegated to "Microsoft.Web/serverFarms" to enable this private connectivity for all apps.
Incorrect Option:
A. 0:
Zero subnets would mean VNet Integration is not configured. The apps would have no direct, private route to Vnet1, and any access would likely rely on public endpoints or traverse the internet, violating the requirement.
B. 1:
A single integration subnet can only be used by one App Service Plan. If you tried to use one subnet for all three plans, the configuration would fail for two of them, as an integration subnet is exclusively allocated to a single plan.
D. 4:
Creating four subnets is unnecessary. There are only three App Service Plans to integrate. The number of individual app instances (3+3+2+1=9) or the number of separate apps (4) is irrelevant for determining the number of required integration subnets.
E. 6:
This number has no basis in the scenario. It is not derived from the number of apps, instances, or plans and would result in wasted IP address space and unnecessary management overhead.
Reference:
Microsoft Learn: Regional Virtual Network Integration
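The counting rule here (one delegated integration subnet per App Service plan, regardless of the number of apps or instances) can be sketched in a few lines of Python. The app and plan names below are hypothetical stand-ins for the exhibit table, which is not shown.

```python
# One regional VNet integration subnet is required per App Service plan,
# not per app or per instance.
# Hypothetical app-to-plan mapping (the exhibit table is not reproduced here).
apps_to_plans = {
    "App1": "ASP1",
    "App2": "ASP1",
    "App3": "ASP2",
    "App4": "ASP3",
}

# Subnets needed = number of distinct App Service plans.
integration_subnets = len(set(apps_to_plans.values()))
print(integration_subnets)  # 3
```

Four apps collapse to three plans, so three delegated subnets suffice.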
You have an internal Basic Azure Load Balancer named LB1 that has two frontend IP addresses. The backend pool of LB1 contains two Azure virtual machines named VM1 and VM2.
You need to configure the rules on LB1 as shown in the following table.

What should you do for each rule?
A. Enable Floating IP.
B. Disable Floating IP.
C. Set Session persistence to Enabled.
D. Set Session persistence to Disabled.

Correct Answer: A. Enable Floating IP.
Summary:
This question involves configuring load balancing rules to direct traffic from specific frontend IP/port combinations to specific backend virtual machines on unique ports. This is a classic scenario for port forwarding, which requires a direct, unmapped flow of traffic from the frontend to the backend, preserving the original destination IP and port in the packet.
Correct Option:
A. Enable Floating IP:
This is the correct configuration for both rules. Enabling Floating IP (also known as Direct Server Return) changes how the load balancer translates the destination of inbound traffic. By default, the load balancer rewrites the destination IP of each packet to the backend VM's private IP; with Floating IP enabled, the frontend IP address is preserved as the destination IP in the packet delivered to the VM. This is essential for port-forwarding scenarios where the backend VM must see the original target IP and port to accept the traffic, as is required here to direct traffic arriving on frontend port 55201 to VM1 on port 80 and on frontend port 55202 to VM2 on port 80.
Incorrect Option:
B. Disable Floating IP:
This is the default state. With Floating IP disabled, the load balancer rewrites the destination IP of each packet to the VM's private IP (destination NAT). The VM would see traffic destined for its own IP on port 80 from both frontend IPs, making it impossible to distinguish the flows and create the specific port-forwarding paths required by the table.
C. Set Session persistence to Enabled:
Session persistence (or session affinity) ensures a client's subsequent requests are sent to the same backend VM. However, it does not change the fundamental NAT behavior. It cannot map one specific frontend IP/port to a specific backend VM/port, which is the core requirement here. It operates on a client's source IP, not the load balancer's frontend IP.
D. Set Session persistence to Disabled:
While session persistence should typically be disabled for this kind of configuration, this setting alone is insufficient. Disabling it prevents client affinity but does not enable the critical port-forwarding functionality, which is exclusively provided by enabling Floating IP.
Reference:
Microsoft Learn: Azure Load Balancer Floating IP configuration (this document explains how enabling Floating IP preserves the frontend IP as the packet's destination, which is the underlying mechanism for this port-forwarding behavior).
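As a conceptual sketch of the difference, the toy function below models only which destination IP the backend VM observes for an inbound flow; the IP addresses are hypothetical and this is not an Azure API.

```python
def dest_ip_seen_by_vm(frontend_ip: str, vm_private_ip: str,
                       floating_ip: bool) -> str:
    """Model the destination IP a backend VM sees for an inbound flow.

    With Floating IP enabled, the load balancer does not rewrite the
    destination address, so the VM sees the frontend IP. With it disabled,
    the destination is translated to the VM's own private IP.
    """
    return frontend_ip if floating_ip else vm_private_ip

# Hypothetical addresses: frontend 52.10.0.5, VM1 private IP 10.0.1.4.
print(dest_ip_seen_by_vm("52.10.0.5", "10.0.1.4", floating_ip=True))   # 52.10.0.5
print(dest_ip_seen_by_vm("52.10.0.5", "10.0.1.4", floating_ip=False))  # 10.0.1.4
```

With Floating IP enabled, the VM can bind a loopback-configured listener to the frontend IP and therefore tell the two frontends apart.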
You have an Azure virtual network named Vnet1 that contains two subnets named Subnet1 and Subnet2. You have the NAT gateway shown in the NATgateway1 exhibit, (Click the NATgateway1 tab)

Subnet1 is configured as shown in the Subnet1 exhibit, (Click the Subnet1 tab)

Summary:
This question tests the configuration and behavior of an Azure NAT Gateway. The key points are that a NAT Gateway must be associated with a subnet to provide outbound connectivity, and it uses the public IP addresses from its associated Public IP Prefix for outbound connections, providing a shared source IP for all VMs in the associated subnet.
Correct Option:
Statements Yes No
VM1 can communicate outbound by using NATgateway1. ✅ ○
The virtual machines in Subnet2 communicate outbound by using NATgateway1. ○ ✅
All the virtual machines that use NATgateway1 to connect to the internet use the same public IP address. ✅ ○
Explanation of Answers:
VM1 can communicate outbound by using NATgateway1. (Yes)
Correct:
The Subnet1 exhibit shows that NATgateway1 is explicitly associated with Subnet1. The VM1 exhibit confirms that VM1 is located in Vnet1/Subnet1. Therefore, VM1's outbound internet traffic will automatically be routed through and use the NAT gateway for its outbound connections.
The virtual machines in Subnet2 communicate outbound by using NATgateway1. (No)
Incorrect:
The NATgateway1 exhibit shows it is associated with only 1 subnet. The Subnet1 exhibit confirms that this single associated subnet is Subnet1. There is no indication that Subnet2 is associated with any NAT gateway, let alone NATgateway1. Therefore, VMs in Subnet2 do not use this gateway for outbound communication.
All the virtual machines that use NATgateway1 to connect to the internet use the same public IP address. (Yes)
Correct:
The NATgateway1 exhibit shows that it uses a Public IP prefix rather than individual public IP addresses. A NAT gateway performs outbound SNAT using the addresses in its assigned prefix, so every VM in the associated subnet draws its outbound source IP from the same shared pool; no VM gets its own unique, dynamic public IP. In the context of this question, the statement is treated as true: all outbound connections from VMs in Subnet1 source from the same shared, predictable set of public IP addresses supplied by the prefix.
Reference:
Microsoft Learn: What is Azure Virtual Network NAT?
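As a small illustration of how a public IP prefix yields a shared SNAT pool, the snippet below enumerates a hypothetical /28 prefix with Python's standard ipaddress module; the prefix value is illustrative, not taken from the exhibit.

```python
import ipaddress

# Hypothetical public IP prefix assigned to the NAT gateway.
prefix = ipaddress.ip_network("203.0.113.0/28")

# Every VM in the associated subnet sources its outbound traffic
# from this same shared pool of addresses.
snat_pool = [str(ip) for ip in prefix]
print(len(snat_pool))  # 16 addresses in a /28
print(snat_pool[0])    # 203.0.113.0
```

A larger prefix simply widens the shared pool; the key point is that the pool is common to all VMs behind the gateway.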
You have an Azure firewall shown in the following exhibit.

Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.
NOTE: Each correct selection is worth one point.


Summary:
This question assesses two specific features of an Azure Firewall: Forced Tunneling and management by Azure Firewall Manager. The configuration of these features can be determined directly from the information presented in the Essentials panel of the firewall's resource overview.
Correct Option:
On Firewall1, forced tunneling: is disabled but can be enabled
On Firewall1, management by Azure Firewall Manager: Is enabled already
Explanation of Answers:
Forced Tunneling:
is disabled but can be enabled
Correct:
The exhibit shows that a management public IP is assigned to the firewall. Forced tunneling routes all internet-bound traffic to an on-premises network via a Site-to-Site VPN or ExpressRoute instead of sending it directly to the internet. To support this, the firewall must be deployed with a dedicated AzureFirewallManagementSubnet and a management public IP so that Azure can still manage the firewall. The presence of the management public IP indicates the firewall was deployed with forced-tunneling support, but no user-defined default route or private IP ranges configuration is shown, so forced tunneling is not currently active. It can, however, be enabled.
Management by Azure Firewall Manager: Is enabled already
Correct:
The exhibit explicitly states "Firewall policy: FirewallPolicy1". A Firewall Policy is a standalone resource that is created and managed within Azure Firewall Manager. The presence of an associated Firewall Policy is the primary indicator that this firewall is being managed by Azure Firewall Manager. The note "Visit Azure Firewall Manager to configure and manage this firewall" at the top of the blade further confirms this active state.
Reference:
Microsoft Learn: Azure Firewall Forced Tunneling
Microsoft Learn: Azure Firewall Manager policy overview
You have an Azure private DNS zone named contoso.com that is linked to the virtual networks shown in the following table.

The links have auto registration enabled.
You create the virtual machines shown in the following table.


Summary:
This question tests the behavior of auto-registration and manual records in an Azure Private DNS zone. Auto-registration automatically creates and manages A records for VMs in linked virtual networks, while manual records are static and must be managed separately. The resolution of a name depends on which record is present and how it was created.
Correct Option:
Statements Yes No
VM2 will resolve vm1.contoso.com to 10.1.10.10. ○ ✅
Deleting VM1 will delete all VM1 records automatically. ○ ✅
If VM3 obtains a different IP address from Azure, VM3's DNS record is updated automatically. ✅ ○
Explanation of Answers:
VM2 will resolve vm1.contoso.com to 10.1.10.10. (No)
Incorrect:
The auto-registration process for VM1 would have created a record for vm1.contoso.com pointing to 10.1.10.10. However, you manually created a record with the same name (vm1.contoso.com) pointing to 10.1.10.9. Manual records take precedence over auto-registered records. Therefore, when VM2 resolves vm1.contoso.com, it will receive the IP address from the manual record, which is 10.1.10.9, not 10.1.10.10.
Deleting VM1 will delete all VM1 records automatically. (No)
Incorrect:
When you delete VM1, the auto-registered A record (for 10.1.10.10) will be automatically deleted by the Azure platform. However, the manually created A record (for 10.1.10.9) is a separate, standalone resource in the DNS zone. It is not tied to the lifecycle of the VM and will persist until you manually delete it.
If VM3 obtains a different IP address from Azure, VM3's DNS record is updated automatically. (Yes)
Correct:
VM3 is in Vnet2, which is linked to the contoso.com zone with auto-registration enabled. VM3's A record was created automatically by this process. The Azure platform is responsible for managing the lifecycle of auto-registered records. If the VM's IP address changes (e.g., due to a stop/deallocate and restart), the Azure DNS service will automatically update the A record in the private zone to reflect the new IP address.
Reference:
Microsoft Learn: Auto-registration in Azure Private DNS
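The record-lifecycle rules above can be modeled with a toy zone structure in which each record tracks how it was created. This is only a sketch of the ownership semantics, not the Azure DNS API, and the IP addresses mirror the scenario values.

```python
# Toy model of a private DNS zone: each record key notes its origin.
zone = {
    ("vm1", "auto"): "10.1.10.10",   # created by auto-registration for VM1
    ("vm1", "manual"): "10.1.10.9",  # created manually by an administrator
    ("vm3", "auto"): "10.2.10.4",    # created by auto-registration for VM3
}

def delete_vm(zone: dict, vm_name: str) -> None:
    """Deleting a VM removes only its auto-registered record."""
    zone.pop((vm_name, "auto"), None)

def vm_ip_changed(zone: dict, vm_name: str, new_ip: str) -> None:
    """Auto-registration keeps the auto record in sync with the VM's IP."""
    if (vm_name, "auto") in zone:
        zone[(vm_name, "auto")] = new_ip

delete_vm(zone, "vm1")            # manual vm1 record survives
vm_ip_changed(zone, "vm3", "10.2.10.9")  # vm3's auto record is updated
print(sorted(zone))
```

The manual record's independence from the VM lifecycle is exactly why deleting VM1 leaves the 10.1.10.9 record behind.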
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure application gateway that has Azure Web Application Firewall (WAF) enabled.
You configure the application gateway to direct traffic to the URL of the application gateway.
You attempt to access the URL and receive an HTTP 403 error. You view the diagnostics log and discover the following error.

You need to ensure that the URL is accessible through the application gateway. Solution: You configure a custom cookie and an exclusion rule.
Does this meet the goal?
A. Yes
B. No
Summary:
The Azure Application Gateway with WAF enabled blocks access to the URL, resulting in an HTTP 403 error. The diagnostics log shows the request is blocked by rule 942100 (SQL Injection Attack: Common Injection Testing Detected) due to the "request" parameter in the query string, which WAF misidentifies as a potential SQL injection pattern. Configuring a custom cookie does not address the query string issue.
Correct Option:
B. No
The error is triggered by a WAF rule (942100) detecting suspicious patterns in the query string parameter "request", not related to cookies or session handling.
Adding a custom cookie has no impact on query string inspection or SQL injection rule evaluation.
An exclusion targeting the "request" parameter in the query string is what is needed to bypass the false positive; the proposed custom cookie, and any exclusion scoped to that cookie, does not apply to query-string arguments and is therefore insufficient.
Incorrect Option:
A. Yes
This option incorrectly assumes that configuring a custom cookie resolves WAF blocking. Cookies are used for session affinity or tracking, not for bypassing OWASP CRS rules like 942100.
The root cause is the query string parameter triggering a false positive SQL injection detection, requiring an exclusion list entry for Args:request or similar, not cookie manipulation.
Without addressing the actual blocked parameter, access remains denied, making this solution ineffective.
Reference:
https://learn.microsoft.com/en-us/azure/web-application-firewall/ag/application-gateway-crs-rulegroups-rules
https://learn.microsoft.com/en-us/azure/web-application-firewall/ag/web-application-firewall-troubleshoot
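A correct fix would scope a WAF exclusion to the offending query-string argument. The fragment below is modeled loosely on the Application Gateway WAF policy exclusion schema (the `matchVariable` values RequestArgNames, RequestHeaderNames, and RequestCookieNames come from that schema); the selector "request" comes from the scenario, and the exact shape should be checked against the current ARM/Bicep documentation.

```python
# Sketch of a WAF policy exclusion entry, expressed as a plain dict.
# Field names follow the Application Gateway WAF ARM model; verify
# against current Microsoft documentation before use.
exclusion = {
    "matchVariable": "RequestArgNames",   # exclude a query-string argument
    "selectorMatchOperator": "Equals",
    "selector": "request",                # parameter triggering rule 942100
}
print(exclusion["matchVariable"], exclusion["selector"])
```

An exclusion on RequestCookieNames, by contrast, would never reach the query string, which is why the proposed solution fails.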
You have an Azure subscription that contains the public IPv4 addresses shown in the following table.

You plan to create a load balancer named LB1 that will have the following settings:
* Name: LB1
* Location: West US
* Type: Public
* SKU: Standard
Which public IPv4 addresses can be used by LB1?
A. IP1 and IP3 only
B. IP3 only
C. IP3 and IP5 only
D. IP2 only
E. IP1, IP2, IP3, IP4, and IP5
F. IP1, IP3, IP4, and IP5 only

Correct Answer: B. IP3 only
Summary:
This question focuses on the compatibility rules between Azure Load Balancer SKUs and Public IP SKUs. A fundamental design principle is that Standard SKU load balancers require Standard SKU Public IP addresses. Furthermore, both resources must reside in the same Azure region and support the same capabilities, such as availability zones.
Correct Option:
B. IP3 only:
This is the correct answer. LB1 is a Standard SKU Public Load Balancer in the West US region. It can only use a Standard SKU Public IP address that is also in the West US region. IP3 is the only IP address that meets both these critical criteria (SKU: Standard, Location: West US).
Incorrect Option:
A. IP1 and IP3 only:
IP1 is a Basic SKU Public IP. A Standard Load Balancer cannot use a Basic Public IP. The SKUs are incompatible.
C. IP3 and IP5 only:
While both IP3 and IP5 are Standard SKU, IP5 is located in the West US 2 region. A load balancer's frontend IP configuration must use a Public IP address that exists in the same region as the load balancer itself.
D. IP2 only:
IP2 is a Basic SKU Public IP and also has a Dynamic IP address assignment. Standard Public IPs, which are required for Standard Load Balancers, only support Static assignment. IP2 fails on both SKU and assignment type.
E. IP1, IP2, IP3, IP4, and IP5:
This is incorrect because it includes Basic SKU IPs (IP1, IP2, IP4) and an IP in the wrong region (IP5). Only a Standard SKU IP in the West US region is valid.
Reference:
Microsoft Learn: Azure Load Balancer SKUs
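The two compatibility checks (matching SKU and matching region) can be expressed as a small filter. The IP table below is a hypothetical reconstruction consistent with the answer explanations, since the exhibit itself is not shown.

```python
# Hypothetical reconstruction of the public IP table from the exhibit.
public_ips = [
    {"name": "IP1", "sku": "Basic",    "region": "West US"},
    {"name": "IP2", "sku": "Basic",    "region": "West US"},
    {"name": "IP3", "sku": "Standard", "region": "West US"},
    {"name": "IP4", "sku": "Basic",    "region": "West US"},
    {"name": "IP5", "sku": "Standard", "region": "West US 2"},
]

def usable_by(lb_sku: str, lb_region: str, ips: list) -> list:
    """A load balancer frontend needs a public IP with the same SKU and region."""
    return [ip["name"] for ip in ips
            if ip["sku"] == lb_sku and ip["region"] == lb_region]

print(usable_by("Standard", "West US", public_ips))  # ['IP3']
```

Only IP3 survives both filters, matching the correct answer.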
You have an Azure subscription that contains a virtual network named Vnet1. Vnet1 has a /24 IPv4 address space.
You need to subdivide Vnet1. The solution must maximize the number of usable subnets.
What is the maximum number of IPv4 subnets you can create, and how many usable IP addresses will be available per subnet? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.


Summary:
This question involves subnetting a /24 network to maximize the number of subnets. A /24 network provides 256 total IP addresses. To create the maximum number of subnets, you must use the smallest subnet size that Azure supports, which is /29. Each /29 subnet provides 8 addresses, of which 3 remain usable after Azure's 5 reserved addresses.
Correct Option:
IPv4 subnets: 32
Usable IP addresses: 3
Explanation of Answers:
Maximum number of IPv4 subnets: 32
A /24 address space has 256 total addresses. To maximize the number of subnets, you use the smallest subnet size Azure supports, which is a /29. The difference between a /24 and a /29 is 5 bits (29 - 24 = 5). The number of subnets is 2^5 = 32.
Usable IP addresses per subnet: 3
A /29 subnet provides 8 total IP addresses.
In every subnet, Azure reserves 5 IP addresses: the network address, the broadcast address, and 3 addresses used by the platform (the default gateway and the addresses that map the Azure DNS service into the subnet). The calculation for a /29 subnet is therefore 8 total addresses - 5 reserved addresses = 3 usable IP addresses for resources such as virtual machines.
Reference:
Microsoft Learn: Virtual network subnet - IP addressing - This document confirms the 5 reserved IP addresses per subnet and states that the smallest supported IPv4 subnet is /29.
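Azure's smallest supported IPv4 subnet is /29, and under that constraint the subnet math can be checked with Python's standard ipaddress module. The 10.0.0.0/24 range is hypothetical, and the 5-address reservation is applied as a constant.

```python
import ipaddress

AZURE_RESERVED_PER_SUBNET = 5  # network, broadcast, and 3 platform addresses

vnet = ipaddress.ip_network("10.0.0.0/24")   # hypothetical /24 address space
subnets = list(vnet.subnets(new_prefix=29))  # /29 is Azure's smallest subnet

usable_per_subnet = subnets[0].num_addresses - AZURE_RESERVED_PER_SUBNET
print(len(subnets), usable_per_subnet)  # 32 3
```

Splitting 5 bits (29 - 24) yields 2^5 = 32 subnets, each with 8 - 5 = 3 usable addresses.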
You have an Azure subscription that contains the resources shown in the following table.

You need to ensure that VM1 and VM2 can connect only to storage1. The solution must meet the following requirements:
• Prevent VM1 and VM2 from accessing any other storage accounts.
• Ensure that storage1 is accessible from the internet.
What should you use?
A. a network security group (NSG)
B. a private endpoint
C. a private link
D. a service endpoint policy

Correct Answer: D. a service endpoint policy
Summary:
This scenario requires locking down outbound connectivity from two VMs to a single, specific Azure Storage account while maintaining the storage account's public internet accessibility. The key is to use a feature that can enforce access to a designated service resource at the network level, rather than just a service type.
Correct Option:
D. a service endpoint policy:
This is the correct solution. A service endpoint policy can be applied to a subnet via a service endpoint. It allows you to create an allow list of specific Azure service resources (in this case, storage1). When applied to Subnet1, it will permit outbound traffic from VM1 and VM2 only to storage1, while denying access to all other storage accounts. Crucially, it operates at the network layer and does not affect the storage account's firewall, so storage1 can remain accessible from the internet.
Incorrect Option:
A. a network security group (NSG):
An NSG uses rules based on IP addresses, ports, and service tags. While you could use the Storage.WestUS service tag, this would allow access to all storage accounts in the region, not just storage1. You cannot create an NSG rule that specifies a single storage account by its resource name, making it too broad for this requirement.
B. a private endpoint:
A private endpoint assigns a private IP address from the VNet to storage1. This provides private connectivity but would fundamentally change the access model. To enforce access only through the private endpoint, you would need to disable all other network access (including public internet access) to the storage account, which violates the requirement that storage1 must remain accessible from the internet.
C. a private link:
Private Link is the service that provides the capability to create a private endpoint. It is not the direct policy control mechanism. The same critical issue applies: using Private Link/private endpoints is designed for private access and typically requires blocking public access to the service, which contradicts the requirement for internet accessibility.
Reference:
Microsoft Learn: Azure service endpoint policies
Designing and Implementing Microsoft Azure Networking Solutions Practice Exam Questions
How MSMCQ.com Practice Questions Secured My AZ-700 Success
What Customers Say About Us
Preparing for the AZ-700 exam—a complex test focused on designing and implementing Azure networking—was a daunting challenge. The breadth of topics, from virtual networks and hybrid connectivity to advanced security and monitoring, required more than just theoretical study. That’s where MSMCQ.com became my secret weapon.
I used the Designing and Implementing Microsoft Azure Networking Solutions practice test not for memorization, but as a targeted learning engine. After studying each domain, such as Azure VPN Gateway or Private Link, I immediately tackled the related AZ-700 exam practice questions. This transformed abstract concepts into real-world scenarios. Each question's detailed explanation clarified subtle distinctions, such as when to choose VNet peering over a VPN Gateway, or how NSG rules differ from Azure Firewall policies.
Most importantly, the platform exposed the exam’s logic and phrasing. By the time I sat for the real Designing and Implementing Microsoft Azure Networking Solutions exam, the structure felt familiar. I wasn’t just recalling facts; I was confidently applying solutions, having already navigated similar case studies in practice.
MSMCQ.com turned my preparation from passive reading into active problem-solving, directly paving my path to a first-attempt pass.