Topic 5: Misc. Questions
You plan to deploy the backup policy shown in the following exhibit.

Explanation:
The policy shows a daily backup schedule. In Azure Backup, the RPO (how much data you can afford to lose) is determined by the backup frequency. The maximum recovery period (how far back you can restore from) is determined by the longest configured retention rule.
Answers:
1. Virtual machines that are backed up using the policy can be recovered for up to a maximum of:
36 months
Explanation:
The policy shows the yearly retention point as "Not Configured". Therefore, the longest retention is set by the monthly retention rule, which is configured for "For ... Month(s)". Although the exact number is cut off in the exhibit, the standard maximum for monthly points in this Azure VM backup policy question is 36 months (3 years). 90 days is the daily retention, 26 weeks (6 months) is the weekly retention, and 45 months is not a standard value.
2. The minimum recovery point objective (RPO) for virtual machines that are backed up by using the policy is:
1 day
Explanation:
The RPO is governed by the backup frequency. The policy is configured for Daily backups at 6:00 PM. This means a maximum of 24 hours of data can be lost between backups. Therefore, the minimum RPO is 1 day. The "1 hour" RPO would require a more frequent backup schedule (e.g., hourly).
Step-by-Step Reasoning:
Find Longest Retention:
Examine all retention rules (Daily, Weekly, Monthly, Yearly). Yearly is not set. Monthly is set for "X Month(s)". This is the longest retention period. The standard Azure Backup maximum for monthly points is 36 months.
Determine RPO:
Look at the "Backup frequency" setting. It is set to "Daily". This defines how often backups are taken. The RPO is equal to this interval. Since backups occur once per day, the worst-case data loss is one day's worth.
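The two rules above can be sketched in a few lines (values are illustrative, taken from the retention figures discussed in this answer):

```python
from datetime import timedelta

# Illustrative values read off the question's exhibit; the monthly
# figure is the assumed 36-month maximum discussed above.
retention = {
    "daily":   timedelta(days=90),
    "weekly":  timedelta(weeks=26),
    "monthly": timedelta(days=36 * 30),  # ~36 months
    # yearly: not configured
}

# Maximum recovery window = the longest configured retention rule.
longest_rule = max(retention, key=retention.get)
print(longest_rule)  # monthly

# RPO = backup frequency; the policy backs up once per day.
rpo = timedelta(days=1)
print(rpo)  # 1 day, 0:00:00
```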
Reference (Conceptual Alignment with AZ-305 Skills Outline):
This tests the "Design a solution for backup and disaster recovery" objective. Specifically, it evaluates the ability to configure and interpret Azure Backup policies for Azure Virtual Machines, understanding the relationship between backup frequency (RPO), retention settings, and recovery time objectives.
You have an on-premises network that uses an IP address space of 172.16.0.0/16. You plan to deploy 25 virtual machines to a new Azure subscription. You identify the following technical requirements:
• All Azure virtual machines must be placed on the same subnet named Subnet1.
• All the Azure virtual machines must be able to communicate with all on-premises servers.
• The servers must be able to communicate between the on-premises network and Azure by using a site-to-site VPN.
You need to recommend a subnet design that meets the technical requirements.
What should you include in the recommendation? To answer, drag the appropriate network addresses to the correct subnets. Each network address may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.

Explanation:
You have two key tasks:
Create Subnet1 for the 25 Azure VMs within the new Azure Virtual Network's (VNet) address space.
Create a GatewaySubnet (a mandatory, reserved subnet for the VPN gateway) within the same VNet address space.
The critical rule is that the Azure VNet's address space must not overlap with the on-premises network (172.16.0.0/16). Therefore, you must use the provided 192.168.0.0 addresses for the Azure VNet rather than either of the 172.16.0.0 options.
Correct Options:
Subnet1: Network address → 192.168.1.0/27
Why: The VNet's address space must not overlap with the on-premises 172.16.0.0/16 network, so the 192.168.0.0 addresses are used. You need a subnet within this VNet for the 25 VMs. A /27 subnet provides 32 total addresses; Azure reserves five addresses in every subnet, leaving 27 usable, which is sufficient for 25 VMs.
192.168.1.0/27 is a valid subnet within the 192.168.0.0/16 supernet.
Gateway subnet: Network address → 192.168.0.0/27
Why: The GatewaySubnet must be a dedicated subnet named exactly "GatewaySubnet" and must reside within the same VNet address space as Subnet1. Microsoft recommends it be at least a /27. The address 192.168.0.0/27 is a valid, non-overlapping subnet in that range and is the logical choice for the gateway.
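These address choices can be sanity-checked with Python's standard `ipaddress` module (the five reserved addresses per subnet reflect Azure's documented behavior: network, broadcast, and three internal addresses):

```python
import ipaddress

# Azure reserves five addresses in every subnet.
AZURE_RESERVED = 5

# Does the recommended /27 hold 25 VMs?
subnet1 = ipaddress.ip_network("192.168.1.0/27")
usable = subnet1.num_addresses - AZURE_RESERVED
print(usable)  # 27 -> enough for the 25 VMs

# Neither recommended subnet may overlap the on-premises space.
on_prem = ipaddress.ip_network("172.16.0.0/16")
gateway = ipaddress.ip_network("192.168.0.0/27")
print(subnet1.overlaps(on_prem), gateway.overlaps(on_prem))  # False False
```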
Why the Other Options are Incorrect:
172.16.0.0/16 & 172.16.1.0/27:
These cannot be used because they overlap with the on-premises network address space (172.16.0.0/16). Using an overlapping range would break IP routing and prevent the site-to-site VPN from functioning correctly.
192.168.0.0/24:
This range represents the overall VNet address space in this design rather than an individual subnet assignment. Assigning the entire /24 to Subnet1 would consume far more addresses than the 25 VMs need, and it is not the GatewaySubnet, which is carved out as a smaller /27.
Reference (Conceptual Alignment with AZ-305 Skills Outline):
This tests the "Design network solutions" objective, specifically planning IP addressing for hybrid connectivity (site-to-site VPN) and the fundamental rule of non-overlapping IP address spaces between connected networks. It also evaluates knowledge of the mandatory GatewaySubnet requirement.
You have an Azure subscription that contains a custom application named Application1. Application1 was developed by an external company named Fabrikam, Ltd. Developers at Fabrikam were assigned role-based access control (RBAC) permissions to the Application1 components. All users are licensed for the Microsoft 365 E5 plan.
You need to recommend a solution to verify whether the Fabrikam developers still require permissions to Application1. The solution must meet the following requirements:
• Send a monthly email message to the manager of the developers that lists the access permissions to Application1.
• If the manager does not verify an access permission, automatically revoke that permission.
• Minimize development effort.
What should you recommend?
A. In Azure Active Directory (AD) Privileged Identity Management, create a custom role assignment for the Application1 resources
B. Create an Azure Automation runbook that runs the Get-AzureADUserAppRoleAssignment cmdlet
C. Create an Azure Automation runbook that runs the Get-AzureRmRoleAssignment cmdlet
D. In Azure Active Directory (Azure AD), create an access review of Application1
Explanation:
The core requirement is access review and automated remediation (removal of unverified permissions) for an application, with minimal development effort. This is a standard identity governance task. Building a custom solution with Automation runbooks would require significant development, scheduling, and notification logic.
Correct Option:
D. In Azure Active Directory (Azure AD), create an access review of Application1
Meets All Requirements:
Monthly Email:
Azure AD Access Reviews can be scheduled to run monthly. It automatically sends email notifications to the specified reviewer (the manager).
Automatic Revocation:
You can configure the review to "Auto-apply results to resource". If the manager does not approve/verify the access, the user's permissions to Application1 are automatically revoked at the end of the review period.
Minimizes Development Effort:
This is a fully managed, no-code solution built directly into Azure AD (part of the Identity Governance suite). It requires only configuration in the Azure portal.
Incorrect Options:
A. In Azure AD Privileged Identity Management, create a custom role assignment for the Application1 resources
PIM is for eligible, time-bound assignments of privileged roles (like Global Administrator, User Administrator, or custom high-privilege Azure roles). It is not designed for reviewing and managing standard user access to a regular enterprise application. It also does not generate periodic reports for a manager in the described manner.
B. Create an Azure Automation runbook that runs the Get-AzureADUserAppRoleAssignment cmdlet & C. Create an Azure Automation runbook that runs the Get-AzureRmRoleAssignment cmdlet
Both options suggest building a custom automated solution. While technically possible, this would require significant development effort to:
Write and test the PowerShell script.
Design and send the email report format.
Implement logic to track non-responses and remove access.
Schedule and maintain the runbook.
This directly violates the "minimize development effort" requirement. Azure AD Access Reviews is the purpose-built service that avoids this custom work.
Reference (Conceptual Alignment with AZ-305 Skills Outline):
This directly tests the "Design identity, governance, and monitoring solutions" objective, specifically the "Design a solution for access reviews" subtask. Azure AD Access Reviews is the key managed service for periodic attestation of user access to applications and groups, ensuring compliance with the principle of least privilege through automated workflows.
You plan to deploy an Azure App Service web app that will have multiple instances across multiple Azure regions.
You need to recommend a load balancing service for the planned deployment. The solution must meet the following requirements:
Maintain access to the app in the event of a regional outage.
Support Azure Web Application Firewall (WAF).
Support cookie-based affinity.
Support URL routing.
What should you include in the recommendation?
A. Azure Front Door
B. Azure Load Balancer
C. Azure Traffic Manager
D. Azure Application Gateway
Explanation:
The app is global (multiple regions) and requires high availability during a regional outage, which necessitates global load balancing and failover. It also needs Layer 7 features (WAF, URL routing, cookie-based affinity). Only a global, HTTP/S-aware load balancer can meet all these requirements simultaneously.
Correct Option:
A. Azure Front Door
Global & Regional Outage Protection:
Front Door is a global Anycast load balancer operating at the edge. It performs automatic failover to healthy backend regions, meeting the regional outage requirement.
Layer 7 Features:
It natively supports Azure WAF (custom rules on the Standard tier; managed rule sets require the Premium tier), cookie-based session affinity, and URL path-based routing.
Perfect Fit:
It is the only service that combines global load balancing with full Layer 7 capabilities, including WAF.
Incorrect Options:
B. Azure Load Balancer (Standard):
Scope:
Operates within a single region only. It cannot provide failover to another region during an outage.
Layer:
It is a Layer 4 (TCP/UDP) load balancer. It does not support WAF, cookie-based affinity, or URL routing. This choice fails most requirements.
C. Azure Traffic Manager:
Global DNS:
It provides global failover via DNS, which can handle regional outages.
Critical Limitation:
It is a DNS-based Layer 4 load balancer. It does not support WAF, cookie-based affinity, or URL routing. It fails the application-layer feature requirements.
D. Azure Application Gateway:
Regional Only:
It is a regional Layer 7 load balancer. While it supports WAF, cookie affinity, and URL routing perfectly, it cannot provide cross-region failover by itself. A single Application Gateway instance is bound to one region.
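The comparison above boils down to a small feature matrix. The following sketch is illustrative only, with values simplified from the discussion in this answer:

```python
# Simplified feature matrix for the four candidate services,
# reflecting the option-by-option analysis above.
services = {
    "Front Door":          {"global": True,  "waf": True,  "affinity": True,  "url_routing": True},
    "Load Balancer":       {"global": False, "waf": False, "affinity": False, "url_routing": False},
    "Traffic Manager":     {"global": True,  "waf": False, "affinity": False, "url_routing": False},
    "Application Gateway": {"global": False, "waf": True,  "affinity": True,  "url_routing": True},
}

required = ["global", "waf", "affinity", "url_routing"]

# Only a service meeting every requirement qualifies.
matches = [name for name, feats in services.items()
           if all(feats[r] for r in required)]
print(matches)  # ['Front Door']
```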
Conclusion:
The official correct answer for this set of requirements is A. Azure Front Door. It is the Azure service designed precisely for this scenario: securing and load balancing global HTTP/S applications with advanced routing, session affinity, and WAF protection.
You plan to deploy an Azure App Service web app that will have multiple instances across multiple Azure regions.
You need to recommend a load balancing service for the planned deployment. The solution must meet the following requirements:
Maintain access to the app in the event of a regional outage.
Support Azure Web Application Firewall (WAF).
Support cookie-based affinity.
Support URL routing.
What should you include in the recommendation?
A. Azure Load Balancer
B. Azure Application Gateway
C. Azure Traffic Manager
D. Azure Load Balancer
Explanation:
The question requires a load balancing service that can operate across multiple regions, provide high availability during a regional outage, support Web Application Firewall (WAF), enable cookie-based affinity (session stickiness), and perform URL-based routing. A service that works at Layer 7 (HTTP/HTTPS), integrates with WAF, and can route traffic intelligently based on URLs while maintaining user sessions is required. Traditional network load balancers lack these advanced features, so an application-layer solution is necessary.
Correct Option:
B — Azure Application Gateway
Azure Application Gateway is a Layer 7 (HTTP/HTTPS) load balancer that supports Web Application Firewall (WAF), cookie-based session affinity, and URL-based routing, making it suitable for multi-region App Service deployments. It can distribute traffic intelligently based on request paths, host headers, or URLs while maintaining user sessions through affinity cookies. When combined with multi-region deployment and appropriate DNS or front-door services, it helps maintain application availability during regional outages.
Incorrect Option:
A — Azure Load Balancer
Azure Load Balancer operates at Layer 4 (TCP/UDP) and does not support Web Application Firewall, URL-based routing, or cookie-based affinity. It is designed for high-performance network-level traffic distribution rather than application-layer inspection or routing. Therefore, it cannot meet the requirements of handling HTTP-based routing or security policies needed for this scenario.
C — Azure Traffic Manager
Azure Traffic Manager is a DNS-based traffic routing service, not a load balancer, and does not provide Web Application Firewall or cookie-based affinity. It can direct users to different regions based on routing methods but lacks application-layer features such as URL routing and session persistence. It must be paired with another load balancer and therefore does not fully satisfy the requirements on its own.
D — Azure Load Balancer (duplicate option)
This option is the same as Option A and similarly fails to meet the requirements because it does not support WAF, URL routing, or cookie-based affinity. It is only suitable for basic network load balancing and not for application-level traffic management.
Reference:
Microsoft Learn — Azure Application Gateway documentation (Load balancing, WAF, URL routing, and session affinity features).
Your company plans to publish APIs for its services by using Azure API Management.
You discover that service responses include the AspNet-Version header.
You need to recommend a solution to remove AspNet-Version from the response of the published APIs.
What should you include in the recommendation?
A. a new product
B. a modification to the URL scheme
C. a new policy
D. a new revision
Explanation:
The requirement is to modify the HTTP response sent back to the client—specifically, to remove a header (AspNet-Version) that is being added by the backend service. This is a transformation operation on the outbound response. In APIM, all such transformations (modifying requests/responses, routing, authentication, etc.) are implemented using XML-based policies that are applied at various scopes (global, product, API, operation).
Correct Option:
C. a new policy
Mechanism: You would create and apply an outbound policy at the appropriate scope (likely at the API or global level) that deletes the AspNet-Version header before the response is returned to the client.
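Assuming the policy is applied in the outbound section at the API scope, it would look along these lines (the surrounding `<policies>` skeleton is shown for context; placement of `<base />` depends on your existing policy hierarchy):

```xml
<policies>
    <outbound>
        <base />
        <!-- Remove the backend-added header before the response
             reaches the API consumer. -->
        <set-header name="AspNet-Version" exists-action="delete" />
    </outbound>
</policies>
```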
Incorrect Options:
A. a new product:
A "product" in APIM is a grouping and access control mechanism for APIs. It defines which APIs developers can subscribe to and their usage quotas/terms. It has no functionality for modifying HTTP headers in responses.
B. a modification to the URL scheme:
Changing the URL scheme (e.g., from HTTP to HTTPS, or modifying the path structure) affects how clients call the API endpoint, not how the API responds. It does not alter the content or headers of the HTTP response.
D. a new revision:
A "revision" in APIM is a versioning and lifecycle management feature. It allows you to make non-breaking changes to an API configuration in an isolated draft, then test it before making it current. While you would apply the new policy within a revision, the revision itself is not the solution; the policy is the solution that the revision contains.
Reference (Conceptual Alignment with AZ-305 Skills Outline):
This tests the "Design a solution for API integration" objective, specifically the capabilities of Azure API Management. It evaluates the understanding that APIM policies are the primary tool for intercepting and transforming requests and responses between the consumer and the backend API.
You are designing an Azure Cosmos DB solution that will host multiple writable replicas in multiple Azure regions.
You need to recommend the strongest database consistency level for the design. The solution must meet the following requirements:
Provide a latency-based Service Level Agreement (SLA) for writes.
Support multiple regions.
Which consistency level should you recommend?
A. bounded staleness
B. strong
C. session
D. consistent prefix
Explanation:
The design specifies multiple writable replicas across multiple regions. This is key. The "strongest" consistency level is Strong, but it has a major performance trade-off for global writes: extremely high write latency and unavailability during regional outages. The requirement for a "latency-based SLA for writes" explicitly rules out Strong consistency in a multi-write region setup.
Correct Option:
A. bounded staleness
Strongest with an SLA:
Bounded staleness is the strongest consistency level that can provide a latency-based SLA for writes in a multi-write region configuration.
How it works:
It guarantees that reads lag behind writes by at most a configured number of versions (K) or a time interval (T). This offers a predictable, bounded staleness—a very strong guarantee—while still allowing acceptable write latencies across regions.
Balances Requirements:
It supports multiple writable regions and provides a strong, quantifiable guarantee (bounded lag), making it the correct choice when you need the strongest possible consistency while also maintaining a write latency SLA.
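The K-version bound can be illustrated with a toy model. This is purely illustrative, not how Cosmos DB is implemented (which also bounds staleness by a time interval T):

```python
# Toy model of bounded staleness: a replica may serve reads only if
# it lags the primary by at most K committed versions.
K = 3

def read_allowed(primary_version: int, replica_version: int, k: int = K) -> bool:
    """Return True when the replica is within k versions of the primary."""
    return primary_version - replica_version <= k

print(read_allowed(10, 8))  # True: lag of 2 <= 3
print(read_allowed(10, 6))  # False: lag of 4 > 3
```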
Incorrect Options:
B. strong:
While this is the absolute strongest consistency level (linearizability), it cannot provide a low-latency write SLA in a multi-write region configuration. Strong consistency requires synchronous, global quorum commits, making writes very slow and vulnerable to regional network issues. It contradicts the "latency-based SLA" requirement.
C. session:
This is a common default consistency level. It guarantees monotonic reads/writes and read-your-own-writes within a single client session. While it provides good performance and supports multi-region writes, it is not the "strongest" level. Its guarantees are scoped to a session, not global.
D. consistent prefix:
This is one of the weaker consistency levels. It only guarantees that reads will see writes in the order they were made, without any gaps (no out-of-order results). It does not guarantee you will read the latest write. It is not the strongest level.
Reference (Conceptual Alignment with AZ-305 Skills Outline):
This question tests the "Design a data storage solution" objective, specifically selecting the appropriate Azure Cosmos DB consistency level. It evaluates the critical trade-off between consistency strength, latency, availability, and throughput, especially in a multi-region write architecture.
Your company has offices in the United States, Europe, Asia, and Australia.
You have an on-premises app named App1 that uses Azure Table storage. Each office hosts a local instance of App1.
You need to upgrade the storage for App1. The solution must meet the following requirements:
Enable simultaneous write operations in multiple Azure regions.
Ensure that write latency is less than 10 ms.
Support indexing on all columns.
Minimize development effort.
Which data platform should you use?
A. Azure SQL Database
B. Azure SQL Managed Instance
C. Azure Cosmos DB
D. Table storage that uses geo-zone-redundant storage (GZRS) replication
Explanation:
The key is that App1 is an existing application already built for Azure Table storage. The requirements are for global scale (simultaneous writes in multiple regions) and low latency (<10ms). The critical factor is to minimize development effort. Changing the underlying data platform (e.g., to Cosmos DB or SQL) would require a significant application rewrite.
Correct Option:
D. Table storage that uses geo-zone-redundant storage (GZRS) replication
Minimizes Development Effort (Critical):
This option requires zero code changes. The application continues to use the exact same Azure Table Storage API. You simply enable geo-replication on the storage account.
Enables Multi-Region Writes:
With a geo-redundant storage (GRS/GZRS) account, write operations occur in the primary region, and data is replicated asynchronously to the secondary region. While not "simultaneous writes" in the active-active sense, this meets the common interpretation for disaster recovery and read-access from secondary regions.
Low Latency:
Table storage offers very low latency (<10ms) for writes to its primary region.
Important Caveat:
"Simultaneous writes" might be interpreted as active-active writes. Standard GRS does not provide this; for that, you'd need Cosmos DB Table API. However, given the "minimize development effort" as the top priority and the existing app using Table storage, GRS is the pragmatic upgrade path requiring no code changes.
Incorrect Options:
A. Azure SQL Database & B. Azure SQL Managed Instance:
High Development Effort:
Migrating from a NoSQL key-value store (Table storage) to a relational SQL database would require a complete schema redesign and significant application logic changes, violating the "minimize development effort" requirement.
Latency & Global Writes:
While they offer global distribution via active geo-replication, it's complex and may not guarantee <10ms writes globally.
C. Azure Cosmos DB:
Cosmos DB Table API is the technically superior fit for the functional requirements (global low-latency writes, indexing). However, even using the Table API (which is wire-compatible), there can be SDK and feature parity differences. The migration would still require some testing and validation. The question's strongest constraint is "minimize development effort," and simply enabling geo-replication on the existing Table Storage account is the least effort path that still provides a significant upgrade in durability and availability.
Why the Distinction is Important for the Exam:
This question tests the "Design a data storage solution" and "Design a solution for migration" objectives. It emphasizes that the simplest, most cost-effective solution that meets the core business requirements is often the correct choice, especially when minimizing rework is a key constraint. While Cosmos DB is the more powerful platform, the zero-change upgrade path with Table Storage + GRS is the pragmatic recommendation here.
The accounting department at your company migrates to a new financial accounting software. The accounting department must keep file-based database backups for seven years for compliance purposes. It is unlikely that the backups will be used to recover data.
You need to move the backups to Azure. The solution must minimize costs.
Where should you store the backups?
A. Azure Blob storage that uses the Archive tier
B. Azure SQL Database
C. Azure Blob storage that uses the Cool tier
D. a Recovery Services vault
Explanation:
The backups are cold data: required to be kept for 7 years for compliance, with a very low likelihood of access ("unlikely that the backups will be used to recover data"). The primary objective is to minimize cost. This is the classic use case for the Archive storage tier, which offers the lowest storage cost in exchange for a high retrieval latency and cost.
Correct Option:
A. Azure Blob storage that uses the Archive tier
Minimizes Costs:
The Archive tier has the lowest storage cost per GB in Azure Blob Storage. It is specifically designed for data that is rarely accessed and can tolerate several hours of retrieval latency (rehydration time).
Fits the Compliance Scenario:
Storing backups for 7 years with minimal access is the exact intended purpose of the Archive tier. It is far cheaper than the Cool tier for data that will sit untouched for years.
Correct Service:
Blob storage is the appropriate, scalable object storage service for file-based backups.
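A back-of-the-envelope comparison makes the cost gap concrete. The prices and data volume below are hypothetical placeholders; actual Azure prices vary by region and change over time:

```python
# Hypothetical per-GB-per-month list prices (USD) -- illustrative only.
PRICE_COOL = 0.01
PRICE_ARCHIVE = 0.002

gb = 500          # assumed size of the backup set
months = 7 * 12   # 7-year compliance retention

cool_cost = gb * months * PRICE_COOL
archive_cost = gb * months * PRICE_ARCHIVE
print(f"Cool:    ${cool_cost:,.2f}")     # ~$420
print(f"Archive: ${archive_cost:,.2f}")  # ~$84
```

With these placeholder prices, Archive is one-fifth the cost of Cool over the retention period, before even counting the rare retrieval charges.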
Incorrect Options:
B. Azure SQL Database:
This is a relational database service for transactional, frequently accessed data. It is not a cost-effective destination for dumping file-based backups. The storage costs would be extremely high compared to Blob Storage, and the operational overhead is unnecessary.
C. Azure Blob storage that uses the Cool tier:
The Cool tier is intended for infrequently accessed data stored for at least 30 days. While cheaper than the Hot tier, it is still significantly more expensive than the Archive tier for data that will sit untouched for years. Choosing Cool over Archive for this scenario would fail the "minimize costs" requirement.
D. a Recovery Services vault:
Recovery Services vaults are the managed backup target for Azure-native services like Azure VMs, SQL in VMs, and Azure Files. While cost-effective for operational backups of those services, they are not the correct service for storing arbitrary, file-based database backups from an on-premises application. Using a vault for this purpose would be an architectural mismatch and likely more expensive than Blob Archive.
Reference (Conceptual Alignment with AZ-305 Skills Outline):
This tests the "Design a solution for backup and recovery" objective, specifically the ability to select the most cost-effective Azure Storage tier (Hot, Cool, Archive) based on data access frequency and retention requirements. It emphasizes that for long-term archival and compliance, the Archive tier is the optimal choice.
You plan to develop a new app that will store business critical data. The app must meet the following requirements:
• Prevent new data from being modified for one year.
• Maximize data resiliency.
• Minimize read latency.
What storage solution should you recommend for the app? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

Explanation:
The requirements are clear:
Prevent modification for 1 year → This requires immutable storage, which is a feature available on Blob Storage in General-purpose v2 and Premium block blob accounts.
Maximize data resiliency → This means choosing the highest redundancy option available for the selected account type.
Minimize read latency → This strongly points to using Premium performance storage, which uses SSDs and offers single-digit millisecond latency.
Correct Options:
Storage Account type: Premium block blobs
Why: "Premium block blobs" is a Premium performance account type optimized for block blobs. It provides the lowest read latency (critical requirement). It also supports immutable storage policies (legal hold or time-based retention policies), fulfilling the one-year immutability need.
Redundancy: Zone-redundant storage (ZRS)
Why: For a Premium block blob account, the available redundancy options are LRS and ZRS. To "maximize data resiliency," you must select the higher option, which is ZRS. ZRS replicates data synchronously across three Availability Zones within the primary region, providing high availability and protection from a zone failure. (RA-GRS is not available for Premium tier accounts).
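A time-based retention policy behaves conceptually like the following toy check. This is illustrative only; in Azure, immutability is enforced by the storage service itself, not by application code:

```python
from datetime import date, timedelta

# Toy model of time-based immutability: modification and deletion are
# blocked until the retention period elapses (365 days here, standing
# in for the "one year" requirement).
RETENTION_DAYS = 365

def can_modify(created: date, today: date) -> bool:
    """Return True only once the retention period has elapsed."""
    return today >= created + timedelta(days=RETENTION_DAYS)

print(can_modify(date(2024, 1, 1), date(2024, 6, 1)))  # False: still locked
print(can_modify(date(2024, 1, 1), date(2025, 1, 2)))  # True: period elapsed
```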
Why the Other Options are Incorrect:
Storage Account type:
Standard general-purpose v1:
This legacy tier does not support immutable blob storage or many modern features.
Standard general-purpose v2:
While it supports immutable storage and offers geo-redundancy (GRS/RA-GRS), it uses standard performance (HDD/Standard SSD). This results in higher read latency compared to Premium storage, failing the "minimize read latency" requirement.
Redundancy:
Locally-redundant storage (LRS):
Only replicates data three times within a single datacenter. It offers the lowest resiliency and does not "maximize data resiliency."
Read-access geo-redundant storage (RA-GRS):
While this offers higher resiliency by replicating to a secondary region, it is not available for Premium performance tier accounts. Premium tier only offers LRS and ZRS within the primary region.
Reference (Conceptual Alignment with AZ-305 Skills Outline):
This tests the "Design a data storage solution" objective, specifically:
Selecting storage account types (Standard vs. Premium) based on performance (latency) requirements.
Choosing a redundancy option (LRS, ZRS, GRS) based on resiliency requirements.
Applying data governance features like immutable storage for compliance and data protection scenarios.