Free Microsoft AZ-120 Practice Test Questions (MCQs)

Stop wondering if you're ready. Our Microsoft AZ-120 practice test is designed to identify your exact knowledge gaps. Validate your skills with Planning and Administering Microsoft Azure for SAP Workloads (beta) questions that mirror the real exam's format and difficulty. Build a personalized study plan based on your performance on these free AZ-120 exam questions, focusing your effort where it matters most.

Targeted practice like this helps candidates feel significantly more prepared for the Planning and Administering Microsoft Azure for SAP Workloads (beta) exam day.

Updated On: 3-Mar-2026
218 Questions
Planning and Administering Microsoft Azure for SAP Workloads (beta)

Page 1 out of 22 Pages

Topic 1: Litware, Inc. Case Study

Before putting the SAP environment on Azure into production, which command should you run to ensure that the virtual machine disks meet the business requirements? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.




Explanation:
When deploying SAP workloads on Azure, it's critical to verify that virtual machine disks use the appropriate storage types to meet performance and reliability requirements. For production SAP environments, disks must be Premium SSD or Standard SSD (not HDD-based) to ensure adequate IOPS and throughput. The question asks which command and disk type combination should be used to validate disk compliance before production deployment.

Correct Option:

Command: Get-AzDisk with filtering on resource group "SAPProduction"
The Get-AzDisk cmdlet retrieves all managed disks in a subscription or resource group. By piping the results to Where-Object and filtering on the Sku.Name property for the SAPProduction resource group, you can identify any disks that don't meet the Premium SSD or Standard SSD requirement. This command shows actual disk configurations, including SKU types, allowing you to verify that no Standard HDD (Standard_LRS) disks exist in the production environment.
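The compliance-check logic can be sketched in Python against hypothetical disk data (the disk names and SKUs below are illustrative, not from the case study; in practice you would pull this inventory from Get-AzDisk):

```python
# Hypothetical disk inventory, mimicking what Get-AzDisk returns
# for the SAPProduction resource group (names and SKUs are made up).
disks = [
    {"name": "sap-os-disk",   "resource_group": "SAPProduction", "sku": "Premium_LRS"},
    {"name": "sap-data-disk", "resource_group": "SAPProduction", "sku": "Premium_LRS"},
    {"name": "sap-log-disk",  "resource_group": "SAPProduction", "sku": "Standard_LRS"},
    {"name": "dev-disk",      "resource_group": "Dev",           "sku": "Standard_LRS"},
]

# Only SSD-based SKUs are acceptable for the production environment.
ALLOWED_SKUS = {"Premium_LRS", "StandardSSD_LRS"}

def non_compliant_disks(disks, resource_group):
    """Return names of disks in the resource group whose SKU is not SSD-based."""
    return [
        d["name"]
        for d in disks
        if d["resource_group"] == resource_group and d["sku"] not in ALLOWED_SKUS
    ]

print(non_compliant_disks(disks, "SAPProduction"))  # ['sap-log-disk']
```

The same filter-by-SKU idea is what the Where-Object pipeline expresses in PowerShell: enumerate disks, keep only those in the production resource group, and flag any whose SKU is HDD-based.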

Disk Type: Premium_LRS and StandardSSD_LRS
For SAP production workloads, Azure recommends Premium SSD (Premium_LRS) for operating system disks and SAP application binaries, and Standard SSD (StandardSSD_LRS) for less critical data. Both provide solid-state storage performance necessary for SAP's I/O requirements. Standard HDD (Standard_LRS) and Read-Access Geo-Redundant Storage (Standard_RAGRS) are unsuitable for production SAP due to latency and performance limitations.

Incorrect Option:

Command: Get-AzVM and Get-AzVMImage
Get-AzVM retrieves virtual machine configurations but doesn't directly show disk SKU types; it only shows VM properties and attached disk IDs. You would need additional commands to examine disk details. Get-AzVMImage shows available publisher images but doesn't provide information about deployed disk configurations. Neither command effectively validates actual disk SKU compliance.

Disk Type: Standard_LRS and Standard_RAGRS
Standard_LRS (Standard HDD) is magnetic media with high latency and low IOPS, unsuitable for SAP production workloads that require consistent performance. Standard_RAGRS is read-access geo-redundant storage typically used for backup or archival purposes, not for production VM disks. Both would cause performance issues in SAP environments.

Reference:

Microsoft Docs: "SAP workloads on Azure: planning and deployment checklist"

Microsoft Docs: "Azure storage types for SAP workload"

You need to provide the Azure administrator with the values to complete the Azure Resource Manager template.

Which values should you provide for diskCount, storageAccountType, and domainName? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.




Explanation:
When deploying SAP environments using ARM templates, specific configuration values must be provided based on SAP requirements and Azure architecture. For high availability deployments, disk counts follow SAP guidelines for separating binaries, data, and logs. Storage account types determine performance tiers, while domain names must align with the identity management strategy. The correct selections ensure optimal SAP performance, data redundancy, and proper Active Directory integration.

Correct Option:

diskCount: 4
SAP on Azure best practices recommend separating different workload types onto dedicated disks. A typical SAP deployment requires at least 4 disks: OS disk (1), SAP binaries/executables (1), database data files (1), and database log files (1). For high-performance scenarios like SAP HANA, even more disks may be used for multiple data volumes and log backups. This separation optimizes I/O performance and simplifies management.

storageAccountType: Premium_LRS
Premium_LRS provides consistent low-latency performance essential for SAP production workloads, especially for database files and transaction logs. While Standard SSD may work for development, production SAP requires Premium SSD for data and log disks. Premium_LRS delivers the IOPS and throughput needed for SAP's demanding I/O patterns, with locally redundant storage sufficient for most deployments since data protection comes from database-level replication.

domainName: contoso.com
For on-premises integration and hybrid scenarios, the custom domain name (contoso.com) is correct because SAP deployments typically require integration with existing Active Directory. The .onmicrosoft.com domain is Azure AD's default domain for cloud-only identities, which lacks the full AD functionality SAP requires. Custom domains enable seamless Kerberos authentication, group policy application, and hybrid identity scenarios essential for enterprise SAP landscapes.

Incorrect Option:

diskCount: 0, 1, 2
Disk counts of 0-2 are insufficient for SAP production workloads. A single disk cannot separate OS, applications, and data, leading to I/O contention and performance degradation. Two disks might separate OS and data but still combine critical components like database files and logs, which have different I/O patterns (random vs. sequential) and should reside on separate disks for optimal performance and recoverability.

storageAccountType: Standard_GRS and Standard_LRS
Standard_GRS (geo-redundant) adds unnecessary complexity and potential performance issues since SAP handles data replication at the application level. Standard_LRS (Standard HDD) lacks the IOPS capacity for production SAP databases. Even Standard SSD is insufficient for most production SAP workloads, particularly for database transaction logs which require low-latency writes. Only Premium SSD meets SAP's performance certification requirements.

domainName: ad.contoso.com, ad.contoso.onmicrosoft.com, and contoso.onmicrosoft.com
ad.contoso.com might be a specific AD server rather than the domain name. ad.contoso.onmicrosoft.com mixes custom subdomain with default Azure AD domain incorrectly. contoso.onmicrosoft.com lacks custom domain capabilities needed for full AD integration, preventing seamless hybrid identity scenarios and limiting authentication options. The domain name should match the organization's actual AD domain for proper integration.

Reference:
Microsoft Docs: "SAP NetWeaver on Azure Virtual Machines - Planning and Implementation Guide"

Microsoft Docs: "Azure Premium Storage: Design for high performance"

Microsoft Docs: "Azure AD and custom domain names"

This question requires that you evaluate the underlined text to determine if it is correct.

You are planning for the administration of resources in Azure.

To meet the technical requirements, you must first implement Active Directory Federation Services (AD FS).

Instructions: Review the underlined text. If it makes the statement correct, select “No change is needed”. If the statement is incorrect, select the answer choice that makes the statement correct.

A. No change is needed

B. Azure AD Connect

C. Azure AD join

D. Enterprise State Roaming

B.   Azure AD Connect

Explanation:
The statement claims that to administer Azure resources, you must first implement Active Directory Federation Services (AD FS). This is incorrect because AD FS is not a prerequisite for Azure administration. The foundational requirement for managing Azure resources with on-premises identities is directory synchronization, which is performed by Azure AD Connect. AD FS is an optional federation service used only when specific single sign-on or partner federation scenarios are required, not as the first step.

Correct Option:

B. Azure AD Connect
Azure AD Connect is the correct first implementation for integrating on-premises Active Directory with Azure AD. It synchronizes user identities, enables single sign-on, and provides the option to configure federation if needed. Without Azure AD Connect, on-premises identities cannot be used to administer Azure resources. Many organizations administer Azure effectively using only password hash synchronization, which is a feature of Azure AD Connect, without ever deploying AD FS.

Incorrect Option:

A. No change is needed
This option is incorrect because AD FS is not a mandatory first step. It is an additional service for organizations requiring federated authentication with partners or specific claim-based rules. Implementing AD FS without first establishing identity synchronization through Azure AD Connect would not enable Azure resource administration, as on-premises identities would not be present in Azure AD.

C. Azure AD join
Azure AD join is a device identity feature that registers devices directly with Azure AD. While useful for cloud-native organizations, it does not address the requirement of synchronizing existing on-premises identities. For hybrid environments, identity synchronization via Azure AD Connect must occur first to ensure corporate identities are available in Azure AD before devices can be joined meaningfully.

D. Enterprise State Roaming
Enterprise State Roaming provides users with a consistent experience by syncing Windows settings and application data to Azure. This feature depends on Azure AD and appropriate licensing but is an advanced capability that builds upon an established identity foundation. It is not a first-step implementation for Azure resource administration.

Reference:
Microsoft Docs: "What is Azure AD Connect?" – Explains Azure AD Connect as the tool for hybrid identity.

Microsoft Docs: "Choose the right authentication method for your Azure AD hybrid identity solution" – Describes AD FS as an optional choice.

Microsoft Learn: "Introduction to Azure AD Connect" – Establishes Azure AD Connect as the prerequisite for hybrid identity.

You are planning the Azure network infrastructure to support the disaster recovery requirements.

What is the minimum number of virtual networks required for the SAP deployment?

A. 1

B. 2

C. 3

D. 4

B.   2

Explanation:
For SAP disaster recovery on Azure, the minimum number of virtual networks required depends on the recovery strategy. Azure Site Recovery (ASR) can replicate VMs to another Azure region, but for production SAP workloads, a minimum of two virtual networks is typically needed—one in the primary region and one in the secondary region. This supports cross-region failover while maintaining network isolation and addressing overlapping IP concerns through proper network design.

Correct Option:

B. 2
The minimum number of virtual networks required is two: one in the primary region and one in the secondary (DR) region. Each Azure region requires its own virtual network for SAP deployment. Even with ASR replication, the DR site operates in a different region with its own VNet to ensure complete isolation during failover. This also allows for different IP address spaces if needed, preventing conflicts during recovery.
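The non-overlapping address-space point can be checked with Python's standard ipaddress module (the CIDR ranges below are illustrative, not from the case study):

```python
import ipaddress

# Example address plan: one VNet per region (ranges are hypothetical).
primary = ipaddress.ip_network("10.1.0.0/16")    # primary-region VNet
secondary = ipaddress.ip_network("10.2.0.0/16")  # DR-region VNet

# Non-overlapping ranges avoid IP conflicts during failover and
# allow the two VNets to be peered if needed.
print(primary.overlaps(secondary))  # False
```

Running this kind of check during network planning catches address conflicts before any Azure resources are deployed.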

Incorrect Option:

A. 1
A single virtual network cannot support disaster recovery because all resources would reside in one region. If that region experiences an outage, there is no isolated infrastructure to fail over to. Disaster recovery requires geographic separation, which necessitates separate virtual networks in different regions to maintain availability and data residency requirements.

C. 3
Three virtual networks exceed the minimum requirement. While some enterprises may use additional VNets for dev/test, staging, or multiple environments, the minimum for production DR is two. Three would only be necessary if there were specific requirements like separate VNets for multiple SAP components or third-region backup, but this is not mandated.

D. 4
Four virtual networks are unnecessary for meeting basic disaster recovery requirements. This would imply multiple DR regions or excessive segmentation beyond standard best practices. The minimum viable DR architecture only requires a primary and secondary region VNet, making four an overprovisioned and cost-inefficient choice.

Reference:

Microsoft Docs: "Azure Site Recovery: About networking"

Microsoft Docs: "Disaster recovery for SAP workloads on Azure"

Azure Architecture Center: "SAP HANA Disaster Recovery on Azure"

You are evaluating the proposed backup policy.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.




Explanation:
The question evaluates a proposed Azure Backup policy (likely for VMs hosting SAP workloads in AZ-120 context) against technical and business requirements, plus file-level restore capability. Azure VM file-level restore (Item Level Recovery) depends on available recovery points per the policy's retention. Without the specific policy details provided (e.g., daily/weekly backups, retention periods), we assess general feasibility. Technical/business compliance can't be confirmed without seeing the policy. However, restoring a deleted file one year later to a running VM is possible if the policy retains recovery points for at least one year.

Correct Option:

The backup policy meets the technical requirements. → No
Without the actual policy configuration shown (e.g., frequency, retention range), it is impossible to confirm it meets Azure Backup's technical prerequisites for SAP-on-VM workloads, such as supported VM types, OS, disk limits, or proper vault/policy setup. Technical validation requires explicit matching against Azure docs.

The backup policy meets the business requirements. → No
Business requirements (e.g., RPO/RTO, retention SLAs, compliance periods) are not visible here. The policy cannot be evaluated against unspecified business needs like mandatory 1-year retention or daily restore windows.

If the backup policy is implemented, a deleted file can be restored to the running virtual machine one year after the file was deleted. → Yes
Azure Backup supports file-level restore (ILR) from any existing recovery point to the original running VM. If the policy retains recovery points for ≥1 year (common in long-term retention policies using daily/weekly backups + yearly archives), a file deleted 1 year ago remains restorable from an old point-in-time snapshot, assuming no data overwrite and policy compliance.
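The retention reasoning reduces to a simple date check: a file is restorable only if a recovery point taken before the deletion still falls inside the retention window at restore time. A minimal sketch, with purely illustrative dates and retention values (not the case-study policy):

```python
from datetime import date, timedelta

def file_restorable(deleted_on, restore_on, retention_days):
    """A file is restorable if a recovery point from just before the
    deletion has not yet aged out of the policy's retention window."""
    oldest_retained = restore_on - timedelta(days=retention_days)
    return deleted_on >= oldest_retained

deleted = date(2024, 1, 10)
restore = date(2025, 1, 10)  # one year after deletion

print(file_restorable(deleted, restore, retention_days=400))  # True
print(file_restorable(deleted, restore, retention_days=180))  # False
```

This is why the one-year restore statement hinges entirely on the policy retaining recovery points for at least a year.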

Incorrect Option:
(The remaining two statements are marked No as explained above.)

Reference:
Microsoft Learn: Tutorial - Restore files to a virtual machine with Azure Backup → https://learn.microsoft.com/en-us/azure/backup/tutorial-restore-files

Which Azure service should you deploy for the approval process to meet the technical requirements?

A. Just in time (JIT) VM access

B. Azure Active Directory (Azure AD) Identity Protection

C. Azure Active Directory (Azure AD) Privileged Identity Manager (PIM)

D. Azure Active Directory (Azure AD) conditional access

C.   Azure Active Directory (Azure AD) Privileged Identity Manager (PIM)


Explanation:
The question asks which Azure service should be deployed for an approval process to meet technical requirements. When considering identity management and access approval workflows, Azure AD Privileged Identity Management (PIM) is the service specifically designed for time-based and approval-based role activation. PIM enables just-in-time privileged access with approval workflows, making it the appropriate choice for implementing approval processes for administrative roles.

Correct Option:

C. Azure Active Directory (Azure AD) Privileged Identity Manager (PIM)
Azure AD PIM provides time-bound and approval-based role activation for privileged administrative roles. It allows organizations to manage, control, and monitor access to important resources. With PIM, users request activation of privileged roles, and designated approvers must approve these requests before temporary access is granted. This directly addresses the requirement for an approval process, ensuring that privileged access is only granted after proper authorization.

Incorrect Option:

A. Just in time (JIT) VM access
JIT VM access is a feature of Microsoft Defender for Cloud that controls access to virtual machines by opening specific ports for limited time periods. While it involves temporary access, it focuses on network-level access to VMs rather than administrative role approvals. It does not provide an approval workflow for Azure AD roles or broader Azure resource permissions.

B. Azure Active Directory (Azure AD) Identity Protection
Azure AD Identity Protection is a tool for detecting and responding to identity-based risks. It uses machine learning to identify suspicious activities like leaked credentials or impossible travel. While valuable for security, it does not include approval workflows for privileged access and focuses on risk detection rather than access governance.

D. Azure Active Directory (Azure AD) conditional access
Conditional Access is a policy engine that evaluates signals (user, device, location, risk) to grant or block access to applications. It can require multi-factor authentication or compliant devices but does not provide approval workflows for privileged role activation. It's an access control tool, not an approval process tool.

Reference:

Microsoft Docs: "What is Azure AD Privileged Identity Management?"

Microsoft Docs: "Approve or deny requests for Azure AD roles in PIM"

Microsoft Learn: "Manage access with Azure AD Privileged Identity Management"

You are planning replication of the SAP HANA database for the disaster recovery environment in Azure.

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

NOTE: Each correct selection is worth one point.




Explanation:
The question involves planning replication for an SAP HANA database in Azure for disaster recovery. SAP HANA system replication is the native technology used to replicate data from a primary to a secondary system. The statements touch on replication mode, operation mode, and the management of failover, which must align with SAP HANA's capabilities and Azure integration requirements.

Correct Option:

Statement 1: You must use synchronous replication. → No
Synchronous replication ensures zero data loss but introduces latency, making it suitable only for low-latency connections within the same region. For disaster recovery across Azure regions, asynchronous replication is typically used due to higher latency. SAP HANA allows both modes, but synchronous is not mandatory for DR across regions. The choice depends on RPO and RTO requirements.

Statement 2: You must use delta data shipping for operation mode. → No
Delta data shipping is just one of several operation modes in SAP HANA system replication. Others include log replay and log replay with initial full data shipping. The correct mode depends on the specific DR requirements, such as recovery time objectives and whether the secondary system accepts read workloads. Delta data shipping is not mandatory.

Statement 3: You must configure an Azure Active Directory (Azure AD) application to manage the failover. → No
Managing SAP HANA failover in Azure does not require an Azure AD application. Failover can be automated using Azure services like Azure Site Recovery, Azure Load Balancer with floating IP, or manual procedures. Azure AD applications are used for identity and access management, not for orchestrating database failover. SAP HANA's native tools or Azure automation can handle failover without Azure AD.

Reference:

SAP Help Portal: "SAP HANA System Replication"

Microsoft Docs: "Set up SAP HANA system replication in Azure"

Microsoft Docs: "High availability and disaster recovery for SAP HANA on Azure VMs"

Once the migration completes, to which size should you set the ExpressRoute circuit to the New York office to meet the business goals and technical requirements?

A. 500 Mbps

B. 1,000 Mbps

C. 2,000 Mbps

D. 5,000 Mbps

C.   2,000 Mbps

Explanation:
The question asks for the appropriate ExpressRoute circuit size to the New York office after migration, based on business goals and technical requirements. While the specific requirements aren’t listed here, typical factors include user count, application traffic, replication needs, and future growth. The correct answer is often derived from bandwidth calculations for SAP workloads, which require sufficient throughput for normal operations and disaster recovery traffic.

Correct Option:

C. 2,000 Mbps
2,000 Mbps (2 Gbps) is a common ExpressRoute bandwidth choice for enterprise SAP deployments connecting a major office to Azure. It balances cost with performance, supporting typical user workloads, database replication, and backup traffic. For a New York office—likely a headquarters or large regional hub—2 Gbps provides headroom for growth and ensures low-latency connectivity critical for SAP performance and hybrid scenarios.
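One way to reason about circuit sizing is to total the estimated traffic and round up to the smallest available ExpressRoute tier that covers it plus growth headroom. The traffic figures and 30% headroom below are purely illustrative assumptions:

```python
# Standard ExpressRoute circuit bandwidth tiers, in Mbps.
TIERS = [50, 100, 200, 500, 1000, 2000, 5000, 10000]

def pick_circuit(required_mbps, headroom=1.3):
    """Pick the smallest tier covering the requirement plus growth headroom."""
    target = required_mbps * headroom
    for tier in TIERS:
        if tier >= target:
            return tier
    raise ValueError("requirement exceeds largest tier")

# Illustrative estimate: interactive SAP users + HANA replication + backups.
estimate = 600 + 500 + 300            # 1,400 Mbps total
print(pick_circuit(estimate))         # 2000
```

Under these assumptions, a 1 Gbps circuit would already be saturated, while 5 Gbps would be overprovisioned, which matches the reasoning for answer C.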

Incorrect Option:

A. 500 Mbps
500 Mbps may be insufficient for a major office like New York, especially with SAP workloads requiring consistent throughput for database queries, file transfers, and replication. It could lead to congestion during peak usage or DR events, impacting user experience and business continuity. This size is more suitable for smaller branch offices with limited users.

B. 1,000 Mbps
1,000 Mbps (1 Gbps) might suffice for smaller deployments but could become a bottleneck for a large office with hundreds of SAP users, frequent data transfers, or replication requirements. Without headroom for growth or spikes, it risks performance degradation, making it less ideal for meeting long-term business goals.

D. 5,000 Mbps
5,000 Mbps (5 Gbps) exceeds typical requirements for most SAP deployments and would incur higher costs without proportional benefit. Unless the office has extreme traffic demands—like massive data lakes or real-time analytics—this is overprovisioned. It’s rarely necessary unless specified by unique technical requirements.

Reference:

Microsoft Docs: "ExpressRoute circuits and routing domains"

Microsoft Docs: "Network planning for SAP on Azure"

Azure Architecture Center: "SAP workloads: Planning and deployment"

You have an Azure subscription that contains 10 virtual machines.

You plan to deploy an SAP landscape on Azure that will run SAP HANA.

You need to ensure that the virtual machines meet the performance requirements of HANA.

What should you use?

A. SAP Quick Sizer

B. Azure Advisor

C. ABAP Profiler

D. SAP HANA Hardware and Cloud Measurement Tool (HCMT)

D.   SAP HANA Hardware and Cloud Measurement Tool (HCMT)

Explanation:
When deploying SAP HANA on Azure, it is critical to validate that the underlying virtual machine infrastructure meets SAP's strict performance and sizing requirements for HANA workloads. SAP provides official tools and processes to certify hardware configurations, and Azure offers specific VM types that are SAP HANA certified. Among the available options, only one is designed specifically to measure and validate hardware performance for SAP HANA.

Correct Option:

D. SAP HANA Hardware and Cloud Measurement Tool (HCMT)
The SAP HANA Hardware and Cloud Measurement Tool (HCMT) is the official tool used to validate that a hardware configuration (including Azure VMs) meets SAP HANA performance requirements. It runs performance tests and generates reports that can be submitted to SAP for certification. For Azure, this tool is essential to confirm that the chosen VM size and storage configuration can support the required HANA workload before production deployment.

Incorrect Option:

A. SAP Quick Sizer
SAP Quick Sizer is a tool for estimating hardware requirements based on user count and business processes. It provides sizing recommendations but does not validate actual performance against real hardware. While useful for initial planning, it does not ensure that a specific Azure VM meets HANA's runtime performance requirements.

B. Azure Advisor
Azure Advisor provides recommendations for cost, performance, reliability, and security in Azure. It can suggest VM resizing or storage optimizations but is not SAP HANA-specific. It does not perform the rigorous performance validation required for SAP HANA certification and cannot guarantee HANA workload compatibility.

C. ABAP Profiler
ABAP Profiler is a development tool used within SAP systems to analyze ABAP program performance. It focuses on application-level tuning and debugging, not on hardware validation. It is irrelevant for determining whether underlying infrastructure meets SAP HANA performance standards.

Reference:

SAP Note 1943937: "Hardware Configuration Check Tool for SAP HANA"

Microsoft Docs: "SAP HANA hardware directory"

SAP Help Portal: "SAP HANA Hardware and Cloud Measurement Tool"

You plan to deploy an SAP production landscape on Azure.

You need to estimate how many SAP operations will be processed by the landscape per hour. The solution must minimize administrative effort.

What should you use?

A. SAP Quick Sizer

B. SAP HANA hardware and cloud measurement tools

C. SAP S/4HANA Migration Cockpit

D. SAP GUI

A.   SAP Quick Sizer

Explanation:
Estimating the number of SAP operations processed per hour is a core part of capacity planning for an SAP landscape. The tool used should be designed specifically for sizing and workload estimation, not for performance testing, migration, or daily operations. Minimizing administrative effort means choosing a tool that provides reliable estimates without requiring complex configurations or actual workload simulation.

Correct Option:

A. SAP Quick Sizer
SAP Quick Sizer is the official SAP tool for estimating hardware requirements based on business processes and user counts. It calculates SAPS (SAP Application Performance Standard) requirements, which translate to expected operations per hour. This tool requires minimal effort—users input business parameters, and it generates sizing recommendations without needing existing systems or performance tests. It is ideal for pre-deployment planning.
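Quick Sizer expresses its results in SAPS, and SAP defines 100 SAPS as 2,000 fully business-processed order line items per hour, so converting a SAPS figure into an hourly operations estimate is simple arithmetic. A minimal sketch (the 15,000 SAPS input is an assumed example, not a case-study value):

```python
def saps_to_order_items_per_hour(saps):
    """SAP defines 100 SAPS = 2,000 fully business-processed
    order line items per hour."""
    return saps / 100 * 2000

# Example: a landscape sized at 15,000 SAPS.
print(saps_to_order_items_per_hour(15000))  # 300000.0
```

This is why Quick Sizer minimizes administrative effort: the estimate comes from entered business parameters rather than from running performance tests on deployed infrastructure.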

Incorrect Option:

B. SAP HANA hardware and cloud measurement tools
SAP HANA hardware and cloud measurement tools are used to validate that existing hardware meets performance standards for HANA workloads. They require running actual performance tests on provisioned infrastructure, which is effort-intensive and occurs after deployment. These tools do not estimate operations per hour during the planning phase.

C. SAP S/4HANA Migration Cockpit
The SAP S/4HANA Migration Cockpit is a tool for migrating data from legacy systems to SAP S/4HANA. It handles data transfer, not workload estimation or capacity planning. Using it to estimate operations per hour would be irrelevant and administratively inefficient for the stated goal.

D. SAP GUI
SAP GUI is the front-end client for accessing SAP systems. It is used for daily transactions and administration, not for sizing or workload estimation. It provides no functionality for predicting operations per hour and would require extensive manual analysis if misused for this purpose.

Reference:

SAP Help Portal: "SAP Quick Sizer"

SAP Note 1634488: "SAP Quick Sizer – Frequently Asked Questions"

Microsoft Docs: "SAP workloads on Azure: Planning and deployment checklist"

