Free Microsoft AZ-204 Practice Test Questions MCQs

Stop wondering if you're ready. Our Microsoft AZ-204 practice test is designed to identify your exact knowledge gaps. Validate your skills with Developing Solutions for Microsoft Azure questions that mirror the real exam's format and difficulty. Build a personalized study plan based on your performance on these free AZ-204 exam questions, focusing your effort where it matters most.

Targeted practice like this helps candidates feel significantly more prepared for Developing Solutions for Microsoft Azure exam day.

22710+ already prepared
Updated On : 3-Mar-2026
271 Questions
Developing Solutions for Microsoft Azure
4.9/5.0

Page 1 out of 28 Pages

Topic 1: Windows Server 2016 virtual machine

   

Case study
This is a case study. Case studies are not timed separately. You can use as much
exam time as you would like to complete each case. However, there may be additional
case studies and sections on this exam. You must manage your time to ensure that you
are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information
that is provided in the case study. Case studies might contain exhibits and other resources
that provide more information about the scenario that is described in the case study. Each
question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review
your answers and to make changes before you move to the next section of the exam. After
you begin a new section, you cannot return to this section.
To start the case study
To display the first question in this case study, click the Next button. Use the buttons in the
left pane to explore the content of the case study before you answer the questions. Clicking
these buttons displays information such as business requirements, existing environment,
and problem statements. If the case study has an All Information tab, note that the
information displayed is identical to the information displayed on the subsequent tabs.
When you are ready to answer a question, click the Question button to return to the
question.
Current environment
Windows Server 2016 virtual machine
The virtual machine (VM) runs BizTalk Server 2016. The VM runs the following workflows:
Ocean Transport – This workflow gathers and validates container information, including container contents and arrival notices at various shipping ports.
Inland Transport – This workflow gathers and validates trucking information, including fuel usage, number of stops, and routes.
The VM supports the following REST API calls:
Container API – This API provides container information, including weight, contents, and other attributes.
Location API – This API provides location information regarding shipping ports of call and tracking stops.
Shipping REST API – This API provides shipping information for use and display on the shipping website.

Shipping Data
The application uses a MongoDB JSON document storage database for all container and transport information.
Shipping Web Site
The site displays shipping container tracking information and container contents. The site is
located at http://shipping.wideworldimporters.com/
Proposed solution
The on-premises shipping application must be moved to Azure. The VM has been migrated
to a new Standard_D16s_v3 Azure VM by using Azure Site Recovery and must remain
running in Azure to complete the BizTalk component migrations. You create a
Standard_D16s_v3 Azure VM to host BizTalk Server. The Azure architecture diagram for
the proposed solution is shown below:

Requirements
Shipping Logic app
The Shipping Logic app must meet the following requirements:
Support the ocean transport and inland transport workflows by using a Logic App.
Support industry-standard protocol X12 message format for various messages
including vessel content details and arrival notices.
Secure resources to the corporate VNet and use dedicated storage resources with
a fixed costing model.
Maintain on-premises connectivity to support legacy applications and final BizTalk
migrations.
Shipping Function app
Implement secure function endpoints by using app-level security and include Azure Active
Directory (Azure AD).
REST APIs

The REST APIs that support the solution must meet the following requirements:
Secure resources to the corporate VNet.
Allow deployment to a testing location within Azure while not incurring additional
costs.
Automatically scale to double capacity during peak shipping times while not
causing application downtime.
Minimize costs when selecting an Azure payment model.
Shipping data
Data migration from on-premises to Azure must minimize costs and downtime.
Shipping website
Use Azure Content Delivery Network (CDN) and ensure maximum performance for
dynamic content while minimizing latency and costs.
Issues
Windows Server 2016 VM
The VM shows high network latency, jitter, and high CPU utilization. The VM is critical and
has not been backed up in the past. The VM must enable a quick restore from a 7-day
snapshot to include in-place restore of disks in case of failure.
Shipping website and REST APIs
The following error message displays while you are testing the website:
Failed to load http://test-shippingapi.wideworldimporters.com/: No 'Access-Control-Allow-
Origin' header is present on the requested resource. Origin
'http://test.wideworldimporters.com/' is therefore not allowed access.
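The error above means the API did not return an Access-Control-Allow-Origin header for the requesting origin. The sketch below (pure Python, with a hypothetical allow-list; the origin values come from the error message) illustrates the server-side decision that produces, or omits, that header:

```python
# Hypothetical sketch of a server-side CORS check. The origins are taken
# from the error message above; the allow-list and function are illustrative.

ALLOWED_ORIGINS = {"http://test.wideworldimporters.com"}

def cors_headers(request_origin: str) -> dict:
    """Return CORS response headers for a cross-origin request.

    If the caller's Origin is not on the allow-list, no
    Access-Control-Allow-Origin header is emitted, and the browser
    blocks the response with exactly the error shown above.
    """
    if request_origin in ALLOWED_ORIGINS:
        return {"Access-Control-Allow-Origin": request_origin}
    return {}  # missing header -> browser raises the CORS error

print(cors_headers("http://test.wideworldimporters.com"))
```

In Azure, the fix is typically to add the calling origin to the CORS settings of the App Service or API, rather than hand-rolling the check as above.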

You are developing a web application that will use Azure Storage. Older data will be used less frequently than more recent data. You need to configure data storage for the application. You have the following requirements:
Retain copies of data for five years.
Minimize costs associated with storing data that is over one year old.
Implement Zone Redundant Storage for application data.
What should you do? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.




Explanation:
This question tests your knowledge of Azure Storage account types and lifecycle management policies. The requirements include 5-year data retention, cost minimization for data over 1 year old, and Zone Redundant Storage (ZRS) implementation. You need to select the correct storage account type and the appropriate lifecycle management configuration.

Correct Option:

Storage Account Type: Implement StorageV2 (general purpose v2)
General purpose v2 storage accounts are required for lifecycle management policies and support Zone Redundant Storage (ZRS). They provide the lowest access costs, support all storage tiers (hot, cool, archive), and are recommended by Microsoft for most scenarios. GPv1 accounts don't support lifecycle management or tiering, making them unsuitable for cost optimization requirements.

Lifecycle Management: Set a lifecycle management policy to move blobs to the archive tier
Moving blobs older than one year directly to the archive tier provides maximum cost savings. The archive tier offers the lowest storage cost at $0.00099/GB/month compared to cool tier at $0.01/GB/month. This meets both requirements: cost minimization and 5-year retention. Archive tier is ideal for long-term retention of infrequently accessed data.
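A rule of the kind described can be expressed as a lifecycle management policy document. The sketch below builds that JSON in Python; the rule name is illustrative, and the five-year delete threshold is an assumption layered onto the question's retention requirement:

```python
import json

# Illustrative lifecycle management policy: tier blobs to archive once they
# have gone unmodified for 365 days, and delete after five years (1825 days).
# The rule name is hypothetical; the schema keys follow the Azure policy format.
policy = {
    "rules": [
        {
            "name": "archive-after-one-year",
            "enabled": True,
            "type": "Lifecycle",
            "definition": {
                "filters": {"blobTypes": ["blockBlob"]},
                "actions": {
                    "baseBlob": {
                        "tierToArchive": {"daysAfterModificationGreaterThan": 365},
                        "delete": {"daysAfterModificationGreaterThan": 1825},
                    }
                },
            },
        }
    ]
}

print(json.dumps(policy, indent=2))
```

A policy like this can be applied to a GPv2 account through the portal, the Azure CLI, or an ARM template.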

Incorrect Option:

Storage Account Type: Implement Storage (general purpose v1)
GPv1 accounts don't support ZRS, lifecycle management policies, or blob tiering. They cannot meet the Zone Redundant Storage requirement and provide no mechanism for cost-optimized tiering. These accounts are legacy and primarily used for classic deployments or specific compatibility scenarios.

Storage Account Type: Implement Azure Cosmos DB/Blob Storage
These are not storage account types. Azure Cosmos DB is a NoSQL database service, and Blob Storage is a feature within storage accounts, not an account type itself. The question specifically asks for storage account configuration.

Lifecycle Management: Snapshot blobs and move them to the archive tier
While snapshots provide point-in-time copies, they incur additional storage costs and require manual management. Lifecycle policies are automated and more cost-effective. Microsoft recommends lifecycle policies over manual snapshot management for long-term retention scenarios.

Lifecycle Management: Set a lifecycle management policy to move blobs to the cool tier
Cool tier is designed for data stored for at least 30 days, but it's more expensive than archive tier ($0.01/GB vs $0.00099/GB). For data over one year old, archive tier provides 90% cost reduction compared to cool tier, better meeting the "minimize costs" requirement.
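The price figures quoted above can be turned into a quick back-of-the-envelope comparison (the prices are illustrative list prices and vary by region and over time):

```python
# Rough monthly cost comparison using the list prices quoted above.
# Prices are illustrative; the data volume is a hypothetical 10 TB.
ARCHIVE_PER_GB = 0.00099   # $/GB/month
COOL_PER_GB = 0.01         # $/GB/month

def monthly_cost(gb: float, price_per_gb: float) -> float:
    return gb * price_per_gb

data_gb = 10_000
cool = monthly_cost(data_gb, COOL_PER_GB)        # about $100/month
archive = monthly_cost(data_gb, ARCHIVE_PER_GB)  # about $9.90/month
savings = 1 - archive / cool                     # roughly 90% cheaper

print(f"cool=${cool:.2f} archive=${archive:.2f} savings={savings:.1%}")
```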

Lifecycle Management: Use AzCopy to copy data to on-premises device
AzCopy is a data transfer tool, not a retention/cost optimization solution. This would move data out of Azure, incurring egress charges and not meeting the requirement to retain copies in Azure for five years.

Reference:
Azure Storage account overview

You develop a REST API. You implement a user delegation SAS token to communicate
with Azure Blob
storage.
The token is compromised.
You need to revoke the token.
What are two possible ways to achieve this goal? Each correct answer presents a
complete solution.
NOTE: Each correct selection is worth one point.

A. Revoke the delegation keys

B. Delete the stored access policy.

C. Regenerate the account key.

D. Remove the role assignment for the security principal.

A.   Revoke the delegation keys
B.   Delete the stored access policy.

Explanation:
This question tests your understanding of User Delegation SAS tokens and how to revoke them in Azure Blob Storage. User Delegation SAS tokens are secured with Azure AD credentials and do not rely on storage account keys. Understanding the revocation mechanisms is critical for security incident response in Azure Storage scenarios.

Correct Option:

A. Revoke the delegation keys
User Delegation SAS tokens are signed using user delegation keys. These keys are temporary credentials obtained from Azure AD. Revoking the delegation keys immediately invalidates all SAS tokens generated using those keys. This is a direct and effective method to revoke compromised User Delegation SAS tokens.

B. Delete the stored access policy
While stored access policies are primarily associated with Service SAS, User Delegation SAS can also be linked to them. If the User Delegation SAS was created with a stored access policy identifier, deleting that policy immediately revokes all SAS tokens associated with it. This provides centralized revocation management.

Incorrect Option:

C. Regenerate the account key
User Delegation SAS tokens do not use account keys for signing. They are signed with Azure AD user delegation keys. Regenerating account keys would affect Service SAS and account SAS but has no impact on User Delegation SAS tokens. This action is unnecessary and ineffective for User Delegation SAS revocation.

D. Remove the role assignment for the security principal
Removing role assignments prevents future authorization but does not revoke already issued User Delegation SAS tokens. The tokens remain valid until their expiry or until the user delegation key is revoked. This addresses future access but not the immediate threat of a compromised token.

Reference:
Create a user delegation SAS

You are developing an ASP.NET Core app that includes feature flags which are managed by Azure App Configuration. You create an Azure App Configuration store named AppFeatureFlagStore as shown in the exhibit:




Explanation:
This question tests your knowledge of implementing feature flags in ASP.NET Core using Azure App Configuration. The exhibit shows a feature flag named Export with a label Off, and the markup renders the feature conditionally. You must select the correct controller attribute and the correct App Configuration endpoint to enable feature management.

Correct Option:

Controller attribute: FeatureGate
FeatureGate is the correct attribute to filter controller actions or views based on feature flag status. It evaluates whether the feature is enabled and allows or blocks access accordingly. This aligns with the requirement to use the feature in the app via the provided markup.

Startup method: AddAzureAppConfiguration
AddAzureAppConfiguration is the method used in Startup.ConfigureServices to load configuration and feature flags from Azure App Configuration. It is required to enable dynamic feature flag evaluation and refresh in ASP.NET Core.

AppConfig endpoint setting: https://appfeatureflagstore.azconfig.io
This is the App Configuration endpoint for the store named AppFeatureFlagStore. It follows the standard pattern of https://<store-name>.azconfig.io. This is where the application connects to retrieve feature flags and configuration data.
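The naming convention can be sketched as a small helper (the function itself is illustrative; only the https://<store-name>.azconfig.io pattern comes from the service):

```python
# Illustrative helper: derive an App Configuration endpoint from a store name.
# Only the https://<store-name>.azconfig.io pattern is from the service;
# the function is an assumption for demonstration.

def app_config_endpoint(store_name: str) -> str:
    # DNS names are case-insensitive, so the endpoint is lowercase
    return f"https://{store_name.lower()}.azconfig.io"

print(app_config_endpoint("AppFeatureFlagStore"))
# -> https://appfeatureflagstore.azconfig.io
```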

Incorrect Option:

Controller attribute: Route
Route is used for URL routing and endpoint mapping, not for feature flag evaluation. It does not provide any conditional logic based on feature flags.

Controller attribute: ServiceFilter / TypeFilter
These attributes are used to apply filters to controllers or actions, such as authorization or action filters. They do not evaluate feature flags and are not designed for feature flag–based conditional rendering.

Startup method: AddControllersWithViews
This method adds MVC services to the container but does not load Azure App Configuration or feature flags. It is not related to feature flag integration.

Startup method: AddUserSecrets
This is used to load secrets during development from the local secrets store. It is not relevant for loading feature flags from Azure App Configuration.

AppConfig endpoint setting: https://appfeatureflagstore.vault.azure.net
This is a Key Vault endpoint, not an App Configuration endpoint. It is used for accessing secrets, not feature flags.

AppConfig endpoint setting: https://export.azconfig.io / https://exportvault.azure.net
These do not match the name of the actual store (AppFeatureFlagStore). Export is the feature flag name, not the store name. Using these endpoints would fail to connect to the correct App Configuration resource.

Reference:
Use feature flags in ASP.NET Core

FeatureGate attribute

Connect to App Configuration

You are building a website that is used to review restaurants. The website will use an
Azure CDN to improve performance and add functionality to requests.
You build and deploy a mobile app for Apple iPhones. Whenever a user accesses the
website from an iPhone, the user must be redirected to the app store.
You need to implement an Azure CDN rule that ensures that iPhone users are redirected to
the app store.
How should you complete the Azure Resource Manager template? To answer, select the
appropriate options in the answer area.
NOTE: Each correct selection is worth one point




Explanation:
This question tests your knowledge of Azure CDN rules engine and device detection. The requirement is to identify iPhone users and redirect them to the app store. Azure CDN Premium from Verizon provides device detection capabilities through the IsDevice condition. The correct condition type and parameters must be selected to match iPhone traffic based on the user agent.

Correct Option:

Condition name: DeliveryRuleIsDeviceConditionParameters
The IsDevice condition is specifically designed for device detection in Azure CDN. It identifies requests from mobile devices, tablets, desktops, or specific device types like iPhone. This is the only condition that directly supports detecting iPhone users without custom header parsing.

Condition parameters: DeliveryRulesDeviceConditionParameters
This matches the exhibit where the @odata.type shows #Microsoft.Azure.Cdn.Models.DeliveryRulesDeviceConditionParameters. This parameter type is required for the IsDevice condition to evaluate device type properties. It supports operators like "Equal" and match values such as "iOS", "Mobile", "iPhone", etc.
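For illustration, the delivery rule might be shaped as follows in the template's JSON (expressed here as a Python dict; the rule name, operator, match values, and redirect action are assumptions, not taken from the exhibit):

```python
# Illustrative shape of a CDN delivery rule with an IsDevice condition.
# The rule name, operator, match values, and redirect action are hypothetical;
# the @odata.type pattern follows the exhibit described above.
rule = {
    "name": "iPhoneRedirect",
    "order": 1,
    "conditions": [
        {
            "name": "IsDevice",
            "parameters": {
                "@odata.type": "#Microsoft.Azure.Cdn.Models."
                               "DeliveryRuleIsDeviceConditionParameters",
                "operator": "Equal",
                "matchValues": ["Mobile"],
            },
        }
    ],
    "actions": [
        {
            "name": "UrlRedirect",
            "parameters": {
                "redirectType": "Found",
                "destinationProtocol": "Https",
            },
        }
    ],
}

print(rule["conditions"][0]["name"])
```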

Incorrect Option:

Condition name: DeliveryRuleRequestHeaderConditionParameters
This condition evaluates HTTP request headers but requires manual configuration of specific header names and values. While you could theoretically check HTTP_USER_AGENT, this approach is less reliable and more complex than using the built-in IsDevice condition. It also doesn't match the exhibit which shows IsDevice as the condition name.

Condition parameters: DeliveryRuleCookiesConditionParameters / DeliveryRulePostArgsConditionParameters
Cookie and POST arguments conditions are unrelated to device detection. They evaluate cookie values and POST request body parameters respectively. These cannot identify iPhone users and are completely irrelevant for this redirect scenario.

Reference:
Azure CDN rules engine device detection

Azure CDN rules engine conditions

Set device detection for Azure CDN

You provision virtual machines (VMs) as development environments.
One VM is stuck in a Windows update process. You attach the OS disk for the affected VM to a recovery VM. You need to correct the issue. In which order should you perform the actions? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.




Explanation:
This question tests your knowledge of recovering Azure VMs stuck in Windows Update pending state. When a VM fails to boot due to pending updates, you must attach the OS disk to a recovery VM, identify the problematic update, remove it, and then reattach the disk. The correct sequence follows Azure VM recovery best practices for Windows Update failures.

Correct Order:

1. Detach the OS disk and recreate the VM.
This is the initial recovery step. First, delete the original VM but retain its OS disk. This stops billing and allows the disk to be attached to a recovery VM for offline repair. The OS disk remains intact in the storage account.

2. Run the following command at an elevated command prompt:

dism /image:<attached OS disk>:\ /get-packages > C:\temp\Patch.txt

After attaching the OS disk to the recovery VM, mount it as a data disk. Navigate to the mounted Windows directory and export the list of installed packages to identify the pending update. This generates an inventory of updates needing removal.

3. Open the C:\temp\Patch.txt file and locate the update that is in a pending state.
Review the exported package list to find the specific update stuck in a pending state. Look for packages whose status indicates a pending installation or removal. This identifies the exact package name needed for removal.

4. Run the following command at an elevated command prompt:
dism /image:<attached OS disk>:\ /remove-package /packagename:...

With the problematic update identified, execute the package removal command. This removes the pending update from the offline Windows installation. After completion, detach the disk from recovery VM and recreate the original VM.

Incorrect Option Placement:
The commands cannot run before attaching the OS disk to the recovery VM

Package list must be generated before identifying the specific update

Removal command requires the exact package name from the analysis phase

Reference:
Troubleshoot Azure VM startup issues

Remove Windows updates offline

Attach OS disk to recovery VM

You are creating an app that will use CosmosDB for data storage. The app will process batches of relational data. You need to select an API for the app.
Which API should you use?

A. MongoDB API

B. Table API

C. Core (SQL) API

D. Cassandra API

C.   Core (SQL) API

Explanation:
This question tests your knowledge of Azure Cosmos DB APIs and their data modeling capabilities. The requirement specifies that the app will process batches of relational data. While Cosmos DB is a NoSQL database, the Core (SQL) API provides the most flexible querying capabilities including JOINs, subqueries, and complex filtering that can accommodate relational-like data structures within JSON documents.

Correct Option:

C. Core (SQL) API
The Core (SQL) API is the native API for Azure Cosmos DB. It supports querying JSON documents using SQL syntax, including JOIN operations, subqueries, and complex projections. This makes it the best choice when working with relational data patterns that need to be modeled in a NoSQL database. It also provides the richest query functionality, SDK support, and integration with Azure services.
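A short example of the query style this API supports (the container and field names are hypothetical; note that a Cosmos DB JOIN is a self-join over arrays nested within a single document, not a join across containers):

```python
# Illustrative Cosmos DB Core (SQL) API query. JOIN here is a self-join
# over an array nested inside each document. Container and field names
# are hypothetical.
query = """
SELECT c.containerId, s.port
FROM containers c
JOIN s IN c.stops
WHERE s.port = 'Rotterdam'
"""

# In application code this string would be passed to the SDK, e.g.
# container.query_items(query=query, enable_cross_partition_query=True)
print(query.strip())
```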

Incorrect Option:

A. MongoDB API
MongoDB API provides wire protocol compatibility with MongoDB but is designed for document storage with MongoDB-style queries. While it supports embedded documents and arrays, it does not provide native SQL-like JOIN capabilities for relational data patterns and is better suited for existing MongoDB workloads.

B. Table API
Table API is designed for key-attribute-store scenarios with simple lookup patterns. It supports only point reads and limited range queries. It does not support JOINs, complex queries, or relational data modeling. This API is intended for migrating Azure Table Storage workloads, not for processing relational data batches.

D. Cassandra API
Cassandra API is a wide-column store designed for high-scale, high-availability scenarios with denormalized data models. It supports CQL (Cassandra Query Language) but does not support JOINs or complex relational queries. It is optimized for write-heavy workloads with simple partition key-based access patterns.

Reference:

Azure Cosmos DB Core (SQL) API

Choose an API in Azure Cosmos DB

SQL JOIN queries in Cosmos DB

You deploy an Azure App Service web app. You create an app registration for the app in Azure Active Directory (Azure AD) and Twitter. The app must authenticate users and must use SSL for all communications. The app must use Twitter as the identity provider. You need to validate the Azure AD request in the app code. What should you validate?

A. HTTP response code

B. ID token header

C. ID token signature

D. Tenant ID

B.   ID token header

Explanation:
This question tests your understanding of authentication flow in Azure App Service when using multiple identity providers. The app uses Twitter as the identity provider but also has an Azure AD app registration. When Azure App Service handles authentication, it injects user claims into the X-MS-CLIENT-PRINCIPAL header. The question asks what to validate in the app code specifically for the Azure AD request.

Correct Option:

B. ID token header
When using App Service Authentication, Azure AD tokens are passed in the X-MS-TOKEN-AAD-ID-TOKEN header. The application code should validate this ID token header to verify the authenticity of the Azure AD request. The ID token contains claims about the authenticated user and is issued by Azure AD. Validating the token header ensures the request originates from a legitimate Azure AD authentication flow through App Service.
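A minimal sketch of consuming that header follows. The header name is the one App Service documents for the Azure AD provider; the decoding helper is illustrative and deliberately skips signature checks, since App Service validates the token before forwarding the request:

```python
import base64
import json

# Illustrative helper: read the ID token that App Service Authentication
# injects into the request and decode its claims. Signature validation is
# intentionally omitted here because the App Service auth layer performs it.

def id_token_claims(headers: dict) -> dict:
    token = headers.get("X-MS-TOKEN-AAD-ID-TOKEN")
    if token is None:
        raise PermissionError("request did not come through App Service auth")
    payload = token.split(".")[1]          # JWT: header.payload.signature
    padded = payload + "=" * (-len(payload) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))
```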

Incorrect Option:

A. HTTP response code
HTTP response codes indicate the status of the HTTP request itself (200, 401, 403, etc.) but do not provide any authentication or identity validation. Response codes cannot verify the authenticity of a user or token and are not used to validate Azure AD requests in application code.

C. ID token signature
While signature validation is important for token verification in general, when using App Service Authentication, the platform already validates the token signature before forwarding requests to your application. The app code receives the already-validated token header and does not need to perform cryptographic signature validation. This is handled by the App Service authentication layer.

D. Tenant ID
Tenant ID identifies which Azure AD directory issued the token. While this can be checked as an additional authorization constraint, it is not the primary validation required for the Azure AD request itself. The question asks what to validate in the app code for the request, and the immediate validation point is the presence and content of the ID token header.

Reference:
Work with user identities in Azure App Service authentication

Access user claims in App Service

Azure App Service authentication and authorization

You are maintaining an existing application that uses an Azure Blob GPv1 Premium
storage account. Data older than three months is rarely used.
Data newer than three months must be available immediately. Data older than a year must
be saved but does not need to be available immediately.
You need to configure the account to support a lifecycle management rule that moves blob
data to archive storage for data not modified in the last year.
Which three actions should you perform in sequence? To answer, move the appropriate
actions from the list of actions to the answer area and arrange them in the correct order.




Explanation:
This question tests your knowledge of Azure Storage account types and lifecycle management requirements. GPv1 Premium storage accounts do not support lifecycle management policies or blob tiering. To implement a rule that moves blobs to archive tier after one year, you must first migrate to a GPv2 account with the appropriate tier support. The correct sequence ensures both compatibility and cost optimization.

Correct Order:

1. Create a new GPv2 Standard account and set its default access tier level to cool
GPv2 Standard accounts support lifecycle management and all access tiers (hot, cool, archive). Setting the default tier to cool is appropriate for data older than three months that is rarely used but must be available immediately when accessed. This provides cost savings while maintaining low-latency access.

2. Copy the data to be archived to a Standard GPv2 storage account and then delete the data from the original storage account
Since you cannot convert a Premium GPv1 account directly to GPv2 with tiering support and data migration without downtime or manual effort is required, you must copy the existing blobs to the new GPv2 account. After successful copy and validation, delete the data from the original GPv1 Premium account to avoid duplicate costs.

3. Set a lifecycle management policy to move blobs to the archive tier
Once data resides in the GPv2 Standard account, you can configure a lifecycle management rule targeting blobs not modified in the last year. The rule transitions these blobs from cool (or hot) to archive tier, meeting the requirement to save data older than a year without immediate availability needs, while minimizing storage costs.

Incorrect Action Placement:
Upgrade the storage account to GPv2 – GPv1 Premium accounts cannot be upgraded directly to GPv2; you must create a new GPv2 account and migrate the data.
Change the storage account access tier from hot to cool – This is not possible in a GPv1 Premium account because it does not support access tiers; the action would fail if attempted.

Reference:
Azure Storage account overview

Lifecycle management policy support

Migrate to GPv2 storage account

You are developing a web application that runs as an Azure Web App. The web application
stores data in Azure SQL Database and stores files in an Azure Storage account. The web
application makes HTTP requests to external services as part of normal operations.
The web application is instrumented with Application Insights. The external services are
OpenTelemetry compliant.
You need to ensure that the customer ID of the signed in user is associated with all
operations throughout the overall system.
What should you do?

A. Create a new SpanContext with the TraceFlags value set to the customer ID for the signed in user.

B. On the current SpanContext, set the TraceId to the customer ID for the signed in user.

C. Add the customer ID for the signed in user to the CorrelationContext in the web application.

D. Set the header Ocp-Apim-Trace to the customer ID for the signed in user.

C.   Add the customer ID for the signed in user to the CorrelationContext in the web application.

Explanation:
This question tests your knowledge of distributed tracing and correlation in Application Insights with OpenTelemetry-compliant services. The requirement is to associate the customer ID of the signed-in user with all operations throughout the overall system — including Azure SQL, Blob Storage, and external HTTP services. This requires propagating user context across service boundaries using correlation context, not modifying trace IDs or span context directly.

Correct Option:

C. Add the customer ID for the signed in user to the CorrelationContext in the web application.
CorrelationContext (or Baggage in OpenTelemetry) is designed to propagate key-value pairs across service boundaries. Adding the customer ID to CorrelationContext ensures it is attached to all outgoing requests, logs, dependencies, and traces throughout the distributed system. This allows end-to-end correlation of user activity across Azure SQL, Blob Storage, and external OpenTelemetry-compliant services without breaking trace integrity.
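Conceptually, the propagation looks like this: context entries are serialized into the W3C baggage header on every outgoing request so downstream services can attach them to their own telemetry. The sketch below is pure Python with illustrative helper names, not an SDK API:

```python
# Conceptual sketch of CorrelationContext / OpenTelemetry Baggage propagation.
# Entries are serialized into the W3C `baggage` header on outgoing requests;
# helper names are illustrative, not part of any SDK.

def to_baggage_header(entries: dict) -> str:
    """Serialize context entries as a W3C baggage header value."""
    return ",".join(f"{k}={v}" for k, v in entries.items())

def from_baggage_header(header: str) -> dict:
    """Parse a baggage header back into context entries."""
    return dict(item.split("=", 1) for item in header.split(","))

outgoing = to_baggage_header({"customer_id": "c-12345"})
print(outgoing)  # customer_id=c-12345
assert from_baggage_header(outgoing) == {"customer_id": "c-12345"}
```

In a real OpenTelemetry setup the SDK's baggage API and propagators handle this serialization automatically; the point is that the customer ID travels with the trace rather than replacing any part of it.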

Incorrect Option:

A. Create a new SpanContext with the TraceFlags value set to the customer ID for the signed in user.
TraceFlags is a byte field used for sampling decisions (e.g., recorded vs. not recorded), not for storing business context like customer IDs. Setting it to a customer ID is invalid and would break trace sampling behavior. SpanContext is also not intended for propagating user identifiers.

B. On the current SpanContext, set the TraceId to the customer ID for the signed in user.
TraceId is a globally unique 128-bit identifier for a trace, generated by the system. Manually setting it to a customer ID violates the OpenTelemetry specification, breaks trace hierarchy, and prevents proper distributed tracing. TraceId must remain system-generated and unique per trace.

D. Set the header Ocp-Apim-Trace to the customer ID for the signed in user.
Ocp-Apim-Trace is an Azure API Management header used for debugging and tracing API requests within APIM only. It is not recognized by Application Insights, Azure SQL, Blob Storage, or OpenTelemetry SDKs, and does not propagate user context across the overall system.

Reference:
Custom operations and correlation in Application Insights
OpenTelemetry Baggage

Correlation headers for distributed tracing

Your company is designing an application named App1 that will use data from Azure SQL Database. App1 will be accessed over the internet by many users.
You need to recommend a solution for improving the performance ofApp1.
What should you include in the recommendation?

A. Azure HPC cache

B. ExpressRoute

C. a CON profile

D. Azure Cache for Redis

D.   Azure Cache for Redis

Explanation:
This question tests your knowledge of Azure performance optimization techniques for internet-facing applications that rely on Azure SQL Database. The requirement is to improve the performance of an application accessed by many users. Azure Cache for Redis is a high-performance, low-latency caching solution that reduces database load and improves response times by storing frequently accessed data in memory.

Correct Option:

D. Azure Cache for Redis
Azure Cache for Redis provides in-memory data caching, significantly reducing latency and database read/write operations. For an application with many concurrent users accessing Azure SQL Database, implementing Redis cache for frequently queried data (such as lookups, session state, or read-heavy workloads) offloads database pressure and improves overall application responsiveness. It supports both caching and session state provider patterns.
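The usual integration is the cache-aside pattern, sketched below with a plain dict standing in for Azure Cache for Redis and a stubbed database call (all names are hypothetical):

```python
# Cache-aside pattern sketch: check the cache first, fall back to the
# database on a miss, then populate the cache. A dict stands in for
# Azure Cache for Redis; the database query is a stub.
cache: dict = {}

def query_database(user_id: str) -> dict:
    # placeholder for a round-trip to Azure SQL Database
    return {"id": user_id, "name": "Sample User"}

def get_profile(user_id: str) -> dict:
    key = f"profile:{user_id}"
    if key in cache:                   # cache hit: no database round-trip
        return cache[key]
    record = query_database(user_id)   # cache miss: query the database
    cache[key] = record                # populate the cache for later readers
    return record

first = get_profile("u1")   # miss -> database
second = get_profile("u1")  # hit  -> served from memory
assert first == second
```

Production code would also set a time-to-live on each cache entry so stale data eventually expires.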

Incorrect Option:

A. Azure HPC cache
Azure HPC Cache is designed for high-performance computing workloads, specifically to accelerate file access for Linux-based HPC applications. It is not intended for general web application scenarios or Azure SQL Database performance improvement. This is an over-engineered and incorrect solution for App1.

B. ExpressRoute
ExpressRoute provides private, dedicated network connectivity between on-premises infrastructure and Azure. Since App1 is accessed over the internet by many users, ExpressRoute does not improve performance for end-users connecting from public networks. It is also not a caching or query optimization solution.

C. a CON profile
This appears to be a typographical error in the question. If referring to CDN profile (Content Delivery Network), CDN is used for caching static content like images, CSS, and JavaScript files at edge locations. It does not cache dynamic database query results or improve Azure SQL Database performance. If referring to CNAME, it is a DNS record type, not a performance solution.

Reference:
Azure Cache for Redis

Caching guidance for Azure SQL Database


Developing Solutions for Microsoft Azure Practice Exam Questions