Topic 3: Misc. Questions
You need to use an Azure Sentinel analytics rule to search for specific criteria in Amazon
Web Services (AWS) logs and to generate incidents.
Which three actions should you perform in sequence? To answer, move the appropriate
actions from the list of actions to the answer area and arrange them in the correct order.

Explanation:
To create an analytics rule in Azure Sentinel that queries AWS logs, you must first ingest those logs. The process logically begins by adding the AWS data connector, then building the custom rule to query that data, and finally configuring the rule's output to create incidents.
Correct Option:
The correct sequence of actions is:
Add the Amazon Web Services connector.
This is the foundational step. You must first connect your AWS environment to Azure Sentinel to start ingesting AWS CloudTrail and other logs into the Log Analytics workspace. Without this data, no analytics rule can query it.
From Analytics in Azure Sentinel, create a custom analytics rule that uses a scheduled query.
Since the requirement is to search for specific criteria in AWS logs, you need a custom rule, not a built-in template. Selecting "Scheduled query" is the standard method for creating a custom KQL query that runs periodically to find the defined criteria.
Set the alert logic. After creating the rule, you configure its core parameters:
the KQL query (to search for the specific criteria in the AWS tables), the query schedule, and, most importantly, the Incident creation settings to ensure the query results generate Azure Sentinel incidents.
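As a sketch, a scheduled-rule query over the ingested AWS data might look like the following. The AWSCloudTrail table is the one populated by the Amazon Web Services connector; the specific filter criteria here are illustrative, not taken from the question:

```kusto
// Illustrative example: flag AWS console sign-ins that did not use MFA
AWSCloudTrail
| where EventName == "ConsoleLogin"
| extend MFAUsed = tostring(parse_json(AdditionalEventData).MFAUsed)
| where MFAUsed != "Yes"
| project TimeGenerated, UserIdentityUserName, SourceIpAddress, AWSRegion
| extend AccountCustomEntity = UserIdentityUserName, IPCustomEntity = SourceIpAddress
```

In the rule's alert logic, this query would be paired with a schedule (for example, run every hour over the last hour of data) and with incident creation enabled so that matches generate Sentinel incidents.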
Incorrect Option:
Create a rule by using the Changes to Amazon VPC settings rule template:
While this is a specific AWS-related template, the requirement is to search for specific criteria, which implies a custom search. Using a pre-built template may not match the unspecified criteria.
From Analytics in Azure Sentinel, create a Microsoft incident creation rule:
This type of rule is for ingesting alerts directly from Microsoft security products (like Defender for Cloud) and converting them into Sentinel incidents. It does not involve writing a KQL query against AWS logs.
Select a Microsoft security service:
This is irrelevant for analyzing AWS logs.
Add the Syslog connector:
The Syslog connector is for ingesting logs from network appliances or Linux servers, not for AWS platform logs like CloudTrail. AWS logs are ingested via the dedicated Amazon Web Services connector.
Reference:
Microsoft Learn, "Connect AWS CloudTrail to Azure Sentinel" and "Create custom analytics rules to detect threats". The documented workflow is to first set up the data connector, then create a scheduled query rule in the Analytics blade, where you define the query logic and incident creation settings.
You have the resources shown in the following table.

Explanation:
This question addresses the problem of event duplication in a CEF (Common Event Format) and Syslog hybrid environment. CEF messages are essentially Syslog messages with a specific structure. If both the CEF forwarder and Syslog forwarder are sending the same data to the Log Analytics workspace, or if the Log Analytics agent is configured to collect both standard Syslog and CEF, duplication occurs.
Correct Option:
From the Syslog configuration, remove the facilities that send CEF messages. → SW1
In the Azure Sentinel workspace (SW1), when you configure the Syslog data connector, you specify which facilities and severities to collect. If CEF1 is already forwarding CEF logs (which use facilities like "local4") and you also have the Syslog connector configured to collect the same facilities, duplicate events will appear in SW1. Removing those facilities from the Syslog connector configuration eliminates one source of duplication.
From the Log Analytics agent, disable Syslog synchronization. → CEF1
The Log Analytics agent on CEF1 can synchronize its Syslog configuration with Azure. If this is enabled, the agent may start collecting Syslog messages directly (including from Server2 and possibly duplicate CEF messages if misconfigured) and forward them as standard Syslog, causing duplicates. Disabling this synchronization on the CEF forwarder helps prevent the agent from collecting and sending logs that are already being forwarded via the CEF pipeline.
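Microsoft's CEF connector documentation describes disabling the agent's automatic configuration synchronization on the forwarder with a helper script. The following is an operational sketch (the paths are those used by the Linux Log Analytics/OMS agent; run on CEF1, not in the portal):

```shell
# Disable automatic Syslog configuration sync on the Log Analytics (OMS) agent,
# then restart the agent so the local settings take effect
sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/OMS_MetaConfigHelper.py --disable'
sudo /opt/microsoft/omsagent/bin/service_control restart
```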
Incorrect Option:
Server1 / Server2:
These are source servers sending logs. The actions described are configuration changes to either the workspace (SW1) Syslog data connector settings or the Log Analytics agent settings on the forwarder (CEF1). Server1 and Server2 are not directly configured to fix the duplication issue at the aggregation point.
(For Action 1) CEF1:
While you can configure rsyslog or syslog-ng on CEF1, the question's phrase "From the Syslog configuration" in the context of Azure Sentinel typically means the workspace data connector settings in the portal, not the local syslog daemon configuration on CEF1.
(For Action 2) SW1:
Disabling Syslog synchronization is a setting on the Log Analytics agent itself (on the VM), not a workspace-level setting.
Reference:
Microsoft Learn, "CEF and Syslog data duplication prevention" and "Log Analytics agent configuration". Documentation recommends:
In the Syslog data connector (workspace level), do not collect facilities that are being used to send CEF data.
On CEF forwarder machines, disable Syslog collection in the agent to avoid duplicate ingestion.
You have the following SQL query.

Explanation:
This question tests KQL query interpretation in Azure Sentinel, focusing on entity mapping and watchlist capabilities. The query uses a watchlist named 'Bad_IPs', parses Sysmon network events, and compares IPs against that list.
Correct Option:
The UserName field is set as the account entity: Yes.
The query explicitly maps the UserName field to the built-in entity field AccountCustomEntity using the syntax AccountCustomEntity = UserName. This tells Azure Sentinel to treat the value in the UserName column as the Account entity for incident enrichment.
The watchlist cannot be updated after it is created: No.
This statement is false. Watchlists in Azure Sentinel are fully manageable after creation. You can add, modify, or delete watchlist items through the Azure portal, API, or PowerShell. Queries referencing the watchlist dynamically reflect the updated data.
The IPList variable is set as the IP address entity: No.
This statement is false. The IPList variable is populated with data from a watchlist using the _GetWatchlist() function. It is used as a lookup table for IP comparison in the in operator. It is not mapped to a Sentinel entity. Entity mapping requires using specific column identifiers like IPCustomEntity = SourceIP, which is not present in this query for the variable itself.
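A sketch of the kind of query the question describes follows. The watchlist name Bad_IPs comes from the question; the watchlist column name and the EventData parsing are illustrative:

```kusto
// Pull the IP column from the watchlist, then flag Sysmon network-connection
// events (EventID 3) whose destination IP appears in the list
let IPList = _GetWatchlist('Bad_IPs') | project IPAddress;
Event
| where Source == "Microsoft-Windows-Sysmon" and EventID == 3
| parse EventData with * 'DestinationIp">' DestinationIp '<' *
| where DestinationIp in (IPList)
| extend AccountCustomEntity = UserName, IPCustomEntity = DestinationIp
```

Note that only the `extend` line performs entity mapping; the `let` statement merely builds a lookup list for the `in` comparison.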
Incorrect Option:
(For statement 1) No:
Incorrect because AccountCustomEntity = UserName is a direct and explicit entity mapping assignment.
(For statement 2) Yes:
Incorrect because watchlists are designed to be updatable resources; this contradicts documented functionality.
(For statement 3) Yes:
Incorrect. While the query uses IP addresses, the IPList variable contains the watchlist data, not an entity mapping. Sentinel entities are designated using *CustomEntity fields in the output projection.
Reference:
Microsoft Learn, "Map data fields to entities in Azure Sentinel" and "Use watchlists in Azure Sentinel." Documentation confirms entity mapping uses *CustomEntity suffixes and watchlists support CRUD operations post-creation.
You have an Azure subscription.
You need to delegate permissions to meet the following requirements:
Enable and disable Azure Defender.
Apply security recommendations to a resource.
The solution must use the principle of least privilege.
Which Azure Security Center role should you use for each requirement? To answer, drag
the appropriate roles to the correct requirements. Each role may be used once, more than
once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.

Explanation:
This question requires selecting the appropriate Azure role for specific Microsoft Defender for Cloud tasks while adhering to the principle of least privilege. The key distinction is that enabling/disabling Azure Defender (formerly Azure Security Center's Defender plans) is a subscription-level security configuration, while applying security recommendations to a resource can be performed with more granular permissions.
Correct Option:
Enable and disable Azure Defender: Security Admin
The Security Admin role (a built-in Azure RBAC role) at the subscription scope provides the exact permissions needed to view and update security policies, including turning Defender plans on or off. It does not grant full subscription management rights, making it the least-privilege choice for this task.
Apply security recommendations to a resource: Resource Group Owner
The Resource Group Owner role grants full permissions to manage all resources within a specific resource group. This includes the ability to apply security recommendations (e.g., enabling encryption, configuring NSGs) on resources within that scope. It is more restrictive than subscription-level roles and follows least privilege by limiting the user's permissions to only the necessary resource group(s).
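As an operational sketch, the two role assignments could be made with the Azure CLI. The user names, subscription ID, and resource group name below are placeholders:

```shell
# Security Admin at subscription scope: can enable/disable Defender plans
az role assignment create --assignee "user1@contoso.com" \
  --role "Security Admin" \
  --scope "/subscriptions/<subscription-id>"

# Owner scoped to a single resource group: can remediate recommendations there
az role assignment create --assignee "user2@contoso.com" \
  --role "Owner" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<rg-name>"
```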
Incorrect Option:
Subscription Contributor:
While a Subscription Contributor can enable/disable Azure Defender and apply recommendations, this role grants broad permissions to manage all Azure resources in the subscription. This exceeds the principle of least privilege when compared to the more specific Security Admin role for Defender configuration.
Subscription Owner:
This role has full administrative access, including the ability to assign roles. It is excessive privilege for both tasks.
(For enabling/disabling Defender) Resource Group Owner:
This role does not grant permissions to modify subscription-level security policies. Azure Defender settings are configured at the subscription level, not at the resource group scope.
(For applying recommendations) Security Admin:
While a Security Admin can apply recommendations, this role includes broader security policy permissions that are unnecessary if the user only needs to remediate resources within a specific resource group. The Resource Group Owner provides sufficient permissions with a narrower scope.
Reference:
Microsoft Learn, "Roles and permissions in Microsoft Defender for Cloud." The documentation specifies that turning Defender plans on/off requires Subscription Contributor, Owner, or Security Admin permissions. For remediating recommendations, Resource Group Owner/Contributor or Security Admin are valid, but Resource Group Owner follows least privilege when scope is limited to specific resources.
You open the Cloud App Security portal as shown in the following exhibit.

Explanation:
The goal is to remediate risk for an unsanctioned application (Launchpad) discovered through Microsoft Defender for Cloud Apps (formerly Microsoft Cloud App Security). The standard workflow to block an unsanctioned app involves identifying the app, marking it as unsanctioned, generating a block script, and deploying that script to your network appliances (e.g., Zscaler, Blue Coat, or on-premises proxy servers).
Correct Option:
The correct sequence of actions is:
Select the app.
From the Cloud Discovery dashboard, you must first select the specific application (Launchpad) you want to remediate. This is the starting point for any app-specific action.
Tag the app as Unsanctioned.
Tagging the app as Unsanctioned is the remediation action. This tells Defender for Cloud Apps that this app should be blocked. It also updates the Cloud Discovery dashboard to reflect the unsanctioned status and can trigger automatic block scripts.
Generate a block script.
After tagging the app as unsanctioned, you generate a blocking script. Defender for Cloud Apps can create customized block scripts compatible with your specific security appliances (e.g., Palo Alto, Cisco, Fortinet) to enforce the block at the network level.
Run the script on the source appliance.
The final step is to deploy and execute the generated script on your network proxy or firewall appliance. This physically enforces the block, preventing users from accessing the unsanctioned app.
Incorrect Option:
Tag the app as Sanctioned: This would mark the app as approved and safe, which is the opposite of remediation. Sanctioned tagging is for apps you want to allow and potentially integrate with.
Run the script in Azure Cloud Shell: The block script is intended to run on your network appliance (on-premises firewall/proxy), not in Azure Cloud Shell. The script format is specific to the appliance vendor, not an Azure environment.
(Any sequence that does not start with selecting the app): You cannot tag or generate scripts for an app without first selecting it from the discovered apps list.
(Any sequence that tags unsanctioned after generating the script): The tagging action must precede script generation because the script is generated based on the unsanctioned tag.
Reference:
Microsoft Learn, "Govern discovered apps in Microsoft Defender for Cloud Apps." The documented workflow is: 1. Select the app. 2. Tag as unsanctioned. 3. Generate block script. 4. Deploy script to appliance.
From Azure Sentinel, you open the Investigation pane for a high-severity incident as shown
in the following exhibit.

Explanation:
This question tests familiarity with the Azure Sentinel Investigation graph interface. The investigation graph visually maps entities (users, hosts, IPs, processes) related to an incident. Hovering over an entity provides specific quick information, and there are specific UI controls to access additional investigation artifacts like bookmarks.
Correct Option:
If you hover over the virtual machine named vm1, you can view [the open ports on the host].
In the Azure Sentinel investigation graph, hovering over a device/host entity displays a context panel that includes key information about that asset. This includes open ports, IP addresses, installed software (depending on data availability), and any alerts directly associated with the entity. It does not display full log event details or NSG rules.
If you select [the incident ID], you can navigate to the bookmarks related to the incident.
Within the investigation pane, selecting the incident ID navigates back to the full incident page, which lists the associated entities, alerts, and any bookmarks created during hunting or investigation. Among the options offered, this is the only selection that leads to the incident's bookmarks.
Incorrect Option:
the inbound network security group (NSG) rules:
This information is not displayed when hovering over a VM in the investigation graph. NSG rules are Azure networking configurations viewed in the Azure portal, not live entity context in Sentinel.
the last five Windows security log events:
Hovering does not pull recent log events; that level of detail requires advanced hunting queries or the entity's timeline page.
the running processes:
While process entities may appear in the graph if Sysmon or EDR data is present, hovering over the VM entity itself typically shows network and system properties, not a live list of running processes.
(For second drop-down) the vm1 node / a user node / an IP address:
Selecting graph entities expands their details or shows connections but does not directly navigate to the incident bookmarks. Bookmarks are associated with the incident, accessed via the incident page.
Reference:
Microsoft Learn, "Investigate incidents with Azure Sentinel." Documentation describes the investigation graph's hover functionality displaying entity information such as open ports, IP addresses, and related alerts. Incident bookmarks are accessed from the incident's full details page.
You purchase a Microsoft 365 subscription.
You plan to configure Microsoft Cloud App Security.
You need to create a custom template-based policy that detects connections to Microsoft
365 apps that originate from a botnet network.
What should you use? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Explanation:
This question requires identifying the correct policy template type and filter criteria in Microsoft Defender for Cloud Apps to detect connections from botnet networks. Botnet connections are detected via threat intelligence feeds and require a policy that monitors activities for known malicious IP addresses.
Correct Option:
Policy template type:
Anomaly detection policy
Anomaly detection policies in Defender for Cloud Apps are designed to identify unusual or suspicious activities based on predefined machine learning models and threat intelligence. The Activity from botnet policy is a built-in anomaly detection policy that alerts when connections originate from IP addresses associated with botnets or Tor networks.
Filter based on:
IP address tag
Within the anomaly detection policy configuration, the condition to detect botnet activity relies on the IP address tag filter. Defender for Cloud Apps tags IP addresses with categories like "Botnet," "Anonymous proxy," or "Tor." Selecting the appropriate IP tag (e.g., "Botnet") allows the policy to specifically identify connections from botnet-associated IP addresses.
Incorrect Option:
Access policy:
Access policies are used to enforce real-time session controls and conditional access based on user, device, or location. They are not template-based for detecting botnet activity; they are created for proactive access restrictions.
Activity policy:
While activity policies can monitor specific events and apply filters, the preconfigured template for botnet detection exists under Anomaly detection. Activity policies are more commonly used for custom monitoring of specific administrative activities or unusual file shares.
Source:
Filtering by source alone (e.g., IP or app) is too broad. The specific filter needed is the IP address tag to target the botnet category.
User agent string:
This filter identifies browser/client types, not botnet connections. Botnet detection is based on IP reputation, not user agent strings.
Reference:
Microsoft Learn, "Anomaly detection policies in Microsoft Defender for Cloud Apps." The documentation specifies that the "Activity from botnet" policy is an anomaly detection policy that uses IP address tags to identify malicious network sources.
You have an Azure subscription that contains a Microsoft Sentinel workspace.
You need to create a hunting query using Kusto Query Language (KQL) that meets the
following requirements:
• Identifies an anomalous number of changes to the rules of a network security group
(NSG) made by the same security principal
• Automatically associates the security principal with a Microsoft Sentinel entity
How should you complete the query? To answer, select the appropriate options in the
answer area. NOTE: Each correct selection is worth one point.

Explanation:
This question requires building a KQL hunting query to detect anomalous NSG rule changes by the same security principal. The requirements include: using the correct data source, filtering for successful write operations on NSG rules, summarizing/aggregating to find anomalies, and mapping the security principal to a Sentinel entity.
Correct Option:
First dropdown: AzureActivity
AzureActivity is the correct table for Azure resource management plane logs, including operations on NSG rules
(Microsoft.Network/networkSecurityGroups/securityRules/write). AuditLogs is for Azure AD, not resource operations. AzureDiagnostics is for resource-specific diagnostic logs, not control plane activities.
Second dropdown: where
The where operator filters the table to rows where the operation name matches NSG rule writes and the status is "Succeeded". The snippet uses in~ (the case-insensitive in operator), which is valid, together with a status check such as ActivityStatusValue == "Succeeded".
Third dropdown: extend
The requirement is to identify an anomalous number of changes, which the snippet handles with make-series and dcount to build a time series. The step immediately after make-series is extend, used to add a timestamp column (for example, extend timestamp = todatetime(EventSubmissionTimestamp[0])) so that the results are compatible with Sentinel entity mapping and time charts.
Fourth dropdown: AccountCustomEntity = Caller
To automatically associate the security principal with a Sentinel entity, the query must map the Caller field (which contains the UPN or object ID of the principal who performed the operation) to the built-in entity field AccountCustomEntity. This is the standard method for entity mapping in Sentinel analytics/hunting queries.
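Putting the selections together, the completed query might read roughly as follows. This is a sketch based on common Sentinel hunting patterns, not the verbatim exhibit; the lookback window, step size, and anomaly function are illustrative:

```kusto
// Count NSG rule writes per caller per day, then flag anomalous spikes
AzureActivity
| where OperationNameValue in~ ("Microsoft.Network/networkSecurityGroups/securityRules/write")
    and ActivityStatusValue == "Succeeded"
| make-series Changes = dcount(CorrelationId) default = 0
    on TimeGenerated from ago(14d) to now() step 1d by Caller
| extend (Anomalies, Score, Baseline) = series_decompose_anomalies(Changes)
| extend AccountCustomEntity = Caller
```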
Incorrect Option:
AuditLogs: This table contains Azure AD sign-in and audit logs, not Azure resource management operations.
AzureDiagnostics: This table contains resource logs emitted by Azure services (like NSG flow logs), not control plane operations like rule changes.
in~ vs ==: The filter uses in~, which is correct for case-insensitively matching one or more operation names; the dropdown selection, however, is the operator itself, not the value.
parse-where: This operator extracts structured data from string columns while filtering out non-matching rows. It is not applicable for timestamp extension or entity mapping.
where (for the third dropdown): A second filter is not what the query needs after make-series; the required step is adding a calculated timestamp column, which is done with extend.
Reference:
Microsoft Learn, "AzureActivity table schema" and "Best practices for hunting queries in Azure Sentinel." The AzureActivity table records all control plane operations, and AccountCustomEntity = Caller is the documented method for account entity mapping.
You are informed of an increase in malicious email being received by users.
You need to create an advanced hunting query in Microsoft 365 Defender to identify
whether the accounts of the email recipients were compromised. The query must return the
most recent 20 sign-ins performed by the recipients within an hour of receiving the known
malicious email.
How should you complete the query? To answer, select the appropriate options in the
answer area.
NOTE: Each correct selection is worth one point.

Explanation:
This question requires building an advanced hunting query in Microsoft 365 Defender that identifies users who received malicious emails and then checks if those same users logged in within 60 minutes of receiving the email. The query needs to select the correct tables for both email data and identity logon events, and then join them on the user account identifier.
Correct Option:
First dropdown (MaliciousEmails table): EmailEvents
The EmailEvents table contains information about email delivery, including the recipient, sender, subject, and importantly, malware filter verdicts. To identify emails with malware, you must query the EmailEvents table where MalwareFilterVerdict == "Malware".
Second dropdown (MaliciousEmails project): AccountName = tostring(split(RecipientEmailAddress, "@")[0])
The split() function extracts the username portion of the recipient's email address (the part before the @ symbol) to create a simplified AccountName field that can be joined with the IdentityLogonEvents table, which typically stores usernames rather than full email addresses.
Third dropdown (Join table): IdentityLogonEvents
The IdentityLogonEvents table contains interactive and non-interactive sign-in events from Azure AD and Active Directory. To identify account compromise (sign-ins after receiving malicious email), this is the correct table to join with the malicious email recipients.
Fourth dropdown (Join project): project LogonTime = Timestamp, AccountName, DeviceName
The join must project the relevant fields from the IdentityLogonEvents table. The Timestamp is aliased as LogonTime, and the AccountName field is required as the join key. DeviceName is included for additional context about where the sign-in occurred.
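Assembled, the query might look like the sketch below. Field names follow the Microsoft 365 Defender advanced hunting schema; the 60-minute window and 20-row limit come from the question, while the exact projections are illustrative:

```kusto
// Recipients of malware-verdict emails, joined to their sign-ins
// within one hour of receiving the email
let MaliciousEmails = EmailEvents
    | where MalwareFilterVerdict == "Malware"
    | project EmailTime = Timestamp,
        AccountName = tostring(split(RecipientEmailAddress, "@")[0]);
MaliciousEmails
| join (
    IdentityLogonEvents
    | project LogonTime = Timestamp, AccountName, DeviceName
) on AccountName
| where (LogonTime - EmailTime) between (0min .. 60min)
| top 20 by LogonTime desc
```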
Incorrect Option:
EmailAttachmentInfo:
This table contains information about email attachments, including file names and hashes. While useful for hunting based on attachment properties, it does not contain the MalwareFilterVerdict field. Malware verdicts are stored in the EmailEvents table.
EmailEvents (for the join):
The query already uses EmailEvents to get the malicious email data. Joining it again would be redundant and would not provide sign-in information.
IdentityLogonEvents (for MaliciousEmails):
This table contains sign-in events, not email data. It cannot be used to identify emails with malware.
(For the join projection) Projections that omit AccountName or Timestamp:
DeviceName adds useful context, but the projection must retain AccountName as the join key and Timestamp (aliased as LogonTime) to compute the time difference; any projection missing either field breaks the join or the time-window filter.
Reference:
Microsoft Learn, "Advanced hunting schema reference for Microsoft 365 Defender." Documentation specifies:
EmailEvents contains MalwareFilterVerdict field
IdentityLogonEvents contains Azure AD sign-in events
Joining these tables on user principal names is the standard pattern for identifying post-delivery compromise
Your company uses line-of-business apps that contain Microsoft Office VBA macros.
You plan to enable protection against downloading and running additional payloads from
the Office VBA macros as additional child processes.
You need to identify which Office VBA macros might be affected.
Which two commands can you run to achieve the goal? Each correct answer presents a
complete solution.
NOTE: Each correct selection is worth one point.
A. Option A
B. Option B
C. Option C
D. Option D
Answer: B and C
Explanation:
The goal is to identify which Office VBA macros might be affected by a planned ASR rule, not to enable or enforce the rule. To identify impact, you must run the rule in Audit mode, which logs events without blocking them. The correct rule GUID for "Block Office applications from creating child processes" is D4F940AB-401B-4EFC-AADC-AD5F3C50688A.
Correct Option:
B. Set-MpPreference -AttackSurfaceReductionRules_Ids D4F940AB-401B-4EFC-AADC-AD5F3C50688A -AttackSurfaceReductionRules_Actions AuditMode
Set-MpPreference overwrites the entire ASR rule configuration. This command sets the specified rule to AuditMode, which will log detection events without blocking. This allows you to identify which macros would be affected if the rule were enabled.
C. Add-MpPreference -AttackSurfaceReductionRules_Ids D4F940AB-401B-4EFC-AADC-AD5F3C50688A -AttackSurfaceReductionRules_Actions AuditMode
Add-MpPreference appends to the existing ASR rule configuration without overwriting other rules. This command also sets the specified rule to AuditMode, achieving the same goal of logging without blocking. Both B and C are valid because they place the rule in audit mode, enabling impact assessment.
Incorrect Option:
A. Add-MpPreference ... Actions Enabled
This command enables (blocks) the ASR rule. It does not help identify which macros might be affected; it immediately enforces blocking, which could disrupt business operations without prior testing.
D. Set-MpPreference ... Actions Enabled
This command also enables (blocks) the ASR rule and overwrites existing configuration. Like option A, it enforces the rule rather than auditing it, failing the requirement to identify affected macros.
Reference:
Microsoft Learn, "Enable Attack Surface Reduction rules" and "Audit ASR rules." Documentation specifies that to test impact, you configure the rule to Audit mode using either Set-MpPreference (overwrite) or Add-MpPreference (append) with -AttackSurfaceReductionRules_Actions AuditMode.
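Once the rule is running in audit mode, the macros that would have been blocked can be reviewed in the Windows Defender operational event log. As a sketch (event IDs 1121 and 1122 are the documented block and audit events for ASR rules):

```powershell
# List ASR rule events: 1122 = rule audited (would have blocked), 1121 = rule blocked
Get-WinEvent -LogName "Microsoft-Windows-Windows Defender/Operational" |
    Where-Object { $_.Id -in 1121, 1122 } |
    Select-Object TimeCreated, Id, Message
```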