Developing Solutions for Microsoft Azure - Knowledge Check - Part2

71. You are developing an Azure Function App by using Visual Studio. The app will process orders input by an Azure Web App. The web app places the order information into Azure Queue Storage.
You need to review the Azure Function App code shown below.

public static class OrderProcessor
{
    [FunctionName("ProcessOrders")]
    public static void ProcessOrders(
        [QueueTrigger("incoming-orders")] CloudQueueMessage queueItem,
        [Table("Orders")] ICollector<Order> tableBindings,
        TraceWriter log)
    {
        log.Info($"Processing Order: {queueItem.Id}");
        log.Info($"Queue Insertion Time: {queueItem.InsertionTime}");
        log.Info($"Queue Expiration Time: {queueItem.ExpirationTime}");
        tableBindings.Add(JsonConvert.DeserializeObject<Order>(queueItem.AsString));
    }

    [FunctionName("ProcessOrders-Fail")]
    public static void ProcessFailedOrders(
        [QueueTrigger("incoming-orders-fail")] CloudQueueMessage queueItem,
        TraceWriter log)
    {
        log.Error($"Failed to process order: {queueItem.AsString}");
        . . .
    }

}

Which of the following statements are true?
a. The code will log the time that the order was processed from the queue.
b. When the ProcessOrders function fails, the function will retry up to five times for a given order, including the first try.
c. When there are multiple orders in the queue, a batch of orders will be retrieved from the queue and the ProcessOrders function will run multiple instances concurrently to process the orders.
d. The ProcessOrders function will output the order to an Orders table in Azure Table Storage.

72. You are developing a solution for a hospital to support the following use cases:
- The most recent patient status details must be retrieved even if multiple users in different locations have updated the patient record.
- Patient health monitoring data retrieved must be the current version or the prior version.
- After a patient is discharged and all charges have been assessed, the patient billing record contains the final charges.
You provision a Cosmos DB NoSQL database and set the default consistency level for the database account to Strong. You set the value for Indexing Mode to
Consistent.
You need to minimize latency and any impact to the availability of the solution. You must override the default consistency level at the query level to meet the required consistency guarantees for the scenarios.
Which consistency levels should you implement? Each consistency level may be used once, more than once, or not at all.

--> Return the most recent patient status
--> Return the health monitoring data that is no less than one version behind
--> After patient is discharged and all charges are assessed, retrieve the correct billing data with the final charges.

a. Strong, Bounded Staleness, Eventual
b. Consistent Prefix, Strong, Bounded Staleness
c. Eventual, Strong, Consistent Prefix
d. Bounded Staleness, Eventual, Strong

73. You are configuring a development environment for your team. You deploy the latest Visual Studio image from the Azure Marketplace to your Azure subscription.
The development environment requires several software development kits (SDKs) and third-party components to support application development across the organization. You install and customize the deployed virtual machine (VM) for your development team. The customized VM must be saved to allow provisioning of a new team member development environment.
You need to save the customized VM for future provisioning.
Which tools or services should you use for each of the following tasks?

- Generalize the VM
- Store images
a. Azure PowerShell, Azure Blob Storage
b. Visual Studio Command Prompt, Azure Data Lake Storage
c. Azure Migrate, Azure File Storage
d. Azure Backup, Azure Table Storage

74. A company uses Azure API Management to expose some of its services. Each developer consuming APIs must use a single key to obtain access to various APIs without requiring approval from the API publisher. Which solution should you recommend?
a. Define a subscription with all APIs scope.
b. Define a subscription with product scope.
c. Restrict access based on caller IPs.
d. Restrict APIs based on client certificate.

75. You manage an Azure API Management instance. You need to limit the maximum number of API calls allowed from a single source for a specific time interval. What should you configure?
a. Product
b. Policy
c. Subscription
d. API

Answers:
71-b,c,d
a - false - The code logs the item's queue insertion time and expiration time; it does not log the time at which the item was processed.
b - true - maxDequeueCount is the number of times the runtime tries to process a message before moving it to the poison queue; the default value is 5, and that count includes the first attempt (see the sketch below).
c - true - When there are multiple queue messages waiting, the queue trigger retrieves a batch of messages and invokes function instances concurrently to process them. By default, the batch size is 16. When the number being processed drops to 8, the runtime gets another batch and starts processing those messages, so the maximum number of concurrent messages being processed per function on one virtual machine (VM) is 24.
d - true - The line tableBindings.Add(JsonConvert.DeserializeObject<Order>(queueItem.AsString)); writes each deserialized order to the Orders table through the Table output binding.
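
For illustration, here is a minimal sketch of a companion function bound to the poison queue. The Functions runtime moves a message to a queue named <originalqueuename>-poison once the retries are exhausted; the function name and log text below are hypothetical, and the sketch assumes the same pre-v2 types (CloudQueueMessage, TraceWriter) used in the question.

[FunctionName("ProcessOrders-Poison")]
public static void ProcessPoisonOrders([QueueTrigger("incoming-orders-poison")] CloudQueueMessage queueItem, TraceWriter log)
{
    // Reached only after maxDequeueCount (default 5) failed attempts on "incoming-orders".
    log.Error($"Order moved to poison queue: {queueItem.AsString}");
}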

72-a Strong, Bounded Staleness, Eventual
Return the most recent patient status --> Strong Consistency is needed here, as it offers a linearizability guarantee. The reads are guaranteed to return the most recent committed version of an item. A client never sees an uncommitted or partial write.
Return the health monitoring data that is no less than one version behind --> Bounded Staleness is sufficient here. The reads might lag behind writes by at most "K" versions (that is "updates") of an item or by "t" time interval.
After patient is discharged and all charges are assessed, retrieve the correct billing data with the final charges --> Eventual Consistency is enough here, as there is no urgency. In this case, the final version of data is eventually saved to all replicas.
https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels
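
As a rough sketch of overriding the Strong account default at request level (assuming the Microsoft.Azure.Cosmos .NET SDK; the database, container, and query values below are hypothetical), note that consistency can only be relaxed per request, never strengthened:

using Microsoft.Azure.Cosmos;

// Account default is Strong; individual reads and queries may be relaxed per request.
CosmosClient client = new CosmosClient("<account-endpoint>", "<account-key>");
Container container = client.GetContainer("hospital-db", "patient-data");

// Most recent patient status: no override needed, the Strong account default applies.

// Health monitoring data (at most one version behind): relax to Bounded Staleness.
var monitoringOptions = new QueryRequestOptions { ConsistencyLevel = ConsistencyLevel.BoundedStaleness };

// Final billing data after discharge: Eventual is sufficient.
var billingOptions = new QueryRequestOptions { ConsistencyLevel = ConsistencyLevel.Eventual };

// The options are passed per query; monitoringOptions would be used the same way.
FeedIterator<dynamic> billingQuery = container.GetItemQueryIterator<dynamic>(
    "SELECT * FROM c WHERE c.patientId = 'p-001'",
    requestOptions: billingOptions);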

73-a Azure PowerShell, Azure Blob Storage
Azure PowerShell is used to generalize the VM and create an image of it, and Azure Blob Storage is used to store that image
https://docs.microsoft.com/en-us/azure/virtual-machines/windows/capture-image-resource#create-an-image-of-a-vm-using-powershell

74-b When creating a product, several APIs can be added to it and a subscription can be associated with it, so a single product-scoped key gives a developer access to those APIs without publisher approval. Access should not be granted to all APIs (so option a is incorrect). Developer access should be granted regardless of the caller IP (so option c is incorrect). A client certificate would require a policy to validate the certificate and specific logic to map the client to specific APIs (so option d is incorrect). More info here - https://memorycrypt.hashnode.dev/implement-api-management-developing-solutions-for-microsoft-azure-part-51#heading-subscriptions-and-keys

75-b API publishers can change API behavior through configuration using policies. Policies are a collection of statements that run sequentially on the request or response of an API. More info here - https://memorycrypt.hashnode.dev/implement-api-management-developing-solutions-for-microsoft-azure-part-51?t=1681878236378#heading-rate-limit
A product groups one or more APIs with a usage quota and terms of use; on its own it cannot restrict the number of API calls from a single source (so option a is incorrect).
Subscriptions are the most common way for API consumers to access APIs published through an API Management instance (so option c is incorrect).
API is a representation of a back-end API and needs to be configured with a policy to implement a rate limit (so option d is incorrect).

76. You have an Azure event hub. You need to add partitions to the event hub. Which code segment should you use?
a. az eventhubs eventhub consumer-group update --resource-group MyResourceGroupName --namespace-name MyNamespaceName --eventhub-name MyEventHubName --set partitioncount=12
b. az eventhubs eventhub consumer-group create --resource-group MyResourceGroupName --namespace-name MyNamespaceName --eventhub-name MyEventHubName --set partitioncount=12
c. az eventhubs eventhub update --resource-group MyResourceGroupName --namespace-name MyNamespaceName --name MyEventHubName --partition-count 12
d. az eventhubs eventhub create --resource-group MyResourceGroupName --namespace-name MyNamespaceName --name MyEventHubName --partition-count 12

77. You plan to implement event routing in your Azure subscription by using Azure Event Grid. An event is generated each time an Azure resource is deleted. A message corresponding to the event is automatically displayed in an Azure App Service web app you deployed into the same Azure subscription.
You create a custom topic. You need to subscribe to the custom topic. What should you do first?

a. Create an endpoint.
b. Create an event handler.
c. Enable the Azure Event Grid resource provider.
d. Configure filtering.

78. You have an Azure Service Bus instance. You need to provide first-in, first-out (FIFO) guarantee for message processing. What should you configure?
a. dead-letter queue
b. message deferral
c. message sessions
d. scheduled delivery

79. You create an Azure Service Bus topic with a default message time to live of 10 minutes. You need to send messages to this topic with a time to live of 15 minutes. The solution must not affect other applications that are using the topic. What should you recommend?
a. Change the topic’s default time to live to 15 minutes.
b. Change the specific message’s time to live to 15 minutes.
c. Create a new topic with a default time to live of 15 minutes. Send the messages to this topic.
d. Update the time to live for the queue containing the topic.

80. You are developing a .NET project that will manage messages in Azure Storage queues. You need to verify the presence of messages in a queue without removing them from the queue. Which method should you use?
a. Peek
b. PeekMessages
c. ReceiveMessages
d. ReceiveMessageAsync

Answers:

76-c The code segment in option c includes az eventhubs eventhub update that adds partitions to an existing event hub. The code segment in option a includes az eventhubs eventhub consumer-group update that updates the event hub consumer group (hence, incorrect). The code segment in option b includes az eventhubs eventhub consumer-group create that will create an event hub consumer group. The code segment in option d includes az eventhubs eventhub create --resource-group segment that will create an event hub with partitions, not change an existing one. More info https://memorycrypt.hashnode.dev/event-based-solutions-developing-solutions-for-microsoft-azure-part-52?t=1681879001580#heading-dynamically-add-partitions-to-an-event-hub
77-a Before subscribing to the custom topic, you need to create an endpoint for event messages. (option b) The Azure App Service web app acts as the event handler in this case, so this task is already completed. (option c) The Azure Event Grid resource provider is already enabled at this point because this is a prerequisite for creating a custom topic. (option d) Event filtering is part of configuring an event subscription, so it takes place either during or after provisioning of the subscription. More info here - https://memorycrypt.hashnode.dev/event-based-solutions-developing-solutions-for-microsoft-azure-part-52?t=1681962662931#heading-concepts-in-azure-event-grid
78-c To provide FIFO guarantees in Service Bus, sessions must be configured. Message sessions enable exclusive, ordered handling of unbounded sequences of related messages. (option a) A dead-letter queue holds messages that cannot be delivered to any receiver. (option b) Message deferral makes it possible to defer retrieval of a message until a later time. (option d) Scheduled delivery allows submitting messages to a queue or topic for delayed processing. A dead-letter queue, message deferral, and scheduled delivery do not provide FIFO guarantees. More info - https://memorycrypt.hashnode.dev/message-based-solutions-developing-solutions-for-microsoft-azure-part-53?t=1681963364646#heading-advanced-features
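
As a minimal sketch (the queue name, session ID, and connection string are hypothetical, and the queue is assumed to have been created with sessions enabled), related messages share a SessionId and a session receiver then processes them exclusively and in order:

using Azure.Messaging.ServiceBus;

var client = new ServiceBusClient("<connection-string>");

// Stamp related messages with the same SessionId so they are handled as one ordered sequence.
ServiceBusSender sender = client.CreateSender("orders-queue");
await sender.SendMessageAsync(new ServiceBusMessage("order created") { SessionId = "order-123" });
await sender.SendMessageAsync(new ServiceBusMessage("order shipped") { SessionId = "order-123" });

// A session receiver locks the session and delivers its messages first-in, first-out.
ServiceBusSessionReceiver receiver = await client.AcceptNextSessionAsync("orders-queue");
ServiceBusReceivedMessage first = await receiver.ReceiveMessageAsync();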
79-c To avoid affecting existing applications, the time to live of the existing topic must not be changed; a new topic with the required default time to live needs to be created. (option a) Changing the topic's default time to live will affect other applications. (option b) A message-level time to live cannot exceed the topic's default time to live. (option d) A topic is not contained within a queue, and changing the time to live of existing entities would affect the other applications.
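
For illustration (the topic name and connection string below are hypothetical, assuming the Azure.Messaging.ServiceBus SDK), a separate topic with a 15-minute default time to live can be created with the administration client, while a per-message time to live is capped at the topic's default:

using System;
using Azure.Messaging.ServiceBus;
using Azure.Messaging.ServiceBus.Administration;

var adminClient = new ServiceBusAdministrationClient("<connection-string>");

// Create a new topic with a 15-minute default TTL; the existing topic keeps its 10 minutes.
await adminClient.CreateTopicAsync(new CreateTopicOptions("orders-15min")
{
    DefaultMessageTimeToLive = TimeSpan.FromMinutes(15)
});

// A message-level TTL is honored only up to the topic's default, which is why option b fails.
var message = new ServiceBusMessage("order payload") { TimeToLive = TimeSpan.FromMinutes(15) };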
80-b Messages can be peeked at in the queue without removing them from the queue by calling the PeekMessages method of the QueueClient class. (option a) The Peek method of the QueueClient class is used with Azure Service Bus, not Azure Queue Storage. (option c) The ReceiveMessages method of the QueueClient class retrieves messages and removes them from the queue. (option d) The ReceiveMessageAsync method of the QueueClient class is used with Azure Service Bus, not Azure Queue Storage.

using Azure.Storage.Queues;          // QueueClient
using Azure.Storage.Queues.Models;   // PeekedMessage

// connectionString and queueName are assumed to be defined elsewhere.
QueueClient queueClient = new QueueClient(connectionString, queueName);

if (queueClient.Exists())
{
    // Peek at the next messages without removing them or changing their visibility
    PeekedMessage[] peekedMessages = queueClient.PeekMessages();
}

81. You have an application that requires message queuing. You need to recommend a solution that meets the following requirements:
- automatic duplicate message detection.
- ability to send 2 MB messages.
Which message queuing solution should you recommend?

a. Azure Service Bus Premium tier
b. Azure Service Bus Standard tier
c. Azure Storage queues with locally redundant storage (LRS)
d. Azure Storage queues with zone-redundant storage (ZRS)

82. You plan to use a shared access signature to protect access to services within a general-purpose v2 storage account. You need to identify the type of service that you can protect by using the user delegation shared access signature. Which service should you identify?
a. Blob
b. File
c. Queue
d. Table

Answers:

81-a Service Bus supports duplicate message detection, and the Premium tier is required to send messages larger than 256 KB. (option b) Although the Standard tier also supports duplicate detection, it only supports messages up to 256 KB in size. (option c) Azure Storage queues do not support duplicate message detection. (option d) Azure Storage queues do not support duplicate message detection.
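
As a rough sketch (names, detection window, and connection string are hypothetical, assuming the Azure.Messaging.ServiceBus SDK), duplicate detection is enabled when the queue is created and relies on senders supplying a stable MessageId; a Premium namespace then also accepts payloads larger than 256 KB:

using System;
using Azure.Messaging.ServiceBus;
using Azure.Messaging.ServiceBus.Administration;

var adminClient = new ServiceBusAdministrationClient("<connection-string>");

// Enable duplicate detection when creating the queue (in a Premium namespace for 2 MB messages).
await adminClient.CreateQueueAsync(new CreateQueueOptions("orders")
{
    RequiresDuplicateDetection = true,
    DuplicateDetectionHistoryTimeWindow = TimeSpan.FromMinutes(10)
});

// Messages sharing a MessageId within the detection window are dropped as duplicates.
var client = new ServiceBusClient("<connection-string>");
ServiceBusSender sender = client.CreateSender("orders");
await sender.SendMessageAsync(new ServiceBusMessage(new BinaryData(new byte[2 * 1024 * 1024]))
{
    MessageId = "order-42"
});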