You have an Azure Video Analyzer for Media (previously Video Indexer) service that is used to provide a search interface over company videos on your company's website. You need to be able to search for videos based on who is present in the video. What should you do?
Video Indexer supports multiple Person models per account. Once a model is created, you can use it by providing the model ID of a specific Person model when uploading/indexing or reindexing a video. Training a new face for a video updates the specific custom model that the video was associated with. Note: Video Indexer supports face detection and celebrity recognition for video content. The celebrity recognition feature covers about one million faces based on commonly requested data sources such as IMDB, Wikipedia, and top LinkedIn influencers. Faces that aren't recognized by the celebrity recognition feature are detected but left unnamed. Once you label a face with a name, the face and name are added to your account's Person model. Video Indexer will then recognize this face in your future and past videos. Reference: https://docs.microsoft.com/en-us/azure/media-services/video-indexer/customize-person-model-with-api
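A minimal sketch of how this could look against the Video Indexer upload API, assuming placeholder values for the location, account ID, access token, and custom Person model ID (the personModelId query parameter is the one documented for associating an upload with a specific Person model; verify the parameter names against the current API reference):

import requests

location = "trial"  # or your Azure region
account_id = "<video-indexer-account-id>"
access_token = "<account-access-token>"  # from the Get Account Access Token API
person_model_id = "<custom-person-model-id>"

upload_url = f"https://api.videoindexer.ai/{location}/Accounts/{account_id}/Videos"
params = {
    "name": "company-video",
    "videoUrl": "https://example.com/videos/company-video.mp4",
    "personModelId": person_model_id,  # index the video against the custom Person model
    "accessToken": access_token,
}

response = requests.post(upload_url, params=params)
print(response.json())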
Question 72
You use the Custom Vision service to build a classifier. After training is complete, you need to evaluate the classifier. Which two metrics are available for review? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.
Custom Vision provides three metrics regarding the performance of your model: precision, recall, and AP (average precision). Reference: https://www.tallan.com/blog/2020/05/19/azure-custom-vision/
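As a rough illustration, the same metrics can be read programmatically for a trained iteration; this sketch assumes the azure-cognitiveservices-vision-customvision package, and the endpoint, key, project ID, and iteration ID are placeholders:

from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from msrest.authentication import ApiKeyCredentials

# Authenticate the training client with a placeholder key and endpoint.
credentials = ApiKeyCredentials(in_headers={"Training-key": "<training-key>"})
trainer = CustomVisionTrainingClient("<endpoint>", credentials)

# Read precision, recall, and AP (average precision) for one trained iteration.
performance = trainer.get_iteration_performance("<project-id>", "<iteration-id>")
print(performance.precision, performance.recall, performance.average_precision)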
Question 73
DRAG DROP - You are developing a call to the Face API. The call must find similar faces from an existing list named employeefaces. The employeefaces list contains 60,000 images. How should you complete the body of the HTTP request? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Select and Place:
Box 1: largeFaceListId - LargeFaceList: Add a face to a specified large face list, up to 1,000,000 faces. Note: Given a query face's faceId, Find Similar searches for similar-looking faces in a faceId array, a face list, or a large face list. A "faceListId" is created by FaceList - Create and contains persistedFaceIds that will not expire; a "largeFaceListId" is created by LargeFaceList - Create and likewise contains persistedFaceIds that will not expire. Incorrect Answers: Not "faceListId": Add a face to a specified face list, up to 1,000 faces. Box 2: matchFace - Find Similar has two working modes, "matchPerson" and "matchFace". "matchPerson" is the default mode; it tries to find faces of the same person by applying internal same-person thresholds. It is useful for finding a known person's other photos. Note that an empty list is returned if no faces pass the internal thresholds. "matchFace" mode ignores the same-person thresholds and returns ranked similar faces anyway, even if the similarity is low. It can be used in cases such as searching for celebrity look-alike faces. Reference: https://docs.microsoft.com/en-us/rest/api/faceapi/face/findsimilar
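A minimal sketch of the request body described above, calling Find Similar against the employeefaces large face list in "matchFace" mode (the endpoint, key, and query faceId are placeholders):

import requests

endpoint = "https://<your-face-resource>.cognitiveservices.azure.com"
headers = {
    "Ocp-Apim-Subscription-Key": "<face-api-key>",
    "Content-Type": "application/json",
}
body = {
    "faceId": "<query-face-id>",         # returned by a prior Face - Detect call
    "largeFaceListId": "employeefaces",  # large face list holding the 60,000 images
    "maxNumOfCandidatesReturned": 10,
    "mode": "matchFace",                 # ignore same-person thresholds, return ranked matches
}

response = requests.post(f"{endpoint}/face/v1.0/findsimilars", headers=headers, json=body)
print(response.json())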
Question 74
HOTSPOT - You are developing an application to recognize employees' faces by using the Face Recognition API. Images of the faces will be accessible from a URI endpoint. The application has the following code.
For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area:
Question 75
DRAG DROP - You are developing a photo application that will find photos of a person based on a sample image by using the Face API. You need to create a POST request to find the photos. How should you complete the request? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Select and Place:
Box 1: detect - Face - Detect With Url: Detect human faces in an image and return face rectangles, optionally with faceIds, landmarks, and attributes. POST {Endpoint}/face/v1.0/detect Box 2: matchPerson - Find Similar has two working modes, "matchPerson" and "matchFace". "matchPerson" is the default mode; it tries to find faces of the same person by applying internal same-person thresholds. It is useful for finding a known person's other photos. Note that an empty list is returned if no faces pass the internal thresholds. "matchFace" mode ignores the same-person thresholds and returns ranked similar faces anyway, even if the similarity is low. It can be used in cases such as searching for celebrity look-alike faces. Reference: https://docs.microsoft.com/en-us/rest/api/faceapi/face/detectwithurl https://docs.microsoft.com/en-us/rest/api/faceapi/face/findsimilar
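A minimal sketch of the two-step flow described above: detect the face in the sample image, then call Find Similar in the default "matchPerson" mode (the endpoint, key, image URL, and list ID are placeholders):

import requests

endpoint = "https://<your-face-resource>.cognitiveservices.azure.com"
headers = {"Ocp-Apim-Subscription-Key": "<face-api-key>", "Content-Type": "application/json"}

# Step 1: Face - Detect With Url returns a faceId for the sample image.
detect = requests.post(
    f"{endpoint}/face/v1.0/detect",
    headers=headers,
    json={"url": "https://example.com/sample-person.jpg"},
)
face_id = detect.json()[0]["faceId"]

# Step 2: Find Similar in "matchPerson" mode applies the same-person thresholds.
similar = requests.post(
    f"{endpoint}/face/v1.0/findsimilars",
    headers=headers,
    json={
        "faceId": face_id,
        "largeFaceListId": "<photo-list-id>",
        "mode": "matchPerson",
    },
)
print(similar.json())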
Question 76
HOTSPOT - You develop a test method to verify the results retrieved from a call to the Computer Vision API. The call is used to analyze the existence of company logos in images. The call returns a collection of brands named brands. You have the following code segment.
For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area:
Box 1: Yes - Box 2: Yes - Coordinates of a rectangle in the API refer to the top left corner. Box 3: No - Reference: https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/concept-brand-detection
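A minimal sketch of reading that result shape, assuming the Analyze Image brand-detection JSON (the endpoint, key, and image URL are placeholders); each rectangle's x and y give the top-left corner, and w and h give the width and height:

import requests

endpoint = "https://<your-vision-resource>.cognitiveservices.azure.com"
response = requests.post(
    f"{endpoint}/vision/v3.2/analyze",
    params={"visualFeatures": "Brands"},
    headers={"Ocp-Apim-Subscription-Key": "<vision-key>", "Content-Type": "application/json"},
    json={"url": "https://example.com/storefront.jpg"},
)

for brand in response.json().get("brands", []):
    rect = brand["rectangle"]  # x, y = top-left corner; w, h = width and height
    print(brand["name"], brand["confidence"], rect["x"], rect["y"], rect["w"], rect["h"])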
Question 78
HOTSPOT - You develop an application that uses the Face API. You need to add multiple images to a person group. How should you complete the code? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:
Box 1: Stream - The File.OpenRead(String) method opens an existing file for reading. Example: Open the stream and read it back. using (FileStream fs = File.OpenRead(path)) Box 2: CreateAsync - Create the persons for the PersonGroup. Persons are created concurrently. Example: await faceClient.PersonGroupPerson.CreateAsync(personGroupId, personName); Reference: https://docs.microsoft.com/en-us/azure/cognitive-services/face/face-api-how-to-topics/how-to-add-faces
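The equivalent flow can also be sketched against the Face REST API: create a person in the person group, then add each image as a binary stream (the endpoint, key, group ID, and file paths are placeholders):

import requests

endpoint = "https://<your-face-resource>.cognitiveservices.azure.com"
key = "<face-api-key>"
group_id = "<person-group-id>"

# PersonGroup Person - Create: returns the new person's personId.
person = requests.post(
    f"{endpoint}/face/v1.0/persongroups/{group_id}/persons",
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"name": "Employee A"},
).json()

# PersonGroup Person - Add Face: add multiple images by streaming each file.
for path in ["face1.jpg", "face2.jpg", "face3.jpg"]:
    with open(path, "rb") as image_stream:
        requests.post(
            f"{endpoint}/face/v1.0/persongroups/{group_id}/persons/{person['personId']}/persistedFaces",
            headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/octet-stream"},
            data=image_stream,
        )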
Question 79
Your company uses an Azure Cognitive Services solution to detect faces in uploaded images. The method to detect the faces uses the following code.
You discover that the solution frequently fails to detect faces in blurred images and in images that contain sideways faces. You need to increase the likelihood that the solution can detect faces in blurred images and images that contain sideways faces. What should you do?
Evaluate different models. The best way to compare the performance of the detection models is to use them on a sample dataset. We recommend calling the Face - Detect API on a variety of images, especially images of many faces or of faces that are difficult to see, using each detection model, and paying attention to the number of faces that each model returns. The different face detection models are optimized for different tasks: the newer detection_02 and detection_03 models offer improved accuracy on small, side-view, and blurry faces compared with the default detection_01 model.
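A minimal sketch of such a comparison, calling Face - Detect with each documented detectionModel value on the same difficult image (the endpoint, key, and image URL are placeholders):

import requests

endpoint = "https://<your-face-resource>.cognitiveservices.azure.com"
headers = {"Ocp-Apim-Subscription-Key": "<face-api-key>", "Content-Type": "application/json"}

for model in ["detection_01", "detection_02", "detection_03"]:
    response = requests.post(
        f"{endpoint}/face/v1.0/detect",
        params={"detectionModel": model, "returnFaceId": "false"},
        headers=headers,
        json={"url": "https://example.com/blurred-group-photo.jpg"},
    )
    # Compare how many faces each model finds in the same image.
    print(model, len(response.json()))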
Question 80
You have the following Python function for creating Azure Cognitive Services resources programmatically.
def create_resource(resource_name, kind, account_tier, location):
    parameters = CognitiveServicesAccount(sku=Sku(name=account_tier), kind=kind, location=location, properties={})
    result = client.accounts.create(resource_group_name, resource_name, parameters)
You need to call the function to create a free Azure resource in the West US Azure region. The resource will be used to generate captions of images automatically. Which code should you use?
F0 is the free tier. Custom Vision Service - Upload images to train and customize a computer vision model for your specific use case. Once the model is trained, you can use the API to tag images using the model and evaluate the results to improve your classifier. Incorrect: Not C, not D: S0 is the standard tier, which isn't free. Not A, not C: The Computer Vision service provides developers with access to advanced algorithms for processing images and returning information. Computer Vision - Returns information about visual content found in an image: Use tagging, descriptions, and domain-specific models to identify content and label it with confidence. Apply adult/racy settings to enable automated restriction of adult content. Identify image types and color schemes in pictures. Reference: https://docs.microsoft.com/en-us/python/api/overview/azure/cognitive-services?view=azure-python
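For illustration only, a hedged sketch of how the call would look for a free-tier resource in West US; "res1" and "<kind>" are placeholders (the answer options differ only in the kind and account_tier arguments), and client and resource_group_name are assumed to be defined as in the snippet above:

create_resource(
    resource_name="res1",
    kind="<kind>",       # the Cognitive Services kind chosen in the answer
    account_tier="F0",   # F0 = free tier (S0 is the paid standard tier)
    location="westus",   # West US region
)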