You need to build an image tagging solution for social media that tags images of your friends automatically. Which Azure Cognitive Services service should you use?
DRAG DROP - Match the types of computer vision workloads to the appropriate scenarios. To answer, drag the appropriate workload type from the column on the left to its scenario on the right. Each workload type may be used once, more than once, or not at all. NOTE: Each correct selection is worth one point.
Question 103
DRAG DROP - Match the facial recognition tasks to the appropriate questions. To answer, drag the appropriate task from the column on the left to its question on the right. Each task may be used once, more than once, or not at all. NOTE: Each correct selection is worth one point. Select and Place:
Box 1: verification -
Identity verification - Modern enterprises and apps can use the Face identification and Face verification operations to verify that a user is who they claim to be.
Box 2: similarity -
The Find Similar operation does face matching between a target face and a set of candidate faces, finding a smaller set of faces that look similar to the target face. This is useful for doing a face search by image. The service supports two working modes, matchPerson and matchFace. The matchPerson mode returns similar faces after filtering for the same person by using the Verify API. The matchFace mode ignores the same-person filter. It returns a list of similar candidate faces that may or may not belong to the same person.
Box 3: identification -
Face identification can address "one-to-many" matching of one face in an image to a set of faces in a secure repository. Match candidates are returned based on how closely their face data matches the query face. This scenario is used in granting building or airport access to a certain group of people or verifying the user of a device.
Reference: https://docs.microsoft.com/en-us/azure/cognitive-services/face/overview
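For orientation, here is a minimal Python sketch (using the requests library) of the three Face operations described above, assuming the Face v1.0 REST endpoints; the resource endpoint, key, person group ID, and face IDs are placeholders you would obtain from your own Face resource and from prior Detect calls.

```python
# Minimal sketch of the three Face API operations discussed above, called via REST.
# Endpoint, key, and face IDs are placeholders; adjust to the API version you deploy.
import requests

ENDPOINT = "https://<your-face-resource>.cognitiveservices.azure.com"  # placeholder
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>"}                  # placeholder

def verify(face_id_1, face_id_2):
    """Verification: are these two detected faces the same person?"""
    body = {"faceId1": face_id_1, "faceId2": face_id_2}
    return requests.post(f"{ENDPOINT}/face/v1.0/verify", headers=HEADERS, json=body).json()

def find_similar(target_face_id, candidate_face_ids, mode="matchPerson"):
    """Similarity: return candidate faces that look like the target (matchPerson or matchFace mode)."""
    body = {"faceId": target_face_id, "faceIds": candidate_face_ids, "mode": mode}
    return requests.post(f"{ENDPOINT}/face/v1.0/findsimilars", headers=HEADERS, json=body).json()

def identify(face_ids, person_group_id):
    """Identification: one-to-many match of detected faces against a trained person group."""
    body = {"faceIds": face_ids, "personGroupId": person_group_id}
    return requests.post(f"{ENDPOINT}/face/v1.0/identify", headers=HEADERS, json=body).json()
```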
Question 104
Which Computer Vision feature can you use to generate automatic captions for digital photographs?
A. Recognize text.
B. Identify the areas of interest.
C. Detect objects.
D. Describe the images.
Describe images with human-readable language -
Computer Vision can analyze an image and generate a human-readable phrase that describes its contents. The algorithm returns several descriptions based on different visual features, and each description is given a confidence score. The final output is a list of descriptions ordered from highest to lowest confidence. The image description feature is part of the Analyze Image API.
Reference: https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/concept-describing-images
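As a rough illustration, the sketch below calls the Computer Vision describe operation over REST (v3.2 shown; your deployed version may differ); the endpoint, key, and image URL are placeholders.

```python
# Short sketch of the image description (captioning) feature via the Computer Vision REST API.
import requests

ENDPOINT = "https://<your-vision-resource>.cognitiveservices.azure.com"  # placeholder
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>"}                    # placeholder

response = requests.post(
    f"{ENDPOINT}/vision/v3.2/describe",
    headers=HEADERS,
    params={"maxCandidates": 3},                    # ask for several candidate captions
    json={"url": "https://example.com/photo.jpg"},  # placeholder image URL
)
for caption in response.json()["description"]["captions"]:
    # Captions come back ordered from highest to lowest confidence.
    print(f'{caption["text"]} (confidence {caption["confidence"]:.2f})')
```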
Question 105
Which service should you use to extract text, key/value pairs, and table data automatically from scanned documents?
A. Custom Vision
B. Face
C. Form Recognizer
D. Language
Form Recognizer applies advanced machine learning to accurately extract text, key-value pairs, tables, and structures from documents. Reference: https://azure.microsoft.com/en-us/services/form-recognizer/
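A minimal sketch with the azure-ai-formrecognizer Python SDK (3.2+), assuming the general prebuilt-document model and a placeholder endpoint, key, and file name; field names follow that SDK's result object.

```python
# Extract key-value pairs and tables from a scanned document with Form Recognizer.
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

client = DocumentAnalysisClient(
    endpoint="https://<your-form-recognizer-resource>.cognitiveservices.azure.com",  # placeholder
    credential=AzureKeyCredential("<your-key>"),                                     # placeholder
)

with open("scanned-invoice.pdf", "rb") as f:               # placeholder document
    poller = client.begin_analyze_document("prebuilt-document", document=f)
result = poller.result()

for pair in result.key_value_pairs:
    value = pair.value.content if pair.value else ""
    print(f"{pair.key.content}: {value}")                  # extracted key-value pairs

for table in result.tables:
    print(f"Table with {table.row_count} rows x {table.column_count} columns")
```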
Question 106
HOTSPOT - Select the answer that correctly completes the sentence. Hot Area:
Handwriting OCR (optical character recognition) is the process of automatically extracting handwritten information from paper, scans and other low-quality digital documents. Reference: https://vidado.ai/handwriting-ocr
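Within Azure, handwritten and printed text extraction is typically done with the Computer Vision Read API; the sketch below is a rough illustration of its asynchronous submit-and-poll pattern, with placeholder endpoint, key, and image URL.

```python
# Submit an image containing handwritten text to the Read API and print the recognized lines.
import time
import requests

ENDPOINT = "https://<your-vision-resource>.cognitiveservices.azure.com"  # placeholder
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>"}                    # placeholder

# Submit the image (printed or handwritten text) for asynchronous analysis.
submit = requests.post(
    f"{ENDPOINT}/vision/v3.2/read/analyze",
    headers=HEADERS,
    json={"url": "https://example.com/handwritten-note.jpg"},  # placeholder
)
operation_url = submit.headers["Operation-Location"]

# Poll until the operation finishes, then print each recognized line of text.
while True:
    result = requests.get(operation_url, headers=HEADERS).json()
    if result["status"] in ("succeeded", "failed"):
        break
    time.sleep(1)

for page in result.get("analyzeResult", {}).get("readResults", []):
    for line in page["lines"]:
        print(line["text"])
```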
Question 107
You are developing a solution that uses the Text Analytics service. You need to identify the main talking points in a collection of documents. Which type of natural language processing should you use?
A. entity recognition
B. key phrase extraction
C. sentiment analysis
D. language detection
Key phrase extraction / broad entity extraction: Identify important concepts in text, including key phrases and named entities such as people, places, and organizations. Reference: https://docs.microsoft.com/en-us/azure/architecture/data-guide/technology-choices/natural-language-processing
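A small sketch of key phrase extraction with the azure-ai-textanalytics Python SDK, assuming a placeholder endpoint and key and a couple of sample documents.

```python
# Pull the main talking points (key phrases) out of a small collection of documents.
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-language-resource>.cognitiveservices.azure.com",  # placeholder
    credential=AzureKeyCredential("<your-key>"),                              # placeholder
)

documents = [
    "The new mobile app makes booking a meeting room fast and simple.",
    "Customers reported long wait times at the downtown branch last month.",
]

for doc in client.extract_key_phrases(documents):
    if not doc.is_error:
        print(doc.key_phrases)   # the main talking points of each document
```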
Question 108
Which AI service can you use to interpret the meaning of a user input such as "Call me back later"?
A. Translator
B. Text Analytics
C. Speech
D. Language Understanding (LUIS)
Language Understanding (LUIS) is a cloud-based conversational AI service that applies custom machine-learning intelligence to a user's conversational, natural language text to predict overall meaning and pull out relevant, detailed information. Reference: https://docs.microsoft.com/en-us/azure/cognitive-services/luis/what-is-luis
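As a hedged illustration, the sketch below queries a published LUIS app's V3 prediction endpoint for the utterance from the question; the endpoint, app ID, and key are placeholders, and the intents and entities returned depend on the model you have authored.

```python
# Ask a published LUIS app for the top intent of an utterance such as "Call me back later".
import requests

PREDICTION_ENDPOINT = "https://<your-luis-resource>.cognitiveservices.azure.com"  # placeholder
APP_ID = "<your-app-id>"                                                          # placeholder

response = requests.get(
    f"{PREDICTION_ENDPOINT}/luis/prediction/v3.0/apps/{APP_ID}/slots/production/predict",
    params={"query": "Call me back later", "subscription-key": "<your-key>"},     # placeholder key
)
prediction = response.json()["prediction"]
print("Top intent:", prediction["topIntent"])   # e.g. a callback-request intent you defined
print("Entities:", prediction["entities"])      # any entities (such as a datetime) pulled from the text
```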
Question 109
In which two scenarios can you use speech recognition? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.
A. an in-car system that reads text messages aloud
B. providing closed captions for recorded or live videos
C. creating an automated public address system for a train station
D. creating a transcript of a telephone call or meeting
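Speech recognition means speech-to-text: converting spoken audio into text, which is what closed captioning and call or meeting transcription require (reading messages aloud and public-address announcements are speech synthesis). Below is a minimal transcription sketch with the azure-cognitiveservices-speech SDK, assuming a placeholder subscription key, region, and recorded WAV file.

```python
# Transcribe a single utterance from a recorded audio file with the Speech SDK.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")  # placeholders
audio_config = speechsdk.audio.AudioConfig(filename="meeting.wav")                         # placeholder file

recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
result = recognizer.recognize_once()    # recognizes a single utterance from the audio

if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print(result.text)                  # the transcribed text
```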