Google Professional-Machine-Learning Exam

Question 319 of 339 (94.10%)

Question 319
You work as an ML researcher at an investment bank and are experimenting with the Gemma large language model (LLM). You plan to deploy the model for an internal use case. You need full control of the model's underlying infrastructure, and you must minimize the model's inference time. Which serving configuration should you use for this task?
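The constraints in the scenario (full control of the underlying infrastructure plus low inference latency) point toward self-hosting the model on GKE with accelerator-backed nodes rather than using a fully managed endpoint. A minimal sketch of what such a serving configuration might look like, assuming a vLLM container image, the `google/gemma-2b` model, and a GKE cluster that already has an NVIDIA GPU node pool (all names, the image tag, and the replica count are illustrative assumptions, not part of the original question):

```yaml
# Illustrative GKE serving sketch; image, model ID, and names are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gemma-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gemma-server
  template:
    metadata:
      labels:
        app: gemma-server
    spec:
      containers:
      - name: vllm
        image: vllm/vllm-openai:latest      # assumed serving image
        args: ["--model", "google/gemma-2b"]
        ports:
        - containerPort: 8000
        resources:
          limits:
            nvidia.com/gpu: 1               # requires a GPU node pool
---
apiVersion: v1
kind: Service
metadata:
  name: gemma-server
spec:
  selector:
    app: gemma-server
  ports:
  - port: 80
    targetPort: 8000
```

Running the model server yourself on GKE keeps node types, autoscaling, and networking under your control, which is what distinguishes this setup from a managed prediction service.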









