
Google Associate-Cloud-Engineer Dumps

Google Cloud Certified - Associate Cloud Engineer Questions and Answers

Question 1

You are using Data Studio to visualize a table from your data warehouse that is built on top of BigQuery. Data is appended to the data warehouse during the day. At night, the daily summary is recalculated by overwriting the table. You just noticed that the charts in Data Studio are broken, and you want to analyze the problem. What should you do?

Options:

A.

Use the BigQuery interface to review the nightly Job and look for any errors

B.

Review the Error Reporting page in the Cloud Console to find any errors.

C.

In Cloud Logging create a filter for your Data Studio report

D.

Use the open source CLI tool, Snapshot Debugger, to find out why the data was not refreshed correctly.

Question 2

You are running out of primary internal IP addresses in a subnet for a custom mode VPC. The subnet has the IP range 10.0.0.0/20, and the IP addresses are primarily used by virtual machines in the project. You need to provide more IP addresses for the virtual machines. What should you do?

Options:

A.

Change the subnet IP range from 10.0.0.0/20 to 10.0.0.0/22.

B.

Change the subnet IP range from 10.0.0.0/20 to 10.0.0.0/18.

C.

Add a secondary IP range 10.1.0.0/20 to the subnet.

D.

Convert the subnet IP range from IPv4 to IPv6
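
For reference, the in-place range change described above can be done without downtime; a minimal gcloud sketch, assuming a hypothetical subnet named my-subnet in region us-central1 (names and region are placeholders, not from the question):

    # Widen the subnet's primary range from /20 to /18 (expansion only; a range cannot be shrunk)
    gcloud compute networks subnets expand-ip-range my-subnet \
        --region=us-central1 \
        --prefix-length=18

Note that expand-ip-range only grows the primary range; secondary ranges added to a subnet serve alias IP ranges rather than primary VM addresses.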

Question 3

You are building a product on top of Google Kubernetes Engine (GKE). You have a single GKE cluster. For each of your customers, a Pod is running in that cluster, and your customers can run arbitrary code inside their Pod. You want to maximize the isolation between your customers’ Pods. What should you do?

Options:

A.

Use Binary Authorization and whitelist only the container images used by your customers’ Pods.

B.

Use the Container Analysis API to detect vulnerabilities in the containers used by your customers’ Pods.

C.

Create a GKE node pool with a sandbox type configured to gvisor. Add the parameter runtimeClassName: gvisor to the specification of your customers’ Pods.

D.

Use the cos_containerd image for your GKE nodes. Add a nodeSelector with the value cloud.google.com/gke-os-distribution: cos_containerd to the specification of your customers’ Pods.
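
As an illustration of the sandboxing approach in option C, a GKE Sandbox (gVisor) node pool can be created with gcloud; the cluster, pool, and zone names below are placeholders:

    # Create a node pool whose nodes run Pods in the gVisor sandbox
    gcloud container node-pools create sandbox-pool \
        --cluster=my-cluster \
        --zone=us-central1-a \
        --sandbox type=gvisor

Pods then opt into the sandbox by declaring runtimeClassName: gvisor in their spec.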

Question 4

You are working for a startup that was officially registered as a business 6 months ago. As your customer base grows, your use of Google Cloud increases. You want to allow all engineers to create new projects without asking them for their credit card information. What should you do?

Options:

A.

Create a Billing account, associate a payment method with it, and provide all project creators with permission to associate that billing account with their projects.

B.

Grant all engineers permission to create their own billing accounts for each new project.

C.

Apply for monthly invoiced billing, and have a single invoice for the project paid by the finance team.

D.

Create a billing account, associate it with a monthly purchase order (PO), and send the PO to Google Cloud.

Question 5

Your company is moving its continuous integration and delivery (CI/CD) pipeline to Compute Engine instances. The pipeline will manage the entire cloud infrastructure through code. How can you ensure that the pipeline has appropriate permissions while your system is following security best practices?

Options:

A.

• Add a step for human approval to the CI/CD pipeline before the execution of the infrastructure provisioning.
• Use the human approver's IAM account for the provisioning.

B.

• Attach a single service account to the compute instances.
• Add minimal rights to the service account.
• Allow the service account to impersonate a Cloud Identity user with elevated permissions to create, update, or delete resources.

C.

• Attach a single service account to the compute instances.
• Add all required Identity and Access Management (IAM) permissions to this service account to create, update, or delete resources.

D.

• Create multiple service accounts, one for each pipeline with the appropriate minimal Identity and Access Management (IAM) permissions.
• Use a secret manager service to store the key files of the service accounts.
• Allow the CI/CD pipeline to request the appropriate secrets during the execution of the pipeline.

Question 6

You have been asked to create robust Virtual Private Network (VPN) connectivity between a new Virtual Private Cloud (VPC) and a remote site. Key requirements include dynamic routing, a shared address space of 10.19.0.1/22, and no overprovisioning of tunnels during a failover event. You want to follow Google-recommended practices to set up a high availability Cloud VPN. What should you do?

Options:

A.

Use a custom mode VPC network, configure static routes, and use active/passive routing

B.

Use an automatic mode VPC network, configure static routes, and use active/active routing

C.

Use a custom mode VPC network, use Cloud Router Border Gateway Protocol (BGP) routes, and use active/passive routing

D.

Use an automatic mode VPC network, use Cloud Router Border Gateway Protocol (BGP) routes, and configure policy-based routing

Question 7

You have designed a solution on Google Cloud Platform (GCP) that uses multiple GCP products. Your company has asked you to estimate the costs of the solution. You need to provide estimates for the monthly total cost. What should you do?

Options:

A.

For each GCP product in the solution, review the pricing details on the product's pricing page. Use the pricing calculator to total the monthly costs for each GCP product.

B.

For each GCP product in the solution, review the pricing details on the product's pricing page. Create a Google Sheet that summarizes the expected monthly costs for each product.

C.

Provision the solution on GCP. Leave the solution provisioned for 1 week. Navigate to the Billing Report page in the Google Cloud Platform Console. Multiply the 1 week cost to determine the monthly costs.

D.

Provision the solution on GCP. Leave the solution provisioned for 1 week. Use Stackdriver to determine the provisioned and used resource amounts. Multiply the 1 week cost to determine the monthly costs.

Question 8

During a recent audit of your existing Google Cloud resources, you discovered several users with email addresses outside of your Google Workspace domain.

You want to ensure that your resources are only shared with users whose email addresses match your domain. You need to remove any mismatched users, and you want to avoid having to audit your resources to identify mismatched users. What should you do?

Options:

A.

Create a Cloud Scheduler task to regularly scan your projects and delete mismatched users.

B.

Create a Cloud Scheduler task to regularly scan your resources and delete mismatched users.

C.

Set an organizational policy constraint to limit identities by domain to automatically remove mismatched users.

D.

Set an organizational policy constraint to limit identities by domain, and then retroactively remove the existing mismatched users.

Question 9

You need to configure optimal data storage for files stored in Cloud Storage for minimal cost. The files are used in a mission-critical analytics pipeline that is used continually. The users are in Boston, MA (United States). What should you do?

Options:

A.

Configure regional storage for the region closest to the users. Configure a Nearline storage class.

B.

Configure regional storage for the region closest to the users. Configure a Standard storage class.

C.

Configure dual-regional storage for the dual region closest to the users. Configure a Nearline storage class.

D.

Configure dual-regional storage for the dual region closest to the users. Configure a Standard storage class.

Question 10

You manage a VPC network in Google Cloud with a subnet that is rapidly approaching its private IP address capacity. You expect the number of Compute Engine VM instances in the same region to double within a week. You need to implement a Google-recommended solution that minimizes operational costs and does not require downtime. What should you do?

Options:

A.

Create a second VPC with the same subnet IP range, and connect this VPC to the existing VPC by using VPC Network Peering.

B.

Delete the existing subnet, and create a new subnet with double the IP range available.

C.

Use the Google Cloud CLI tool to expand the primary IP range of your subnet.

D.

Permit additional traffic from the expected range of private IP addresses to reach your VMs by configuring firewall rules.

Question 11

You want to configure autohealing for network load balancing for a group of Compute Engine instances that run in multiple zones, using the fewest possible steps. You need to configure re-creation of VMs if they are unresponsive after 3 attempts of 10 seconds each. What should you do?

Options:

A.

Create an HTTP load balancer with a backend configuration that references an existing instance group. Set the health check to healthy (HTTP).

B.

Create an HTTP load balancer with a backend configuration that references an existing instance group. Define a balancing mode and set the maximum RPS to 10.

C.

Create a managed instance group. Set the Autohealing health check to healthy (HTTP).

D.

Create a managed instance group. Verify that the autoscaling setting is on.
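
As a rough sketch of autohealing on a managed instance group, the health check below mirrors the 10-second interval and 3-attempt threshold from the question; the group, health-check, and region names are placeholders:

    # Health check: probe every 10 seconds, mark unhealthy after 3 failed attempts
    gcloud compute health-checks create http my-health-check \
        --check-interval=10s \
        --unhealthy-threshold=3

    # Attach the health check to the managed instance group to enable autohealing
    gcloud compute instance-groups managed update my-mig \
        --region=us-central1 \
        --health-check=my-health-check \
        --initial-delay=300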

Question 12

You need to manage a third-party application that will run on a Compute Engine instance. Other Compute Engine instances are already running with default configuration. Application installation files are hosted on Cloud Storage. You need to access these files from the new instance without allowing other virtual machines (VMs) to access these files. What should you do?

Options:

A.

Create the instance with the default Compute Engine service account. Grant the service account permissions on Cloud Storage.

B.

Create the instance with the default Compute Engine service account. Add metadata to the objects on Cloud Storage that matches the metadata on the new instance.

C.

Create a new service account and assign this service account to the new instance. Grant the service account permissions on Cloud Storage.

D.

Create a new service account and assign this service account to the new instance. Add metadata to the objects on Cloud Storage that matches the metadata on the new instance.

Question 13

You need to set up permissions for a set of Compute Engine instances to enable them to write data into a particular Cloud Storage bucket. You want to follow Google-recommended practices. What should you do?

Options:

A.

Create a service account with an access scope. Use the access scope ‘https://www.googleapis.com/auth/devstorage.write_only’.

B.

Create a service account with an access scope. Use the access scope ‘https://www.googleapis.com/auth/cloud-platform’.

C.

Create a service account and add it to the IAM role ‘storage.objectCreator’ for that bucket.

D.

Create a service account and add it to the IAM role ‘storage.objectAdmin’ for that bucket.
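
For illustration, a bucket-level IAM binding limits the grant to a single bucket; the service account and bucket names below are hypothetical:

    # Grant object-creation rights on one bucket only (least privilege)
    gcloud storage buckets add-iam-policy-binding gs://my-app-bucket \
        --member="serviceAccount:writer-sa@my-project.iam.gserviceaccount.com" \
        --role="roles/storage.objectCreator"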

Question 14

You are running a web application on Cloud Run for a few hundred users. Some of your users complain that the initial web page of the application takes much longer to load than the following pages. You want to follow Google's recommendations to mitigate the issue. What should you do?

Options:

A.

Update your web application to use the protocol HTTP/2 instead of HTTP/1.1

B.

Set the concurrency number to 1 for your Cloud Run service.

C.

Set the maximum number of instances for your Cloud Run service to 100.

D.

Set the minimum number of instances for your Cloud Run service to 3.
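
As a sketch of the minimum-instances setting, which keeps container instances warm so first requests avoid cold starts; the service name and region are placeholders:

    # Keep at least 3 instances running at all times
    gcloud run services update my-web-service \
        --region=us-central1 \
        --min-instances=3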

Question 15

You have a number of applications that have bursty workloads and are heavily dependent on topics to decouple publishing systems from consuming systems. Your company would like to go serverless to enable developers to focus on writing code without worrying about infrastructure. Your solution architect has already identified Cloud Pub/Sub as a suitable alternative for decoupling systems. You have been asked to identify a suitable GCP Serverless service that is easy to use with Cloud Pub/Sub. You want the ability to scale down to zero when there is no traffic in order to minimize costs. You want to follow Google recommended practices. What should you suggest?

Options:

A.

Cloud Run for Anthos

B.

Cloud Run

C.

App Engine Standard

D.

Cloud Functions.

Question 16

Your company wants to standardize the creation and management of multiple Google Cloud resources using Infrastructure as Code. You want to minimize the amount of repetitive code needed to manage the environment. What should you do?

Options:

A.

Create a bash script that contains all required steps as gcloud commands

B.

Develop templates for the environment using Cloud Deployment Manager

C.

Use curl in a terminal to send a REST request to the relevant Google API for each individual resource.

D.

Use the Cloud Console interface to provision and manage all related resources

Question 17

You have an application that is currently processing transactions by using a group of managed VM instances. You need to migrate the application so that it is serverless and scalable. You want to implement an asynchronous transaction processing system, while minimizing management overhead. What should you do?

Options:

A.

Install Kafka on VM instances to acknowledge incoming transactions. Use Cloud Run to process transactions.

B.

Install Kafka on VM Instances to acknowledge incoming transactions. Use VM Instances to process transactions.

C.

Use Pub/Sub to acknowledge incoming transactions. Use VM instances to process transactions.

D.

Use Pub/Sub to acknowledge incoming transactions. Use Cloud Run to process transactions.

Question 18

The sales team has a project named Sales Data Digest that has the ID acme-data-digest. You need to set up similar Google Cloud resources for the marketing team, but their resources must be organized independently of the sales team. What should you do?

Options:

A.

Grant the Project Editor role to the Marketing team for acme-data-digest.

B.

Create a Project Lien on acme-data-digest, and then grant the Project Editor role to the Marketing team.

C.

Create another project with the ID acme-marketing-data-digest for the Marketing team and deploy the resources there.

D.

Create a new project named Marketing Data Digest and use the ID acme-data-digest. Grant the Project Editor role to the Marketing team.

Question 19

You have just created a new project which will be used to deploy a globally distributed application. You will use Cloud Spanner for data storage. You want to create a Cloud Spanner instance. You want to perform the first step in preparation of creating the instance. What should you do?

Options:

A.

Grant yourself the IAM role of Cloud Spanner Admin

B.

Create a new VPC network with subnetworks in all desired regions

C.

Configure your Cloud Spanner instance to be multi-regional

D.

Enable the Cloud Spanner API

Question 20

Your organization has user identities in Active Directory. Your organization wants to use Active Directory as their source of truth for identities. Your organization wants to have full control over the Google accounts used by employees for all Google services, including your Google Cloud Platform (GCP) organization. What should you do?

Options:

A.

Use Google Cloud Directory Sync (GCDS) to synchronize users into Cloud Identity.

B.

Use the Cloud Identity APIs and write a script to synchronize users to Cloud Identity.

C.

Export users from Active Directory as a CSV and import them to Cloud Identity via the Admin Console.

D.

Ask each employee to create a Google account using self-signup. Require that each employee use their company email address and password.

Question 21

Your team maintains the infrastructure for your organization. The current infrastructure requires changes. You need to share your proposed changes with the rest of the team. You want to follow Google’s recommended best practices. What should you do?

Options:

A.

Use Deployment Manager templates to describe the proposed changes and store them in a Cloud Storage bucket.

B.

Use Deployment Manager templates to describe the proposed changes and store them in Cloud Source Repositories.

C.

Apply the change in a development environment, run gcloud compute instances list, and then save the output in a shared Storage bucket.

D.

Apply the change in a development environment, run gcloud compute instances list, and then save the output in Cloud Source Repositories.

Question 22

You are given a project with a single virtual private cloud (VPC) and a single subnetwork in the us-central1 region. There is a Compute Engine instance hosting an application in this subnetwork. You need to deploy a new instance in the same project in the europe-west1 region. This new instance needs access to the application. You want to follow Google-recommended practices. What should you do?

Options:

A.

1. Create a subnetwork in the same VPC, in europe-west1.
2. Create the new instance in the new subnetwork and use the first instance's private address as the endpoint.

B.

1. Create a VPC and a subnetwork in europe-west1.
2. Expose the application with an internal load balancer.
3. Create the new instance in the new subnetwork and use the load balancer's address as the endpoint.

C.

1. Create a subnetwork in the same VPC, in europe-west1.
2. Use Cloud VPN to connect the two subnetworks.
3. Create the new instance in the new subnetwork and use the first instance's private address as the endpoint.

D.

1. Create a VPC and a subnetwork in europe-west1.
2. Peer the 2 VPCs.
3. Create the new instance in the new subnetwork and use the first instance's private address as the endpoint.

Question 23

You have successfully created a development environment in a project for an application. This application uses Compute Engine and Cloud SQL. Now, you need to create a production environment for this application.

The security team has forbidden the existence of network routes between these 2 environments, and asks you to follow Google-recommended practices. What should you do?

Options:

A.

Create a new project, enable the Compute Engine and Cloud SQL APIs in that project, and replicate the setup you have created in the development environment.

B.

Create a new production subnet in the existing VPC and a new production Cloud SQL instance in your existing project, and deploy your application using those resources.

C.

Create a new project, modify your existing VPC to be a Shared VPC, share that VPC with your new project, and replicate the setup you have in the development environment in that new project, in the Shared VPC.

D.

Ask the security team to grant you the Project Editor role in an existing production project used by another division of your company. Once they grant you that role, replicate the setup you have in the development environment in that project.

Question 24

Your company is moving from an on-premises environment to Google Cloud Platform (GCP). You have multiple development teams that use Cassandra environments as backend databases. They all need a development environment that is isolated from other Cassandra instances. You want to move to GCP quickly and with minimal support effort. What should you do?

Options:

A.

1. Build an instruction guide to install Cassandra on GCP.
2. Make the instruction guide accessible to your developers.

B.

1. Advise your developers to go to Cloud Marketplace.
2. Ask the developers to launch a Cassandra image for their development work.

C.

1. Build a Cassandra Compute Engine instance and take a snapshot of it.
2. Use the snapshot to create instances for your developers.

D.

1. Build a Cassandra Compute Engine instance and take a snapshot of it.
2. Upload the snapshot to Cloud Storage and make it accessible to your developers.
3. Build instructions to create a Compute Engine instance from the snapshot so that developers can do it themselves.

Question 25

You manage an App Engine Service that aggregates and visualizes data from BigQuery. The application is deployed with the default App Engine Service account. The data that needs to be visualized resides in a different project managed by another team. You do not have access to this project, but you want your application to be able to read data from the BigQuery dataset. What should you do?

Options:

A.

Ask the other team to grant your default App Engine Service account the role of BigQuery Job User.

B.

Ask the other team to grant your default App Engine Service account the role of BigQuery Data Viewer.

C.

In Cloud IAM of your project, ensure that the default App Engine service account has the role of BigQuery Data Viewer.

D.

In Cloud IAM of your project, grant a newly created service account from the other team the role of BigQuery Job User in your project.

Question 26

Your company has workloads running on Compute Engine and on-premises. The Google Cloud Virtual Private Cloud (VPC) is connected to your WAN over a Virtual Private Network (VPN). You need to deploy a new Compute Engine instance and ensure that no public Internet traffic can be routed to it. What should you do?

Options:

A.

Create the instance without a public IP address.

B.

Create the instance with Private Google Access enabled.

C.

Create a deny-all egress firewall rule on the VPC network.

D.

Create a route on the VPC to route all traffic to the instance over the VPN tunnel.

Question 27

You will have several applications running on different Compute Engine instances in the same project. You want to specify at a more granular level the service account each instance uses when calling Google Cloud APIs. What should you do?

Options:

A.

When creating the instances, specify a Service Account for each instance

B.

When creating the instances, assign the name of each Service Account as instance metadata

C.

After starting the instances, use gcloud compute instances update to specify a Service Account for each instance

D.

After starting the instances, use gcloud compute instances update to assign the name of the relevant Service Account as instance metadata
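
For reference, a service account is typically attached when the instance is created (it can later be changed with gcloud compute instances set-service-account while the VM is stopped); a minimal sketch with placeholder names:

    # Create the VM with its own dedicated service account
    gcloud compute instances create app-1-vm \
        --zone=us-central1-a \
        --service-account=app-1-sa@my-project.iam.gserviceaccount.com \
        --scopes=https://www.googleapis.com/auth/cloud-platform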

Question 28

You have been asked to set up Object Lifecycle Management for objects stored in storage buckets. The objects are written once and accessed frequently for 30 days. After 30 days, the objects are not read again unless there is a special need. The objects should be kept for three years, and you need to minimize cost. What should you do?

Options:

A.

Set up a policy that uses Nearline storage for 30 days and then moves to Archive storage for three years.

B.

Set up a policy that uses Standard storage for 30 days and then moves to Archive storage for three years.

C.

Set up a policy that uses Nearline storage for 30 days, then moves to Coldline for one year, and then moves to Archive storage for two years.

D.

Set up a policy that uses Standard storage for 30 days, then moves to Coldline for one year, and then moves to Archive storage for two years.

Question 29

Your application stores files on Cloud Storage by using the Standard Storage class. The application only requires access to files created in the last 30 days. You want to automatically save costs on files that are no longer accessed by the application. What should you do?

Options:

A.

Create a retention policy on the storage bucket of 30 days, and lock the bucket by using a retention policy lock.

B.

Enable object versioning on the storage bucket and add lifecycle rules to expire non-current versions after 30 days

C.

Create an object lifecycle on the storage bucket to change the storage class to Archive Storage for objects with an age over 30 days.

D.

Create a cron job in Cloud Scheduler to call a Cloud Functions instance every day to delete files older than 30 days.
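
As a sketch of an object lifecycle rule that transitions objects to a colder storage class after 30 days; the bucket name and the target class are placeholders chosen to match whichever policy you adopt:

    # lifecycle.json
    # {
    #   "rule": [
    #     {
    #       "action": {"type": "SetStorageClass", "storageClass": "ARCHIVE"},
    #       "condition": {"age": 30}
    #     }
    #   ]
    # }

    # Apply the lifecycle configuration to the bucket
    gcloud storage buckets update gs://my-app-bucket --lifecycle-file=lifecycle.json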

Question 30

You have an application running inside a Compute Engine instance. You want to provide the application with secure access to a BigQuery dataset. You must ensure that credentials are only valid for a short period of time, and your application will only have access to the intended BigQuery dataset. You want to follow Google-recommended practices and minimize your operational costs. What should you do?

Options:

A.

Attach a custom service account to the instance, and grant the service account the BigQuery Data Viewer IAM role on the project.

B.

Attach a new service account to the instance every hour, and grant the service account the BigQuery Data Viewer IAM role on the dataset.

C.

Attach a custom service account to the instance, and grant the service account the BigQuery Data Viewer IAM role on the dataset.

D.

Attach a new service account to the instance every hour, and grant the service account the BigQuery Data Viewer IAM role on the project.

Question 31

You are the organization and billing administrator for your company. The engineering team has the Project Creator role on the organization. You do not want the engineering team to be able to link projects to the billing account. Only the finance team should be able to link a project to a billing account, but they should not be able to make any other changes to projects. What should you do?

Options:

A.

Assign the finance team only the Billing Account User role on the billing account.

B.

Assign the engineering team only the Billing Account User role on the billing account.

C.

Assign the finance team the Billing Account User role on the billing account and the Project Billing Manager role on the organization.

D.

Assign the engineering team the Billing Account User role on the billing account and the Project Billing Manager role on the organization.

Question 32

You are planning to migrate your on-premises VMs to Google Cloud. You need to set up a landing zone in Google Cloud before migrating the VMs. You must ensure that all VMs in your production environment can communicate with each other through private IP addresses. You need to allow all VMs in your Google Cloud organization to accept connections on specific TCP ports. You want to follow Google-recommended practices, and you need to minimize your operational costs. What should you do?

Options:

A.

Create individual VPCs per Google Cloud project. Peer all the VPCs together. Apply organization policies on the organization level.

B.

Create individual VPCs for each Google Cloud project. Peer all the VPCs together. Apply hierarchical firewall policies on the organization level.

C.

Create a host VPC project with each production project as its service project. Apply organization policies on the organization level.

D.

Create a host VPC project with each production project as its service project. Apply hierarchical firewall policies on the organization level.

Question 33

Your web application is hosted on Cloud Run and needs to query a Cloud SQL database. Every morning during a traffic spike, you notice API quota errors in Cloud SQL logs. The project has already reached the maximum API quota. You want to make a configuration change to mitigate the issue. What should you do?

Options:

A.

Modify the minimum number of Cloud Run instances.

B.

Set a minimum concurrent requests environment variable for the application.

C.

Modify the maximum number of Cloud Run instances.

D.

Use traffic splitting.

Question 34

You are the project owner of a GCP project and want to delegate control to colleagues to manage buckets and files in Cloud Storage. You want to follow Google-recommended practices. Which IAM roles should you grant your colleagues?

Options:

A.

Project Editor

B.

Storage Admin

C.

Storage Object Admin

D.

Storage Object Creator

Question 35

Your projects incurred more costs than you expected last month. Your research reveals that a development GKE container emitted a huge number of logs, which resulted in higher costs. You want to disable the logs quickly using the minimum number of steps. What should you do?

Options:

A.

1. Go to the Logs ingestion window in Stackdriver Logging, and disable the log source for the GKE container resource.

B.

1. Go to the Logs ingestion window in Stackdriver Logging, and disable the log source for the GKE Cluster Operations resource.

C.

1. Go to the GKE console, and delete existing clusters.
2. Recreate a new cluster.
3. Clear the option to enable legacy Stackdriver Logging.

D.

1. Go to the GKE console, and delete existing clusters.
2. Recreate a new cluster.
3. Clear the option to enable legacy Stackdriver Monitoring.

Question 36

Your company completed the acquisition of a startup and is now merging the IT systems of both companies. The startup had a production Google Cloud project in their organization. You need to move this project into your organization and ensure that the project is billed to your organization. You want to accomplish this task with minimal effort. What should you do?

Options:

A.

Use the projects.move method to move the project to your organization. Update the billing account of the project to that of your organization.

B.

Ensure that you have an Organization Administrator Identity and Access Management (IAM) role assigned to you in both organizations. Navigate to the Resource Manager in the startup's Google Cloud organization, and drag the project to your company's organization.

C.

Create a Private Catalog for the Google Cloud Marketplace, and upload the resources of the startup's production project to the Catalog. Share the Catalog with your organization, and deploy the resources in your company's project.

D.

Create an infrastructure-as-code template for all resources in the project by using Terraform, and deploy that template to a new project in your organization. Delete the project from the startup's Google Cloud organization.

Question 37

You have an application that runs on Compute Engine VM instances in a custom Virtual Private Cloud (VPC). Your company's security policies only allow the use of internal IP addresses on VM instances and do not let VM instances connect to the internet. You need to ensure that the application can access a file hosted in a Cloud Storage bucket within your project. What should you do?

Options:

A.

Enable Private Service Access on the Cloud Storage Bucket.

B.

Add storage.googleapis.com to the list of restricted services in a VPC Service Controls perimeter and add your project to the list of protected projects.

C.

Enable Private Google Access on the subnet within the custom VPC.

D.

Deploy a Cloud NAT instance and route the traffic to the dedicated IP address of the Cloud Storage bucket.

Question 38

You have two Google Cloud projects: project-a with VPC vpc-a (10.0.0.0/16) and project-b with VPC vpc-b (10.8.0.0/16). Your frontend application resides in vpc-a and the backend API services are deployed in vpc-b. You need to efficiently and cost-effectively enable communication between these Google Cloud projects. You also want to follow Google-recommended practices. What should you do?

Options:

A.

Configure a Cloud Router in vpc-a and another Cloud Router in vpc-b.

B.

Configure a Cloud Interconnect connection between vpc-a and vpc-b.

C.

Create VPC Network Peering between vpc-a and vpc-b.

D.

Create an OpenVPN connection between vpc-a and vpc-b.

Question 39

You have a Bigtable instance that consists of three nodes that store personally identifiable information (PII) data. You need to log all read or write operations, including any metadata or configuration reads of this database table, in your company's Security Information and Event Management (SIEM) system. What should you do?

Options:

A.

• Navigate to Cloud Monitoring in the Google Cloud console, and create a custom monitoring job for the Bigtable instance to track all changes.
• Create an alert by using webhook endpoints, with the SIEM endpoint as a receiver.

B.

• Navigate to the Audit Logs page in the Google Cloud console, and enable Data Read, Data Write, and Admin Read logs for the Bigtable instance.
• Create a Pub/Sub topic as a Cloud Logging sink destination, and add your SIEM as a subscriber to the topic.

C.

• Install the Ops Agent on the Bigtable instance during configuration.
• Create a service account with read permissions for the Bigtable instance.
• Create a custom Dataflow job with this service account to export logs to the company's SIEM system.

D.

• Navigate to the Audit Logs page in the Google Cloud console, and enable Admin Write logs for the Bigtable instance.
• Create a Cloud Functions instance to export logs from Cloud Logging to your SIEM.

Question 40

You assist different engineering teams in deploying their infrastructure on Google Cloud. Your company has defined certain practices required for all workloads. You need to provide the engineering teams with a solution that enables teams to deploy their infrastructure independently without having to know all implementation details of the company's required practices. What should you do?

Options:

A.

Create a service account per team, and grant the service account the Project Editor role. Ask the teams to provision their infrastructure through the Google Cloud CLI (gcloud CLI), while impersonating their dedicated service account.

B.

Provide training for all engineering teams you work with to understand the company’s required practices. Allow the engineering teams to provision the infrastructure to best meet their needs.

C.

Configure organization policies to enforce your company’s required practices. Ask the teams to provision their infrastructure by using the Google Cloud console.

D.

Write Terraform modules for each component that are compliant with the company’s required practices, and ask teams to implement their infrastructure through these modules.

Question 41

You have been asked to migrate a docker application from datacenter to cloud. Your solution architect has suggested uploading docker images to GCR in one project and running an application in a GKE cluster in a separate project. You want to store images in the project img-278322 and run the application in the project prod-278986. You want to tag the image as acme_track_n_trace:v1. You want to follow Google-recommended practices. What should you do?

Options:

A.

Run gcloud builds submit --tag gcr.io/img-278322/acme_track_n_trace

B.

Run gcloud builds submit --tag gcr.io/img-278322/acme_track_n_trace:v1

C.

Run gcloud builds submit --tag gcr.io/prod-278986/acme_track_n_trace

D.

Run gcloud builds submit --tag gcr.io/prod-278986/acme_track_n_trace:v1

Question 42

You deployed an LDAP server on Compute Engine that is reachable via TLS through port 636 using UDP. You want to make sure it is reachable by clients over that port. What should you do?

Options:

A.

Add the network tag allow-udp-636 to the VM instance running the LDAP server.

B.

Create a route called allow-udp-636 and set the next hop to be the VM instance running the LDAP server.

C.

Add a network tag of your choice to the instance. Create a firewall rule to allow ingress on UDP port 636 for that network tag.

D.

Add a network tag of your choice to the instance running the LDAP server. Create a firewall rule to allow egress on UDP port 636 for that network tag.
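
A minimal sketch of the tag-plus-firewall approach; the tag name, network, zone, and source range are hypothetical values:

    # Tag the instance running the LDAP server
    gcloud compute instances add-tags ldap-server \
        --zone=us-central1-a \
        --tags=ldap-636

    # Allow ingress UDP 636 only to instances carrying that tag
    gcloud compute firewall-rules create allow-udp-636 \
        --network=default \
        --direction=INGRESS \
        --action=ALLOW \
        --rules=udp:636 \
        --target-tags=ldap-636 \
        --source-ranges=0.0.0.0/0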

Question 43

Your coworker has helped you set up several configurations for gcloud. You've noticed that you're running commands against the wrong project. Being new to the company, you haven't yet memorized any of the projects. With the fewest steps possible, what's the fastest way to switch to the correct configuration?

Options:

A.

Run gcloud configurations list followed by gcloud configurations activate .

B.

Run gcloud config list followed by gcloud config activate.

C.

Run gcloud config configurations list followed by gcloud config configurations activate.

D.

Re-authenticate with the gcloud auth login command and select the correct configurations on login.
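
For reference, named gcloud configurations are listed and switched with the config configurations command group; the configuration name below is a placeholder:

    # Show all named configurations and which one is active
    gcloud config configurations list

    # Switch to the configuration that points at the correct project
    gcloud config configurations activate my-other-config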

Question 44

You have a development project with appropriate IAM roles defined. You are creating a production project and want to have the same IAM roles on the new project, using the fewest possible steps. What should you do?

Options:

A.

Use gcloud iam roles copy and specify the production project as the destination project.

B.

Use gcloud iam roles copy and specify your organization as the destination organization.

C.

In the Google Cloud Platform Console, use the ‘create role from role’ functionality.

D.

In the Google Cloud Platform Console, use the ‘create role’ functionality and select all applicable permissions.
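
As a sketch of copying custom roles between projects on the command line; the role and project IDs are placeholders:

    # Copy a custom role from the development project into the production project
    gcloud iam roles copy \
        --source=myCustomRole \
        --source-project=dev-project-id \
        --destination=myCustomRole \
        --dest-project=prod-project-id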

Question 45

Your company has multiple projects linked to a single billing account in Google Cloud. You need to visualize the costs with specific metrics that should be dynamically calculated based on company-specific criteria. You want to automate the process. What should you do?

Options:

A.

In the Google Cloud console, visualize the costs related to the projects in the Reports section.

B.

In the Google Cloud console, visualize the costs related to the projects in the Cost breakdown section.

C.

In the Google Cloud console, use the export functionality of the Cost table. Create a Looker Studio dashboard on top of the CSV export.

D.

Configure Cloud Billing data export to BigQuery for the billing account. Create a Looker Studio dashboard on top of the BigQuery export.

Question 46

You installed the Google Cloud CLI on your workstation and set the proxy configuration. However, you are worried that your proxy credentials will be recorded in the gcloud CLI logs. You want to prevent your proxy credentials from being logged. What should you do?

Options:

A.

Configure username and password by using the gcloud config set proxy/username and gcloud config set proxy/password commands.

B.

Encode username and password in sha256 encoding, and save it to a text file. Use the filename as a value in the gcloud config set core/custom_ca_certs_file command.

C.

Provide values for CLOUDSDK_USERNAME and CLOUDSDK_PASSWORD in the gcloud CLI tool configure file.

D.

Set the CLOUDSDK_PROXY_USERNAME and CLOUDSDK_PROXY_PASSWORD properties by using environment variables in your command line tool.

Question 47

Your company's security vulnerability management policy wants a member of the security team to have visibility into vulnerabilities and other OS metadata for a specific Compute Engine instance. This Compute Engine instance hosts a critical application in your Google Cloud project. You need to implement your company's security vulnerability management policy. What should you do?

Options:

A.

• Ensure that the Ops Agent is installed on the Compute Engine instance.
• Create a custom metric in the Cloud Monitoring dashboard.
• Provide the security team member with access to this dashboard.

B.

• Ensure that the Ops Agent is installed on the Compute Engine instance.
• Provide the security team member the roles/osconfig.inventoryViewer permission.

C.

• Ensure that the OS Config agent is installed on the Compute Engine instance.
• Provide the security team member the roles/osconfig.vulnerabilityViewer permission.

D.

• Ensure that the OS Config agent is installed on the Compute Engine instance.
• Create a log sink to a BigQuery dataset.
• Provide the security team member with access to this dataset.

Question 48

You have an application on a general-purpose Compute Engine instance that is experiencing excessive disk read throttling on its Zonal SSD Persistent Disk. The application primarily reads large files from disk. The disk size is currently 350 GB. You want to provide the maximum amount of throughput while minimizing costs. What should you do?

Options:

A.

Increase the size of the disk to 1 TB.

B.

Increase the allocated CPU to the instance.

C.

Migrate to use a Local SSD on the instance.

D.

Migrate to use a Regional SSD on the instance.

Question 49

You have files in a Cloud Storage bucket that you need to share with your suppliers. You want to restrict the time that the files are available to your suppliers to 1 hour. You want to follow Google recommended practices. What should you do?

Options:

A.

Create a service account with just the permissions to access files in the bucket. Create a JSON key for the service account. Execute the command gsutil signurl -m 1h gs:///*.

B.

Create a service account with just the permissions to access files in the bucket. Create a JSON key for the service account. Execute the command gsutil signurl -d 1h gs:///.

C.

Create a service account with just the permissions to access files in the bucket. Create a JSON key for the service account. Execute the command gsutil signurl -p 60m gs:///.

D.

Create a JSON key for the Default Compute Engine Service Account. Execute the command gsutil signurl -t 60m gs:///*
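
For illustration, a time-limited signed URL can be generated with gsutil signurl and a service account key; the key file, bucket, and object names below are placeholders:

    # Generate a signed URL that expires after 1 hour (-d sets the duration)
    gsutil signurl -d 1h supplier-sa-key.json gs://supplier-files/report.pdf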

Question 50

You want to set up a Google Kubernetes Engine cluster. Verifiable node identity and integrity are required for the cluster, and nodes cannot be accessed from the internet. You want to reduce the operational cost of managing your cluster, and you want to follow Google-recommended practices. What should you do?

Options:

A.

Deploy a private autopilot cluster

B.

Deploy a public autopilot cluster.

C.

Deploy a standard public cluster and enable shielded nodes.

D.

Deploy a standard private cluster and enable shielded nodes.

Question 51

You have an application that uses Cloud Spanner as a database backend to keep current state information about users. Cloud Bigtable logs all events triggered by users. You export Cloud Spanner data to Cloud Storage during daily backups. One of your analysts asks you to join data from Cloud Spanner and Cloud Bigtable for specific users. You want to complete this ad hoc request as efficiently as possible. What should you do?

Options:

A.

Create a Dataflow job that copies data from Cloud Bigtable and Cloud Storage for specific users.

B.

Create a Dataflow job that copies data from Cloud Bigtable and Cloud Spanner for specific users.

C.

Create a Cloud Dataproc cluster that runs a Spark job to extract data from Cloud Bigtable and Cloud Storage for specific users.

D.

Create two separate BigQuery external tables on Cloud Storage and Cloud Bigtable. Use the BigQuery console to join these tables through user fields, and apply appropriate filters.

Question 52

You are using Container Registry to centrally store your company’s container images in a separate project. In another project, you want to create a Google Kubernetes Engine (GKE) cluster. You want to ensure that Kubernetes can download images from Container Registry. What should you do?

Options:

A.

In the project where the images are stored, grant the Storage Object Viewer IAM role to the service account used by the Kubernetes nodes.

B.

When you create the GKE cluster, choose the Allow full access to all Cloud APIs option under ‘Access scopes’.

C.

Create a service account, and give it access to Cloud Storage. Create a P12 key for this service account and use it as an imagePullSecrets in Kubernetes.

D.

Configure the ACLs on each image in Cloud Storage to give read-only access to the default Compute Engine service account.

Question 53

Your company has a large quantity of unstructured data in different file formats. You want to perform ETL transformations on the data. You need to make the data accessible on Google Cloud so it can be processed by a Dataflow job. What should you do?

Options:

A.

Upload the data to BigQuery using the bq command line tool.

B.

Upload the data to Cloud Storage using the gsutil command line tool.

C.

Upload the data into Cloud SQL using the import function in the console.

D.

Upload the data into Cloud Spanner using the import function in the console.

Question 54

You need to produce a list of the enabled Google Cloud Platform APIs for a GCP project using the gcloud command line in the Cloud Shell. The project name is my-project. What should you do?

Options:

A.

Run gcloud projects list to get the project ID, and then run gcloud services list --project .

B.

Run gcloud init to set the current project to my-project, and then run gcloud services list --available.

C.

Run gcloud info to view the account value, and then run gcloud services list --account .

D.

Run gcloud projects describe to verify the project value, and then run gcloud services list --available.
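
A minimal sketch of listing only the enabled APIs for the project; the project ID below is a placeholder looked up from the project name:

    # Find the project ID for the project named my-project
    gcloud projects list --filter="name:my-project"

    # List the APIs that are enabled (not merely available) in that project
    gcloud services list --enabled --project=my-project-id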

Question 55

You are planning to migrate a database and a backend application to a Standard Google Kubernetes Engine (GKE) cluster. You need to prevent data loss and make sure there are enough nodes available for your backend application based on the demands of your workloads. You want to follow Google-recommended practices and minimize the amount of manual work required. What should you do?

Options:

A.

Run your database as a StatefulSet. Configure cluster autoscaling to handle changes in the demands of your workloads.

B.

Run your database as a single Pod. Run the resize command when you notice changes in the demands of your workloads.

C.

Run your database as a Deployment. Configure cluster autoscaling to handle changes in the demands of your workloads.

D.

Run your database as a DaemonSet. Run the resize command when you notice changes in the demands of your workloads.

Question 56

You want to deploy a new containerized application into Google Cloud by using a Kubernetes manifest. You want to have full control over the Kubernetes deployment, and at the same time, you want to minimize configuring infrastructure. What should you do?

Options:

A.

Deploy the application on GKE Autopilot.

B.

Deploy the application on GKE Standard.

C.

Deploy the application on Cloud Functions.

D.

Deploy the application on Cloud Run.

Question 57

You need to create an autoscaling managed instance group for an HTTPS web application. You want to make sure that unhealthy VMs are recreated. What should you do?

Options:

A.

Create a health check on port 443 and use that when creating the Managed Instance Group.

B.

Select Multi-Zone instead of Single-Zone when creating the Managed Instance Group.

C.

In the Instance Template, add the label ‘health-check’.

D.

In the Instance Template, add a startup script that sends a heartbeat to the metadata server.

Question 58

Your team is running an on-premises ecommerce application. The application contains a complex set of microservices written in Python, and each microservice is running on Docker containers. Configurations are injected by using environment variables. You need to deploy your current application to a serverless Google Cloud solution. What should you do?

Options:

A.

Use your existing CI/CD pipeline. Use the generated Docker images and deploy them to Cloud Run. Update the configurations and the required endpoints.

B.

Use your existing continuous integration and delivery (CI/CD) pipeline. Use the generated Docker images and deploy them to Cloud Functions. Use the same configuration as on-premises.

C.

Use the existing codebase and deploy each service as a separate Cloud Function. Update the configurations and the required endpoints.

D.

Use your existing codebase and deploy each service as a separate Cloud Run service. Use the same configurations as on-premises.

Question 59

You need to verify that a Google Cloud Platform service account was created at a particular time. What should you do?

Options:

A.

Filter the Activity log to view the Configuration category. Filter the Resource type to Service Account.

B.

Filter the Activity log to view the Configuration category. Filter the Resource type to Google Project.

C.

Filter the Activity log to view the Data Access category. Filter the Resource type to Service Account.

D.

Filter the Activity log to view the Data Access category. Filter the Resource type to Google Project.

Question 60

You are working for a hospital that stores its medical images in an on-premises data room. The hospital wants to use Cloud Storage for archival storage of these images. The hospital wants an automated process to upload any new medical images to Cloud Storage. You need to design and implement a solution. What should you do?

Options:

A.

Deploy a Dataflow job from the batch template "Datastore to Cloud Storage". Schedule the batch job on the desired interval.

B.

In the Cloud Console, go to Cloud Storage. Upload the relevant images to the appropriate bucket.

C.

Create a script that uses the gsutil command line interface to synchronize the on-premises storage with Cloud Storage. Schedule the script as a cron job.

D.

Create a Pub/Sub topic, and enable a Cloud Storage trigger for the Pub/Sub topic. Create an application that sends all medical images to the Pub/Sub topic.

Question 61

You are creating an application that will run on Google Kubernetes Engine. You have identified MongoDB as the most suitable database system for your application and want to deploy a managed MongoDB environment that provides a support SLA. What should you do?

Options:

A.

Create a Cloud Bigtable cluster and use the HBase API

B.

Deploy MongoDB Atlas from the Google Cloud Marketplace

C.

Download a MongoDB installation package and run it on Compute Engine instances

D.

Download a MongoDB installation package, and run it on a Managed Instance Group

Question 62

You want to verify the IAM users and roles assigned within a GCP project named my-project. What should you do?

Options:

A.

Run gcloud iam roles list. Review the output section.

B.

Run gcloud iam service-accounts list. Review the output section.

C.

Navigate to the project and then to the IAM section in the GCP Console. Review the members and roles.

D.

Navigate to the project and then to the Roles section in the GCP Console. Review the roles and status.
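
Besides the console, the same membership information can be read on the command line; a sketch using the project ID from the question:

    # Show each member and the role bound to it in my-project
    gcloud projects get-iam-policy my-project \
        --flatten="bindings[].members" \
        --format="table(bindings.role, bindings.members)"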

Question 63

Your auditor wants to view your organization's use of data in Google Cloud. The auditor is most interested in auditing who accessed data in Cloud Storage buckets. You need to help the auditor access the data they need. What should you do?

Options:

A.

Assign the appropriate permissions, and then use Cloud Monitoring to review metrics

B.

Use the export logs API to provide the Admin Activity Audit Logs in the format they want

C.

Turn on Data Access Logs for the buckets they want to audit, and then build a query in the log viewer that filters on Cloud Storage

D.

Assign the appropriate permissions, and then create a Data Studio report on Admin Activity Audit Logs

Question 64

Your company was recently impacted by a service disruption that caused multiple Dataflow jobs to get stuck, resulting in significant downtime in downstream applications and revenue loss. You were able to resolve the issue by identifying and fixing an error you found in the code. You need to design a solution with minimal management effort to identify when jobs are stuck in the future to ensure that this issue does not occur again. What should you do?

Options:

A.

Set up Error Reporting to identify stack traces that indicate slowdowns in Dataflow jobs. Set up alerts based on these log entries.

B.

Use the Personalized Service Health dashboard to identify issues with Dataflow jobs across regions.

C.

Update the Dataflow job configurations to send messages to a Pub/Sub topic when there are delays. Configure a backup Dataflow job to process jobs that are delayed. Use Cloud Tasks to trigger an alert when messages are pushed to the Pub/Sub topic.

D.

Set up Cloud Monitoring alerts on the data freshness metric for the Dataflow jobs to receive a notification when a certain threshold is reached.

Question 65

You created a Kubernetes deployment by running kubectl run nginx --image=nginx --replicas=1. After a few days, you decided you no longer want this deployment. You identified the pod and deleted it by running kubectl delete pod. You noticed the pod got recreated.

$ kubectl get pods

NAME READY STATUS RESTARTS AGE

nginx-84748895c4-nqqmt 1/1 Running 0 9m41s

$ kubectl delete pod nginx-84748895c4-nqqmt

pod nginx-84748895c4-nqqmt deleted

$ kubectl get pods

NAME READY STATUS RESTARTS AGE

nginx-84748895c4-k6bzl 1/1 Running 0 25s

What should you do to delete the deployment and avoid pod getting recreated?

Options:

A.

kubectl delete deployment nginx

B.

kubectl delete --deployment=nginx

C.

kubectl delete pod nginx-84748895c4-k6bzl --no-restart 2

D.

kubectl delete inginx

Question 66

You want to permanently delete a Pub/Sub topic managed by Config Connector in your Google Cloud project. What should you do?

Options:

A.

Use kubectl to delete the topic resource.

B.

Use gcloud CLI to delete the topic.

C.

Use kubectl to create the label deleted-by-cnrm and to change its value to true for the topic resource.

D.

Use gcloud CLI to update the topic label managed-by-cnrm to false.

Question 67

The DevOps group in your organization needs full control of Compute Engine resources in your development project. However, they should not have permission to create or update any other resources in the project. You want to follow Google's recommendations for setting permissions for the DevOps group. What should you do?

Options:

A.

Grant the basic role roles/viewer and the predefined role roles/compute.admin to the DevOps group.

B.

Create an IAM policy and grant all compute.instanceAdmin.* permissions to the policy. Attach the policy to the DevOps group.

C.

Create a custom role at the folder level and grant all compute.instanceAdmin.* permissions to the role. Grant the custom role to the DevOps group.

D.

Grant the basic role roles/editor to the DevOps group.

Question 68

You want to run a single caching HTTP reverse proxy on GCP for a latency-sensitive website. This specific reverse proxy consumes almost no CPU. You want to have a 30-GB in-memory cache, and need an additional 2 GB of memory for the rest of the processes. You want to minimize cost. How should you run this reverse proxy?

Options:

A.

Create a Cloud Memorystore for Redis instance with 32-GB capacity.

B.

Run it on Compute Engine, and choose a custom instance type with 6 vCPUs and 32 GB of memory.

C.

Package it in a container image, and run it on Kubernetes Engine, using n1-standard-32 instances as nodes.

D.

Run it on Compute Engine, choose the instance type n1-standard-1, and add an SSD persistent disk of 32 GB.

Question 69

Your existing application running in Google Kubernetes Engine (GKE) consists of multiple pods running on four GKE n1-standard-2 nodes. You need to deploy additional pods requiring n2-highmem-16 nodes without any downtime. What should you do?

Options:

A.

Use gcloud container clusters upgrade. Deploy the new services.

B.

Create a new Node Pool and specify machine type n2-highmem-16. Deploy the new pods.

C.

Create a new cluster with n2-highmem-16 nodes. Redeploy the pods and delete the old cluster.

D.

Create a new cluster with both n1-standard-2 and n2-highmem-16 nodes. Redeploy the pods and delete the old cluster.
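
As a rough sketch of adding a second node pool to the existing cluster without touching the current n1-standard-2 nodes; the cluster name, zone, and node count are placeholders:

    # Add a node pool with the larger machine type
    gcloud container node-pools create highmem-pool \
        --cluster=my-cluster \
        --zone=us-central1-a \
        --machine-type=n2-highmem-16 \
        --num-nodes=3

    # New pods can target this pool, for example with a nodeSelector on
    # the node label cloud.google.com/gke-nodepool: highmem-pool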

Question 70

You have a VM instance running in a VPC with single-stack subnets. You need to ensure that the VM instance has a fixed IP address so that other services hosted in the same VPC can communicate with the VM. You want to follow Google-recommended practices while minimizing cost. What should you do?

Options:

A.

Reserve a new static external IP address and assign the new IP address to the VM.

B.

Promote the existing IP address of the VM to become a static external IP address.

C.

Reserve a new static external IPv6 address and assign the new IP address to the VM.

D.

Promote the existing IP address of the VM to become a static internal IP address.

Question 71

Your managed instance group raised an alert stating that new instance creation has failed to create new instances. You need to maintain the number of running instances specified by the template to be able to process expected application traffic. What should you do?

Options:

A.

Create an instance template that contains valid syntax which will be used by the instance group. Delete any persistent disks with the same name as instance names.

B.

Create an instance template that contains valid syntax that will be used by the instance group. Verify that the instance name and persistent disk name values are not the same in the template.

C.

Verify that the instance template being used by the instance group contains valid syntax. Delete any persistent disks with the same name as instance names. Set the disks.autoDelete property to true in the instance template.

D.

Delete the current instance template and replace it with a new instance template. Verify that the instance name and persistent disk name values are not the same in the template. Set the disks.autoDelete property to true in the instance template.

Question 72

Your organization has decided to deploy all its compute workloads to Kubernetes on Google Cloud and two other cloud providers. You want to build an infrastructure-as-code solution to automate the provisioning process for all cloud resources. What should you do?

Options:

A.

Build the solution by using YAML manifests, and provision the resources.

B.

Build the solution by using Terraform, and provision the resources.

C.

Build the solution by using Python and the cloud SDKs from all providers to provision the resources.

D.

Build the solution by using Config Connector, and provision the resources.
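
For reference, once providers and modules are written, a Terraform-based workflow (option B) is typically driven by a few CLI steps; the plan file name below is an arbitrary choice.

    # Minimal sketch of the provisioning loop for a Terraform configuration.
    terraform init                 # download provider plugins and modules
    terraform plan -out=tfplan     # preview the changes across all clouds
    terraform apply tfplan         # provision the resources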

Question 73

You have 32 GB of data in a single file that you need to upload to a Nearline Storage bucket. The WAN connection you are using is rated at 1 Gbps, and you are the only one on the connection. You want to use as much of the rated 1 Gbps as possible to transfer the file rapidly. How should you upload the file?

Options:

A.

Use the GCP Console to transfer the file instead of gsutil.

B.

Enable parallel composite uploads using gsutil on the file transfer.

C.

Decrease the TCP window size on the machine initiating the transfer.

D.

Change the storage class of the bucket from Nearline to Multi-Regional.
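
For reference, enabling parallel composite uploads for a single transfer (option B) could look roughly like this; the file name, bucket name, and threshold value are placeholders.

    # Minimal sketch: split the upload into parallel components above the threshold.
    gsutil -o GSUtil:parallel_composite_upload_threshold=150M \
        cp large-file.dat gs://my-nearline-bucket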

Question 74

Your company runs one batch process in an on-premises server that takes around 30 hours to complete. The task runs monthly, can be performed offline, and must be restarted if interrupted. You want to migrate this workload to the cloud while minimizing cost. What should you do?

Options:

A.

Migrate the workload to a Compute Engine Preemptible VM.

B.

Migrate the workload to a Google Kubernetes Engine cluster with Preemptible nodes.

C.

Migrate the workload to a Compute Engine VM. Start and stop the instance as needed.

D.

Create an Instance Template with Preemptible VMs On. Create a Managed Instance Group from the template and adjust Target CPU Utilization. Migrate the workload.
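
For reference, a preemptible Compute Engine VM (option A) could be created roughly as follows; the instance name, zone, and machine type are placeholders.

    # Minimal sketch: a preemptible VM for the restartable monthly batch job.
    gcloud compute instances create batch-worker \
        --zone=us-central1-a \
        --machine-type=e2-standard-4 \
        --preemptible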

Question 75

You need to create a Compute Engine instance in a new project that doesn’t exist yet. What should you do?

Options:

A.

Using the Cloud SDK, create a new project, enable the Compute Engine API in that project, and then create the instance specifying your new project.

B.

Enable the Compute Engine API in the Cloud Console, use the Cloud SDK to create the instance, and then use the --project flag to specify a new project.

C.

Using the Cloud SDK, create the new instance, and use the --project flag to specify the new project. Answer yes when prompted by the Cloud SDK to enable the Compute Engine API.

D.

Enable the Compute Engine API in the Cloud Console. Go to the Compute Engine section of the Console to create a new instance, and look for the Create In A New Project option in the creation form.
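
For reference, the Cloud SDK flow in option A could look roughly like the sketch below; the project ID, zone, and instance name are placeholders, and a billing account would still need to be linked to the new project before resources can be created.

    # Minimal sketch: new project, enable the API, then create the instance.
    gcloud projects create my-new-project
    gcloud services enable compute.googleapis.com --project=my-new-project
    gcloud compute instances create my-vm \
        --project=my-new-project \
        --zone=us-central1-a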

Question 76

You are setting up a Windows VM on Compute Engine and want to make sure you can log in to the VM via RDP. What should you do?

Options:

A.

After the VM has been created, use your Google Account credentials to log in to the VM.

B.

After the VM has been created, use gcloud compute reset-windows-password to retrieve the login credentials for the VM.

C.

When creating the VM, add metadata to the instance using ‘windows-password’ as the key and a password as the value.

D.

After the VM has been created, download the JSON private key for the default Compute Engine service account. Use the credentials in the JSON file to log in to the VM.
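
For reference, generating Windows credentials after the VM exists (option B) could look roughly like this; the instance name, zone, and username are placeholders.

    # Minimal sketch: create or reset a Windows account and print its password.
    gcloud compute reset-windows-password my-windows-vm \
        --zone=us-central1-a \
        --user=admin-user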

Question 77

You want to send and consume Cloud Pub/Sub messages from your App Engine application. The Cloud Pub/Sub API is currently disabled. You will use a service account to authenticate your application to the API. You want to make sure your application can use Cloud Pub/Sub. What should you do?

Options:

A.

Enable the Cloud Pub/Sub API in the API Library on the GCP Console.

B.

Rely on the automatic enablement of the Cloud Pub/Sub API when the Service Account accesses it.

C.

Use Deployment Manager to deploy your application. Rely on the automatic enablement of all APIs used by the application being deployed.

D.

Grant the App Engine Default service account the role of Cloud Pub/Sub Admin. Have your application enable the API on the first connection to Cloud Pub/Sub.
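
For reference, enabling the API as described in option A is a single command; the project ID is a placeholder.

    # Minimal sketch: enable the Cloud Pub/Sub API for the project hosting the app.
    gcloud services enable pubsub.googleapis.com --project=my-app-project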

Question 78

You are running an application on multiple virtual machines within a managed instance group and have autoscaling enabled. The autoscaling policy is configured so that additional instances are added to the group if the CPU utilization of instances goes above 80%. VMs are added until the instance group reaches its maximum limit of five VMs or until CPU utilization of instances drops back to 80% or below. The initial delay for HTTP health checks against the instances is set to 30 seconds. The virtual machine instances take around three minutes to become available for users. You observe that when the instance group autoscales, it adds more instances than necessary to support the levels of end-user traffic. You want to properly maintain instance group sizes when autoscaling. What should you do?

Options:

A.

Set the maximum number of instances to 1.

B.

Decrease the maximum number of instances to 3.

C.

Use a TCP health check instead of an HTTP health check.

D.

Increase the initial delay of the HTTP health check to 200 seconds.
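
For reference, raising the health-check initial delay (option D) could look roughly like the sketch below; the group name, zone, and health check name are placeholders.

    # Minimal sketch: give new instances 200 seconds before health checks count against them.
    gcloud compute instance-groups managed update my-mig \
        --zone=us-central1-a \
        --health-check=my-http-check \
        --initial-delay=200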

Question 79

You need to create a custom IAM role for use with a GCP service. All permissions in the role must be suitable for production use. You also want to clearly share with your organization the status of the custom role. This will be the first version of the custom role. What should you do?

Options:

A.

Use permissions in your role that use the ‘supported’ support level for role permissions. Set the role stage to ALPHA while testing the role permissions.

B.

Use permissions in your role that use the ‘supported’ support level for role permissions. Set the role stage to BETA while testing the role permissions.

C.

Use permissions in your role that use the ‘testing’ support level for role permissions. Set the role stage to ALPHA while testing the role permissions.

D.

Use permissions in your role that use the ‘testing’ support level for role permissions. Set the role stage to BETA while testing the role permissions.
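
For reference, a custom role's permissions and lifecycle stage are both set when the role is created; the sketch below uses placeholder IDs and permissions, and the stage value is whichever status you want to communicate (ALPHA, BETA, or GA).

    # Minimal sketch: create the first version of a custom role with an explicit stage.
    gcloud iam roles create customStorageViewer \
        --organization=123456789012 \
        --title="Custom Storage Viewer" \
        --permissions=storage.buckets.get,storage.objects.get,storage.objects.list \
        --stage=ALPHA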

Question 80

You need to create a copy of a custom Compute Engine virtual machine (VM) to facilitate an expected increase in application traffic due to a business acquisition. What should you do?

Options:

A.

Create a Compute Engine snapshot of your base VM. Create your images from that snapshot.

B.

Create a Compute Engine snapshot of your base VM. Create your instances from that snapshot.

C.

Create a custom Compute Engine image from a snapshot. Create your images from that image.

D.

Create a custom Compute Engine image from a snapshot. Create your instances from that image.
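
For reference, the snapshot-to-image-to-instance flow in option D could look roughly like this; the disk, snapshot, image, and instance names are placeholders.

    # Minimal sketch: snapshot the boot disk, build an image, create copies from it.
    gcloud compute disks snapshot base-vm \
        --zone=us-central1-a \
        --snapshot-names=base-vm-snap
    gcloud compute images create base-vm-image \
        --source-snapshot=base-vm-snap
    gcloud compute instances create app-copy-1 \
        --zone=us-central1-a \
        --image=base-vm-image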

Question 81

Your company requires all developers to have the same permissions, regardless of the Google Cloud project they are working on. Your company's security policy also restricts developer permissions to Compute Engine, Cloud Functions, and Cloud SQL. You want to implement the security policy with minimal effort. What should you do?

Options:

A.

• Create a custom role with Compute Engine, Cloud Functions, and Cloud SQL permissions in one project within the Google Cloud organization.
• Copy the role across all projects created within the organization with the gcloud iam roles copy command.
• Assign the role to developers in those projects.

B.

• Add all developers to a Google group in Google Groups for Workspace.
• Assign the predefined role of Compute Admin to the Google group at the Google Cloud organization level.

C.

• Add all developers to a Google group in Cloud Identity.
• Assign predefined roles for Compute Engine, Cloud Functions, and Cloud SQL permissions to the Google group for each project in the Google Cloud organization.

D.

• Add all developers to a Google group in Cloud Identity.
• Create a custom role with Compute Engine, Cloud Functions, and Cloud SQL permissions at the Google Cloud organization level.
• Assign the custom role to the Google group.
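
For reference, an organization-level custom role granted to a group (option D) could look roughly like the sketch below; the organization ID, role ID, group address, and permission list are placeholders.

    # Minimal sketch: one org-level role, bound once to the developer group.
    gcloud iam roles create developerRole \
        --organization=123456789012 \
        --title="Restricted Developer" \
        --permissions=compute.instances.create,cloudfunctions.functions.create,cloudsql.instances.create \
        --stage=GA
    gcloud organizations add-iam-policy-binding 123456789012 \
        --member="group:developers@example.com" \
        --role="organizations/123456789012/roles/developerRole"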

Question 82

You deployed a new application inside your Google Kubernetes Engine cluster using the YAML file specified below.

[YAML manifest not reproduced in this copy]

You check the status of the deployed pods and notice that one of them is still in PENDING status:

[Pod status output not reproduced in this copy]

You want to find out why the pod is stuck in pending status. What should you do?

Options:

A.

Review details of the myapp-service Service object and check for error messages.

B.

Review details of the myapp-deployment Deployment object and check for error messages.

C.

Review details of myapp-deployment-58ddbbb995-lp86m Pod and check for warning messages.

D.

View logs of the container in myapp-deployment-58ddbbb995-lp86m pod and check for warning messages.
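
For reference, inspecting a Pod's events (option C) is typically done with kubectl describe; the Pod name below is taken from the question's output.

    # Minimal sketch: the Events section at the bottom usually explains a Pending status.
    kubectl describe pod myapp-deployment-58ddbbb995-lp86m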

Question 83

You have a developer laptop with the Cloud SDK installed on Ubuntu. The Cloud SDK was installed from the Google Cloud Ubuntu package repository. You want to test your application locally on your laptop with Cloud Datastore. What should you do?

Options:

A.

Export Cloud Datastore data using gcloud datastore export.

B.

Create a Cloud Datastore index using gcloud datastore indexes create.

C.

Install the google-cloud-sdk-datastore-emulator component using the apt-get install command.

D.

Install the cloud-datastore-emulator component using the gcloud components install command.
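
For reference, the emulator component names differ by install method; the sketch below assumes the apt-based SDK install described in the question.

    # Minimal sketch: install the emulator via apt (SDK installed from the Ubuntu repo),
    # then start it locally. With a non-apt SDK install, the equivalent would be
    # "gcloud components install cloud-datastore-emulator".
    sudo apt-get install google-cloud-sdk-datastore-emulator
    gcloud beta emulators datastore start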

Question 84

An employee was terminated, but their access to Google Cloud Platform (GCP) was not removed until 2 weeks later. You need to find out whether this employee accessed any sensitive customer information after their termination. What should you do?

Options:

A.

View System Event Logs in Stackdriver. Search for the user’s email as the principal.

B.

View System Event Logs in Stackdriver. Search for the service account associated with the user.

C.

View Data Access audit logs in Stackdriver. Search for the user’s email as the principal.

D.

View the Admin Activity log in Stackdriver. Search for the service account associated with the user.
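
For reference, Data Access audit logs can be filtered by principal with gcloud; the email address and time window below are placeholders, and the exact filter string should be verified against the Cloud Logging documentation.

    # Sketch: read Data Access audit log entries attributed to the former employee.
    gcloud logging read \
      'logName:"cloudaudit.googleapis.com%2Fdata_access" AND protoPayload.authenticationInfo.principalEmail="former.employee@example.com"' \
      --freshness=30d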

Question 85

You have a Google Cloud Platform account with access to both production and development projects. You need to create an automated process to list all compute instances in development and production projects on a daily basis. What should you do?

Options:

A.

Create two configurations using gcloud config. Write a script that sets configurations as active, individually. For each configuration, use gcloud compute instances list to get a list of compute resources.

B.

Create two configurations using gsutil config. Write a script that sets configurations as active, individually. For each configuration, use gsutil compute instances list to get a list of compute resources.

C.

Go to Cloud Shell and export this information to Cloud Storage on a daily basis.

D.

Go to GCP Console and export this information to Cloud SQL on a daily basis.
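
For reference, the two-configuration approach in option A could be scripted roughly as follows; the configuration names and project IDs are placeholders.

    # Minimal sketch: one gcloud configuration per project, then list instances in each.
    gcloud config configurations create dev
    gcloud config set project dev-project
    gcloud config configurations create prod
    gcloud config set project prod-project
    gcloud config configurations activate dev && gcloud compute instances list
    gcloud config configurations activate prod && gcloud compute instances list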

Question 86

You are managing a project for the Business Intelligence (BI) department in your company. A data pipeline ingests data into BigQuery via streaming. You want the users in the BI department to be able to run custom SQL queries against the latest data in BigQuery. What should you do?

Options:

A.

Create a Data Studio dashboard that uses the related BigQuery tables as a source and give the BI team view access to the Data Studio dashboard.

B.

Create a Service Account for the BI team and distribute a new private key to each member of the BI team.

C.

Use Cloud Scheduler to schedule a batch Dataflow job to copy the data from BigQuery to the BI team's internal data warehouse.

D.

Assign the IAM role of BigQuery User to a Google Group that contains the members of the BI team.
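
For reference, granting a predefined BigQuery role to a group (option D) is a single binding; the project ID and group address are placeholders.

    # Minimal sketch: let the BI team's group run queries in the project.
    gcloud projects add-iam-policy-binding bi-project \
        --member="group:bi-team@example.com" \
        --role="roles/bigquery.user"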

Question 87

Your application is running on Google Cloud in a managed instance group (MIG). You see errors in Cloud Logging for one VM that one of the processes is not responsive. You want to replace this VM in the MIG quickly. What should you do?

Options:

A.

Select the MIG from the Compute Engine console and, in the menu, select Replace VMs.

B.

Use the gcloud compute instance-groups managed recreate-instances command to recreate the VM.

C.

Use the gcloud compute instances update command with a REFRESH action for the VM.

D.

Update and apply the instance template of the MIG.
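
For reference, recreating a single VM in a MIG (option B) could look roughly like this; the group name, zone, and instance name are placeholders.

    # Minimal sketch: delete and recreate just the unresponsive instance.
    gcloud compute instance-groups managed recreate-instances my-mig \
        --zone=us-central1-a \
        --instances=my-mig-xkzp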

Question 88

You need to immediately change the storage class of an existing Google Cloud bucket. You need to reduce service cost for infrequently accessed files stored in that bucket and for all files that will be added to that bucket in the future. What should you do?

Options:

A.

Use gsutil to rewrite the storage class for the bucket. Change the default storage class for the bucket.

B.

Use gsutil to rewrite the storage class for the bucket. Set up Object Lifecycle Management on the bucket.

C.

Create a new bucket and change the default storage class for the bucket. Set up Object Lifecycle Management on the bucket.

D.

Create a new bucket and change the default storage class for the bucket. Import the files from the previous bucket into the new bucket.
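
For reference, changing both the default class and the existing objects (option A) could look roughly like this; the bucket name and target class are placeholders.

    # Minimal sketch: future uploads default to Nearline; existing objects are rewritten.
    gsutil defstorageclass set nearline gs://my-bucket
    gsutil -m rewrite -s nearline gs://my-bucket/**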

Question 89

You’ve deployed a microservice called myapp1 to a Google Kubernetes Engine cluster using the YAML file specified below:

[YAML manifest not reproduced in this copy]

You need to refactor this configuration so that the database password is not stored in plain text. You want to follow Google-recommended practices. What should you do?

Options:

A.

Store the database password inside the Docker image of the container, not in the YAML file.

B.

Store the database password inside a Secret object. Modify the YAML file to populate the DB_PASSWORD environment variable from the Secret.

C.

Store the database password inside a ConfigMap object. Modify the YAML file to populate the DB_PASSWORD environment variable from the ConfigMap.

D.

Store the database password in a file inside a Kubernetes persistent volume, and use a persistent volume claim to mount the volume to the container.
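
For reference, moving the password into a Secret (option B) could look roughly like this; the Secret name, key, and literal value are placeholders.

    # Minimal sketch: create the Secret, then reference it from the Deployment.
    kubectl create secret generic myapp-db-credentials \
        --from-literal=password=REPLACE_ME
    # In the Deployment spec, DB_PASSWORD would then be populated via
    #   valueFrom: { secretKeyRef: { name: myapp-db-credentials, key: password } }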

Question 90

Your company publishes large files on an Apache web server that runs on a Compute Engine instance. The Apache web server is not the only application running in the project. You want to receive an email when the egress network costs for the server exceed 100 dollars for the current month as measured by Google Cloud Platform (GCP). What should you do?

Options:

A.

Set up a budget alert on the project with an amount of 100 dollars, a threshold of 100%, and notification type of “email.”

B.

Set up a budget alert on the billing account with an amount of 100 dollars, a threshold of 100%, and notification type of “email.”

C.

Export the billing data to BigQuery. Create a Cloud Function that uses BigQuery to sum the egress network costs of the exported billing data for the Apache web server for the current month and sends an email if it is over 100 dollars. Schedule the Cloud Function using Cloud Scheduler to run hourly.

D.

Use the Stackdriver Logging Agent to export the Apache web server logs to Stackdriver Logging. Create a Cloud Function that uses BigQuery to parse the HTTP response log data in Stackdriver for the current month and sends an email if the size of all HTTP responses, multiplied by current GCP egress prices, totals over 100 dollars. Schedule the Cloud Function using Cloud Scheduler to run hourly.
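
For reference, a budget alert like the one described in options A and B could be created with the gcloud billing commands; the billing account ID and display name are placeholders, and the flag names should be verified against the current gcloud reference.

    # Sketch: a 100-dollar budget with a notification threshold at 100% of actual spend.
    gcloud billing budgets create \
        --billing-account=0X0X0X-0X0X0X-0X0X0X \
        --display-name="Egress budget" \
        --budget-amount=100.00USD \
        --threshold-rule=percent=1.0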

Question 91

You have a number of compute instances belonging to an unmanaged instances group. You need to SSH to one of the Compute Engine instances to run an ad hoc script. You’ve already authenticated gcloud, however, you don’t have an SSH key deployed yet. In the fewest steps possible, what’s the easiest way to SSH to the instance?

Options:

A.

Run gcloud compute instances list to get the IP address of the instance, then use the ssh command.

B.

Use the gcloud compute ssh command.

C.

Create a key with the ssh-keygen command. Then use the gcloud compute ssh command.

D.

Create a key with the ssh-keygen command. Upload the key to the instance. Run gcloud compute instances list to get the IP address of the instance, then use the ssh command.
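
For reference, option B is a single command; gcloud generates and propagates an SSH key if none is present. The instance name and zone are placeholders.

    # Minimal sketch: SSH without manually creating or uploading a key.
    gcloud compute ssh my-instance --zone=us-central1-a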

Question 92

You need to manage multiple Google Cloud Platform (GCP) projects in the fewest steps possible. You want to configure the Google Cloud SDK command line interface (CLI) so that you can easily manage multiple GCP projects. What should you do?

Options:

A.

1. Create a configuration for each project you need to manage.
2. Activate the appropriate configuration when you work with each of your assigned GCP projects.

B.

1. Create a configuration for each project you need to manage.
2. Use gcloud init to update the configuration values when you need to work with a non-default project.

C.

1. Use the default configuration for one project you need to manage.
2. Activate the appropriate configuration when you work with each of your assigned GCP projects.

D.

1. Use the default configuration for one project you need to manage.
2. Use gcloud init to update the configuration values when you need to work with a non-default project.

Question 93

You are assisting a new Google Cloud user who just installed the Google Cloud SDK on their VM. The server needs access to Cloud Storage. The user wants your help to create a new storage bucket. You need to make this change in multiple environments. What should you do?

Options:

A.

Use a Deployment Manager script to automate creating storage buckets in an appropriate region

B.

Use a local SSD to improve performance of the VM for the targeted workload

C.

Use the gsutil command to create a storage bucket in the same region as the VM

D.

Use a Persistent Disk SSD in the same zone as the VM to improve performance of the VM
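
For reference, creating a bucket co-located with the VM (option C) could look roughly like this; the region and bucket name are placeholders.

    # Minimal sketch: create the bucket in the VM's region.
    gsutil mb -l us-central1 gs://my-new-bucket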
