
Google Professional-Cloud-Developer Dumps

Google Certified Professional - Cloud Developer Questions and Answers

Question 1

For this question, refer to the HipLocal case study.

How should HipLocal increase their API development speed while continuing to provide the QA team with a stable testing environment that meets feature requirements?

Options:

A.

Include unit tests in their code, and prevent deployments to QA until all tests have a passing status.

B.

Include performance tests in their code, and prevent deployments to QA until all tests have a passing status.

C.

Create health checks for the QA environment, and redeploy the APIs at a later time if the environment is unhealthy.

D.

Redeploy the APIs to App Engine using Traffic Splitting. Do not move QA traffic to the new versions if errors are found.

Question 2

HipLocal has connected their Hadoop infrastructure to GCP using Cloud Interconnect in order to query data stored on persistent disks.

Which IP strategy should they use?

Options:

A.

Create manual subnets.

B.

Create an auto mode subnet.

C.

Create multiple peered VPCs.

D.

Provision a single instance for NAT.

Question 3

For this question, refer to the HipLocal case study.

HipLocal is expanding into new locations. They must capture additional data each time the application is launched in a new European country. This is causing delays in the development process due to constant schema changes and a lack of environments for conducting testing on the application changes. How should they resolve the issue while meeting the business requirements?

Options:

A.

Create new Cloud SQL instances in Europe and North America for testing and deployment. Provide developers with local MySQL instances to conduct testing on the application changes.

B.

Migrate data to Bigtable. Instruct the development teams to use the Cloud SDK to emulate a local Bigtable development environment.

C.

Move from Cloud SQL to MySQL hosted on Compute Engine. Replicate hosts across regions in the Americas and Europe. Provide developers with local MySQL instances to conduct testing on the application changes.

D.

Migrate data to Firestore in Native mode and set up instan

Question 4

For this question, refer to the HipLocal case study.

HipLocal wants to reduce the latency of their services for users in global locations. They have created read replicas of their database in locations where their users reside and configured their services to route read traffic to those replicas. How should they further reduce latency for all database interactions with the least amount of effort?

Options:

A.

Migrate the database to Bigtable and use it to serve all global user traffic.

B.

Migrate the database to Cloud Spanner and use it to serve all global user traffic.

C.

Migrate the database to Firestore in Datastore mode and use it to serve all global user traffic.

D.

Migrate the services to Google Kubernetes Engine and use a load balancer service to better scale the application.

Question 5

Which service should HipLocal use for their public APIs?

Options:

A.

Cloud Armor

B.

Cloud Functions

C.

Cloud Endpoints

D.

Shielded Virtual Machines

Question 6

For this question, refer to the HipLocal case study.

A recent security audit found that HipLocal’s database credentials for their Compute Engine-hosted MySQL databases are stored in plain text on persistent disks. HipLocal needs to reduce the risk of these credentials being stolen. What should they do?

Options:

A.

Create a service account and download its key. Use the key to authenticate to Cloud Key Management Service (KMS) to obtain the database credentials.

B.

Create a service account and download its key. Use the key to authenticate to Cloud Key Management Service (KMS) to obtain a key used to decrypt the database credentials.

C.

Create a service account and grant it the roles/iam.serviceAccountUser role. Impersonate this account and authenticate using the Cloud SQL Proxy.

D.

Grant the roles/secretmanager.secretAccessor role to the Compute Engine service account. Store and access the database credentials with the Secret Manager API.
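
As a study aid, here is a minimal sketch of the Secret Manager flow described in option D; the project, secret name, and service account are placeholders:

# Store the credentials as a secret (values are hypothetical)
gcloud secrets create db-credentials --replication-policy="automatic"
echo -n "db-user:db-password" | gcloud secrets versions add db-credentials --data-file=-

# Allow the Compute Engine service account to read the secret
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:123456789012-compute@developer.gserviceaccount.com" \
    --role="roles/secretmanager.secretAccessor"

# The application (or a startup script) fetches the value at runtime
gcloud secrets versions access latest --secret="db-credentials"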

Question 7

HipLocal is configuring their access controls.

Which firewall configuration should they implement?

Options:

A.

Block all traffic on port 443.

B.

Allow all traffic into the network.

C.

Allow traffic on port 443 for a specific tag.

D.

Allow all traffic on port 443 into the network.
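
A firewall rule matching option C might look like the following sketch; rule, network, and tag names are placeholders:

gcloud compute firewall-rules create allow-https-tagged \
    --network=hiplocal-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:443 \
    --target-tags=https-server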

Question 8

Which database should HipLocal use for storing user activity?

Options:

A.

BigQuery

B.

Cloud SQL

C.

Cloud Spanner

D.

Cloud Datastore

Question 9

HipLocal's APIs are showing occasional failures, but they cannot find a pattern. They want to collect some metrics to help them troubleshoot.

What should they do?

Options:

A.

Take frequent snapshots of all of the VMs.

B.

Install the Stackdriver Logging agent on the VMs.

C.

Install the Stackdriver Monitoring agent on the VMs.

D.

Use Stackdriver Trace to look for performance bottlenecks.

Question 10

Which service should HipLocal use to enable access to internal apps?

Options:

A.

Cloud VPN

B.

Cloud Armor

C.

Virtual Private Cloud

D.

Cloud Identity-Aware Proxy

Question 11

For this question, refer to the HipLocal case study.

Which Google Cloud product addresses HipLocal’s business requirements for service level indicators and objectives?

Options:

A.

Cloud Profiler

B.

Cloud Monitoring

C.

Cloud Trace

D.

Cloud Logging

Question 12

For this question, refer to the HipLocal case study.

How should HipLocal redesign their architecture to ensure that the application scales to support a large increase in users?

Options:

A.

Use Google Kubernetes Engine (GKE) to run the application as a microservice. Run the MySQL database on a dedicated GKE node.

B.

Use multiple Compute Engine instances to run MySQL to store state information. Use a Google Cloud-managed load balancer to distribute the load between instances. Use managed instance groups for scaling.

C.

Use Memorystore to store session information and Cloud SQL to store state information. Use a Google Cloud-managed load balancer to distribute the load between instances. Use managed instance groups for scaling.

D.

Use a Cloud Storage bucket to serve the application as a static website, and use another Cloud Storage bucket to store user state information.

Question 13

For this question, refer to the HipLocal case study.

HipLocal's application uses Cloud Client Libraries to interact with Google Cloud. HipLocal needs to configure authentication and authorization in the Cloud Client Libraries to implement least privileged access for the application. What should they do?

Options:

A.

Create an API key. Use the API key to interact with Google Cloud.

B.

Use the default compute service account to interact with Google Cloud.

C.

Create a service account for the application. Export and deploy the private key for the application. Use the service account to interact with Google Cloud.

D.

Create a service account for the application and for each Google Cloud API used by the application. Export and deploy the private keys used by the application. Use the service account with one Google Cloud API to interact with Google Cloud.
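
For contrast with the key-download options, a dedicated, minimally privileged service account can be created and attached to the workload without exporting a key; names and the example role are placeholders:

gcloud iam service-accounts create hiplocal-app --display-name="HipLocal application"

# Grant only the narrow role the application actually needs (example role)
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:hiplocal-app@my-project.iam.gserviceaccount.com" \
    --role="roles/cloudsql.client"

# Attach the account to the instance instead of deploying a private key
# (the instance must be stopped first)
gcloud compute instances set-service-account app-vm --zone=us-central1-a \
    --service-account=hiplocal-app@my-project.iam.gserviceaccount.com \
    --scopes=cloud-platform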

Question 14

HipLocal's .NET-based auth service fails under intermittent load.

What should they do?

Options:

A.

Use App Engine for autoscaling.

B.

Use Cloud Functions for autoscaling.

C.

Use a Compute Engine cluster for the service.

D.

Use a dedicated Compute Engine virtual machine instance for the service.

Question 15

HipLocal wants to reduce the number of on-call engineers and eliminate manual scaling.

Which two services should they choose? (Choose two.)

Options:

A.

Use Google App Engine services.

B.

Use serverless Google Cloud Functions.

C.

Use Knative to build and deploy serverless applications.

D.

Use Google Kubernetes Engine for automated deployments.

E.

Use a large Google Compute Engine cluster for deployments.

Question 16

In order for HipLocal to store application state and meet their stated business requirements, which database service should they migrate to?

Options:

A.

Cloud Spanner

B.

Cloud Datastore

C.

Cloud Memorystore as a cache

D.

Separate Cloud SQL clusters for each region

Question 17

HipLocal wants to improve the resilience of their MySQL deployment, while also meeting their business and technical requirements.

Which configuration should they choose?

Options:

A.

Use the current single-instance MySQL on Compute Engine and several read-only MySQL servers on Compute Engine.

B.

Use the current single-instance MySQL on Compute Engine, and replicate the data to Cloud SQL in an external master configuration.

C.

Replace the current single-instance MySQL with Cloud SQL, and configure high availability.

D.

Replace the current single-instance MySQL with Cloud SQL; Google provides redundancy without further configuration.
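
For reference, the high-availability configuration mentioned in option C is a single flag at instance creation; instance name, tier, and region are placeholders:

gcloud sql instances create hiplocal-mysql \
    --database-version=MYSQL_8_0 \
    --tier=db-n1-standard-2 \
    --region=us-central1 \
    --availability-type=REGIONAL   # provisions a standby in a second zone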

Question 18

In order to meet their business requirements, how should HipLocal store their application state?

Options:

A.

Use local SSDs to store state.

B.

Put a memcache layer in front of MySQL.

C.

Move the state storage to Cloud Spanner.

D.

Replace the MySQL instance with Cloud SQL.

Question 19

HipLocal’s data science team wants to analyze user reviews.

How should they prepare the data?

Options:

A.

Use the Cloud Data Loss Prevention API for redaction of the review dataset.

B.

Use the Cloud Data Loss Prevention API for de-identification of the review dataset.

C.

Use the Cloud Natural Language Processing API for redaction of the review dataset.

D.

Use the Cloud Natural Language Processing API for de-identification of the review dataset.

Question 20

You are developing a microservice-based application that will run on Google Kubernetes Engine (GKE). Some of the services need to access different Google Cloud APIs. How should you set up authentication of these services in the cluster following Google-recommended best practices? (Choose two.)

Options:

A.

Use the service account attached to the GKE node.

B.

Enable Workload Identity in the cluster via the gcloud command-line tool.

C.

Access the Google service account keys from a secret management service.

D.

Store the Google service account keys in a central secret management service.

E.

Use gcloud to bind the Kubernetes service account and the Google service account using the roles/iam.workloadIdentityUser role.
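
A sketch of the Workload Identity setup that options B and E describe; cluster, project, namespace, and account names are placeholders:

# Enable Workload Identity on the cluster
gcloud container clusters update my-cluster --zone=us-central1-a \
    --workload-pool=my-project.svc.id.goog

# Bind the Kubernetes service account (KSA) to the Google service account (GSA)
gcloud iam service-accounts add-iam-policy-binding \
    app-gsa@my-project.iam.gserviceaccount.com \
    --role="roles/iam.workloadIdentityUser" \
    --member="serviceAccount:my-project.svc.id.goog[my-namespace/app-ksa]"

# Annotate the KSA so pods using it impersonate the GSA
kubectl annotate serviceaccount app-ksa --namespace=my-namespace \
    iam.gke.io/gcp-service-account=app-gsa@my-project.iam.gserviceaccount.com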

Question 21

You are writing to a Cloud Spanner database from a Go application. You want to optimize your application’s performance using Google-recommended best practices. What should you do?

Options:

A.

Write to Cloud Spanner using Cloud Client Libraries.

B.

Write to Cloud Spanner using Google API Client Libraries.

C.

Write to Cloud Spanner using a custom gRPC client library.

D.

Write to Cloud Spanner using a third-party HTTP client library.

Question 22

One of your deployed applications in Google Kubernetes Engine (GKE) is having intermittent performance issues. Your team uses a third-party logging solution. You want to install this solution on each node in your GKE cluster so you can view the logs. What should you do?

Options:

A.

Deploy the third-party solution as a DaemonSet

B.

Modify your container image to include the monitoring software

C.

Use SSH to connect to the GKE node, and install the software manually

D.

Deploy the third-party solution using Terraform and deploy the logging Pod as a Kubernetes Deployment
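
A minimal DaemonSet, as in option A, schedules one agent Pod on every node; the image and names stand in for the third-party agent:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
      - name: log-agent
        image: registry.example.com/log-agent:1.0   # hypothetical agent image
EOF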

Question 23

You want to create “fully baked” or “golden” Compute Engine images for your application. You need to bootstrap your application to connect to the appropriate database according to the environment the application is running on (test, staging, production). What should you do?

Options:

A.

Embed the appropriate database connection string in the image. Create a different image for each environment.

B.

When creating the Compute Engine instance, add a tag with the name of the database to be connected. In your application, query the Compute Engine API to pull the tags for the current instance, and use the tag to construct the appropriate database connection string.

C.

When creating the Compute Engine instance, create a metadata item with a key of “DATABASE” and a value for the appropriate database connection string. In your application, read the “DATABASE” environment variable, and use the value to connect to the appropriate database.

D.

When creating the Compute Engine instance, create a metadata item with a key of “DATABASE” and a value for the appropriate database connection string. In your application, query the metadata server for the “DATABASE” value, and use the value to connect to the appropriate database.
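
The metadata-server pattern in option D could be wired up as follows; instance, image, and connection-string values are placeholders:

# Set the connection string as custom metadata at instance creation
gcloud compute instances create app-staging --zone=us-central1-a \
    --image-family=my-golden-image --image-project=my-project \
    --metadata=DATABASE="mysql://staging-db.internal:3306/app"

# Inside the instance, the application queries the metadata server
curl -s -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/attributes/DATABASE"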

Question 24

Your team manages a Google Kubernetes Engine (GKE) cluster where an application is running. A different team is planning to integrate with this application. Before they start the integration, you need to ensure that the other team cannot make changes to your application, but they can deploy the integration on GKE. What should you do?

Options:

A.

Using Identity and Access Management (IAM), grant the Viewer IAM role on the cluster project to the other team.

B.

Create a new GKE cluster. Using Identity and Access Management (IAM), grant the Editor role on the cluster project to the other team.

C.

Create a new namespace in the existing cluster. Using Identity and Access Management (IAM), grant the Editor role on the cluster project to the other team.

D.

Create a new namespace in the existing cluster. Using Kubernetes role-based access control (RBAC), grant the Admin role on the new namespace to the other team.
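
The namespace-scoped grant in option D takes two commands; the namespace and group are placeholders, and the other team still needs a minimal IAM role just to authenticate to the cluster:

kubectl create namespace integration
kubectl create rolebinding integration-admins \
    --clusterrole=admin \
    --group=other-team@example.com \
    --namespace=integration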

Question 25

You need to redesign the ingestion of audit events from your authentication service to allow it to handle a large increase in traffic. Currently, the audit service and the authentication system run in the same Compute Engine virtual machine. You plan to use the following Google Cloud tools in the new architecture:

• Multiple Compute Engine machines, each running an instance of the authentication service

• Multiple Compute Engine machines, each running an instance of the audit service

• Pub/Sub to send the events from the authentication services

How should you set up the topics and subscriptions to ensure that the system can handle a large volume of messages and can scale efficiently?

Options:

A.

Create one Pub/Sub topic. Create one pull subscription to allow the audit services to share the messages.

B.

Create one Pub/Sub topic. Create one pull subscription per audit service instance to allow the services to share the messages.

C.

Create one Pub/Sub topic. Create one push subscription with the endpoint pointing to a load balancer in front of the audit services.

D.

Create one Pub/Sub topic per authentication service. Create one pull subscription per topic to be used by one audit service.

E.

Create one Pub/Sub topic per authentication service. Create one push subscription per topic, with the endpoint pointing to one audit service.
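
For scale comparison, the single-topic, shared-pull-subscription layout in option A takes two commands (names are placeholders); all audit instances pull from the same subscription, so Pub/Sub distributes messages across them:

gcloud pubsub topics create auth-events
gcloud pubsub subscriptions create audit-events-pull --topic=auth-events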

Question 26

You have deployed a Java application to Cloud Run. Your application requires access to a database hosted on Cloud SQL. Due to regulatory requirements, your connection to the Cloud SQL instance must use its internal IP address. How should you configure the connectivity while following Google-recommended best practices?

Options:

A.

Configure your Cloud Run service with a Cloud SQL connection.

B.

Configure your Cloud Run service to use a Serverless VPC Access connector.

C.

Configure your application to use the Cloud SQL Java connector.

D.

Configure your application to connect to an instance of the Cloud SQL Auth proxy.
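
A sketch of the Serverless VPC Access setup named in option B; connector, region, network, and image are placeholders:

gcloud compute networks vpc-access connectors create hiplocal-connector \
    --region=us-central1 --network=default --range=10.8.0.0/28

gcloud run deploy auth-api --image=gcr.io/my-project/auth-api \
    --region=us-central1 --vpc-connector=hiplocal-connector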

Question 27

You have an application deployed in production. When a new version is deployed, some issues don't arise until the application receives traffic from users in production. You want to reduce both the impact and the number of users affected.

Which deployment strategy should you use?

Options:

A.

Blue/green deployment

B.

Canary deployment

C.

Rolling deployment

D.

Recreate deployment

Question 28

You are building a CI/CD pipeline that consists of a version control system, Cloud Build, and Container Registry. Each time a new tag is pushed to the repository, a Cloud Build job is triggered, which runs unit tests on the new code, builds a new Docker container image, and pushes it into Container Registry. The last step of your pipeline should deploy the new container to your production Google Kubernetes Engine (GKE) cluster. You need to select a tool and deployment strategy that meets the following requirements:

• Zero downtime is incurred

• Testing is fully automated

• Allows for testing before being rolled out to users

• Can quickly rollback if needed

What should you do?

Options:

A.

Trigger a Spinnaker pipeline configured as an A/B test of your new code and, if it is successful, deploy the container to production.

B.

Trigger a Spinnaker pipeline configured as a canary test of your new code and, if it is successful, deploy the container to production.

C.

Trigger another Cloud Build job that uses the Kubernetes CLI tools to deploy your new container to your GKE cluster, where you can perform a canary test.

D.

Trigger another Cloud Build job that uses the Kubernetes CLI tools to deploy your new container to your GKE cluster, where you can perform a shadow test.

Question 29

You are developing an application that reads credit card data from a Pub/Sub subscription. You have written code and completed unit testing. You need to test the Pub/Sub integration before deploying to Google Cloud. What should you do?

Options:

A.

Create a service to publish messages, and deploy the Pub/Sub emulator. Generate random content in the publishing service, and publish to the emulator.

B.

Create a service to publish messages to your application. Collect the messages from Pub/Sub in production, and replay them through the publishing service.

C.

Create a service to publish messages, and deploy the Pub/Sub emulator. Collect the messages from Pub/Sub in production, and publish them to the emulator.

D.

Create a service to publish messages, and deploy the Pub/Sub emulator. Publish a standard set of testing messages from the publishing service to the emulator.
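
Running the Pub/Sub emulator locally, as the emulator-based options assume, might look like this; the project ID is a placeholder:

gcloud components install pubsub-emulator
gcloud beta emulators pubsub start --project=test-project &

# Point client libraries at the emulator instead of production
$(gcloud beta emulators pubsub env-init)   # exports PUBSUB_EMULATOR_HOST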

Question 30

You are designing an application that consists of several microservices. Each microservice has its own RESTful API and will be deployed as a separate Kubernetes Service. You want to ensure that the consumers of these APIs aren't impacted when there is a change to your API, and also ensure that third-party systems aren't interrupted when new versions of the API are released. How should you configure the connection to the application following Google-recommended best practices?

Options:

A.

Use an Ingress that uses the API's URL to route requests to the appropriate backend.

B.

Leverage a Service Discovery system, and connect to the backend specified by the request.

C.

Use multiple clusters, and use DNS entries to route requests to separate versioned backends.

D.

Combine multiple versions in the same service, and then specify the API version in the POST request.

Question 31

Your application is deployed on hundreds of Compute Engine instances in a managed instance group (MIG) in multiple zones. You need to deploy a new instance template to fix a critical vulnerability immediately, but you must avoid impact to your service. Which setting should you change on the MIG after updating the instance template?

Options:

A.

Set the Max Surge to 100%.

B.

Set the Update mode to Opportunistic.

C.

Set the Maximum Unavailable to 100%.

D.

Set the Minimum Wait time to 0 seconds.
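
As background, a rolling update that adds temporary capacity rather than taking instances offline can be started like this sketch; group and template names are placeholders:

gcloud compute instance-groups managed rolling-action start-update app-mig \
    --version=template=patched-template \
    --max-surge=100% \
    --max-unavailable=0 \
    --region=us-central1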

Question 32

You need to copy the directory local-scripts and all of its contents from your local workstation to a Compute Engine virtual machine instance.

Which command should you use?

Options:

A.

gsutil cp --project "my-gcp-project" -r ~/local-scripts/ gcp-instance-name:~/server-scripts/ --zone "us-east1-b"

B.

gsutil cp --project "my-gcp-project" -R ~/local-scripts/ gcp-instance-name:~/server-scripts/ --zone "us-east1-b"

C.

gcloud compute scp --project "my-gcp-project" --recurse ~/local-scripts/ gcp-instance-name:~/server-scripts/ --zone "us-east1-b"

D.

gcloud compute mv --project "my-gcp-project" --recurse ~/local-scripts/ gcp-instance-name:~/server-scripts/ --zone "us-east1-b"

Question 33

You are creating an App Engine application that writes a file to any user's Google Drive.

How should the application authenticate to the Google Drive API?

Options:

A.

With an OAuth Client ID that uses the https://www.googleapis.com/auth/drive.file scope to obtain an access token for each user.

B.

With an OAuth Client ID with delegated domain-wide authority.

C.

With the App Engine service account and https://www.googleapis.com/auth/drive.file scope that generates a signed JWT.

D.

With the App Engine service account with delegated domain-wide authority.

Question 34

Your company needs a database solution that stores customer purchase history and meets the following requirements:

• Customers can query their purchase immediately after submission.

• Purchases can be sorted on a variety of fields.

• Distinct record formats can be stored at the same time.

Which storage option satisfies these requirements?

Options:

A.

Firestore in Native mode

B.

Cloud Storage using an object read

C.

Cloud SQL using a SQL SELECT statement

D.

Firestore in Datastore mode using a global query

Question 35

You are using Cloud Build for your CI/CD pipeline to complete several tasks, including copying certain files to Compute Engine virtual machines. Your pipeline requires a flat file that is generated in one builder in the pipeline to be accessible by subsequent builders in the same pipeline. How should you store the file so that all the builders in the pipeline can access it?

Options:

A.

Store and retrieve the file contents using Compute Engine instance metadata.

B.

Output the file contents to a file in /workspace. Read from the same /workspace file in the subsequent build step.

C.

Use gsutil to output the file contents to a Cloud Storage object. Read from the same object in the subsequent build step.

D.

Add a build argument that runs an HTTP POST via curl to a separate web server to persist the value in one builder. Use an HTTP GET via curl from the subsequent build step to read the value.
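
Option B relies on the /workspace volume that Cloud Build mounts into every build step; a minimal two-step sketch (file name and contents are illustrative):

cat > cloudbuild.yaml <<'EOF'
steps:
- name: 'ubuntu'
  args: ['bash', '-c', 'echo "generated-at-build" > /workspace/artifact.txt']
- name: 'ubuntu'
  args: ['bash', '-c', 'cat /workspace/artifact.txt']
EOF
gcloud builds submit --config=cloudbuild.yaml --no-source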

Question 36

You are developing a microservice-based application that will be deployed on a Google Kubernetes Engine cluster. The application needs to read and write to a Spanner database. You want to follow security best practices while minimizing code changes. How should you configure your application to retrieve Spanner credentials?

Options:

A.

Configure the appropriate service accounts, and use Workload Identity to run the pods.

B.

Store the application credentials as Kubernetes Secrets, and expose them as environment variables.

C.

Configure the appropriate routing rules, and use a VPC-native cluster to directly connect to the database.

D.

Store the application credentials using Cloud Key Management Service, and retrieve them whenever a database connection is made.

Question 37

You are porting an existing Apache/MySQL/PHP application stack from a single machine to Google Kubernetes Engine. You need to determine how to containerize the application. Your approach should follow Google-recommended best practices for availability. What should you do?

Options:

A.

Package each component in a separate container. Implement readiness and liveness probes.

B.

Package the application in a single container. Use a process management tool to manage each component.

C.

Package each component in a separate container. Use a script to orchestrate the launch of the components.

D.

Package the application in a single container. Use a bash script as an entrypoint to the container, and then spawn each component as a background job.

Question 38

You are developing an application that will store and access sensitive unstructured data objects in a Cloud Storage bucket. To comply with regulatory requirements, you need to ensure that all data objects are available for at least 7 years after their initial creation. Objects created more than 3 years ago are accessed very infrequently (less than once a year). You need to configure object storage while ensuring that storage cost is optimized. What should you do? (Choose two.)

Options:

A.

Set a retention policy on the bucket with a period of 7 years.

B.

Use IAM Conditions to provide access to objects 7 years after the object creation date.

C.

Enable Object Versioning to prevent objects from being accidentally deleted for 7 years after object creation.

D.

Create an object lifecycle policy on the bucket that moves objects from Standard Storage to Archive Storage after 3 years.

E.

Implement a Cloud Function that checks the age of each object in the bucket and moves the objects older than 3 years to a second bucket with the Archive Storage class. Use Cloud Scheduler to trigger the Cloud Function on a daily schedule.
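
A retention policy combined with a lifecycle rule (the mechanisms in options A and D) could be configured as in this sketch; the bucket name is a placeholder, and 1095 days approximates 3 years:

# Prevent deletion of objects for 7 years after creation
gsutil retention set 7y gs://my-audit-objects

# Move objects to Archive Storage after 3 years
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "ARCHIVE"},
      "condition": {"age": 1095}
    }
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://my-audit-objects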
