
Amazon Web Services DBS-C01 Dumps


AWS Certified Database - Specialty Questions and Answers

Question 1

A software development company is using Amazon Aurora MySQL DB clusters for several use cases, including development and reporting. These use cases place unpredictable and varying demands on the Aurora DB clusters, and can cause momentary spikes in latency. System users run ad-hoc queries sporadically throughout the week. Cost is a primary concern for the company, and a solution that does not require significant rework is needed.

Which solution meets these requirements?

Options:

A.

Create new Aurora Serverless DB clusters for development and reporting, then migrate to these new DB clusters.

B.

Upgrade one of the DB clusters to a larger size, and consolidate development and reporting activities on this larger DB cluster.

C.

Use existing DB clusters and stop/start the databases on a routine basis using scheduling tools.

D.

Change the DB clusters to the burstable instance family.

Question 2

A database specialist needs to delete user data and sensor data 1 year after it was loaded in an Amazon DynamoDB table. TTL is enabled on one of the attributes. The database specialist monitors TTL rates on the Amazon CloudWatch metrics for the table and observes that items are not being deleted as expected.

What is the MOST likely reason that the items are not being deleted?

Options:

A.

The TTL attribute's value is set as a Number data type.

B.

The TTL attribute's value is set as a Binary data type.

C.

The TTL attribute's value is a timestamp in the Unix epoch time format in seconds.

D.

The TTL attribute's value is set with an expiration of 1 year.
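
For reference, DynamoDB TTL deletes items only when the designated attribute is a Number containing a Unix epoch timestamp in seconds. Below is a minimal boto3 sketch of enabling TTL and writing an item with a 1-year expiry; the table, key, and attribute names are hypothetical:

```python
import time

import boto3

dynamodb = boto3.client("dynamodb")

# TTL must point at an attribute whose value is a Number holding
# a Unix epoch timestamp in seconds.
dynamodb.update_time_to_live(
    TableName="SensorData",  # hypothetical table name
    TimeToLiveSpecification={
        "Enabled": True,
        "AttributeName": "expires_at",
    },
)

# Write an item that expires one year from now.
one_year = 365 * 24 * 60 * 60
dynamodb.put_item(
    TableName="SensorData",
    Item={
        "sensor_id": {"S": "sensor-001"},  # hypothetical partition key
        "expires_at": {"N": str(int(time.time()) + one_year)},
    },
)
```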

Question 3

A company is running a finance application on an Amazon RDS for MySQL DB instance. The application is governed by multiple financial regulatory agencies. The RDS DB instance is set up with security groups to allow access to certain Amazon EC2 servers only. AWS KMS is used for encryption at rest.

Which step will provide additional security?

Options:

A.

Set up NACLs that allow the entire EC2 subnet to access the DB instance

B.

Disable the master user account

C.

Set up a security group that blocks SSH to the DB instance

D.

Set up RDS to use SSL for data in transit

Question 4

A company has migrated a single MySQL database to Amazon Aurora. The production data is hosted in a DB cluster in VPC_PROD, and 12 testing environments are hosted in VPC_TEST using the same AWS account. Testing results in minimal changes to the test data. The Development team wants each environment refreshed nightly so each test database contains fresh production data every day.

Which migration approach will be the fastest and most cost-effective to implement?

Options:

A.

Run the master in Amazon Aurora MySQL. Create 12 clones in VPC_TEST, and script the clones to be deleted and re-created nightly.

B.

Run the master in Amazon Aurora MySQL. Take a nightly snapshot, and restore it into 12 databases in VPC_TEST using Aurora Serverless.

C.

Run the master in Amazon Aurora MySQL. Create 12 Aurora Replicas in VPC_TEST, and script the replicas to be deleted and re-created nightly.

D.

Run the master in Amazon Aurora MySQL using Aurora Serverless. Create 12 clones in VPC_TEST, and script the clones to be deleted and re-created nightly.

Question 5

A company is running an on-premises application comprised of a web tier, an application tier, and a MySQL database tier. The database is used primarily during business hours with random activity peaks throughout the day. A database specialist needs to improve the availability and reduce the cost of the MySQL database tier as part of the company’s migration to AWS.

Which MySQL database option would meet these requirements?

Options:

A.

Amazon RDS for MySQL with Multi-AZ

B.

Amazon Aurora Serverless MySQL cluster

C.

Amazon Aurora MySQL cluster

D.

Amazon RDS for MySQL with read replica

Question 6

A company is planning to migrate a 40 TB Oracle database to an Amazon Aurora PostgreSQL DB cluster by using a single AWS Database Migration Service (AWS DMS) task within a single replication instance. During early testing, AWS DMS is not scaling to the company's needs. Full load and change data capture (CDC) are taking days to complete.

The source database server and the target DB cluster have enough network bandwidth and CPU bandwidth for the additional workload. The replication instance has enough resources to support the replication. A database specialist needs to improve database performance, reduce data migration time, and create multiple DMS tasks.

Which combination of changes will meet these requirements? (Choose two.)

Options:

A.

Increase the value of the ParallelLoadThreads parameter in the DMS task settings for the tables.

B.

Use a smaller set of tables with each DMS task. Set the MaxFullLoadSubTasks parameter to a higher value.

C.

Use a smaller set of tables with each DMS task. Set the MaxFullLoadSubTasks parameter to a lower value.

D.

Use parallel load with different data boundaries for larger tables.

E.

Run the DMS tasks on a larger instance class. Increase local storage on the instance.
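
To illustrate the tuning knobs these options refer to, here is a sketch of DMS task settings and table mappings; the schema, table, column, and boundary values are invented for the example. MaxFullLoadSubTasks (default 8) controls how many tables load concurrently within one task, and a parallel-load rule splits a single large table across range boundaries:

```python
import json

# Task settings: when each task owns a smaller set of tables, raising
# MaxFullLoadSubTasks loads more of them concurrently.
task_settings = {
    "FullLoadSettings": {
        "MaxFullLoadSubTasks": 16
    }
}

# Table mappings: a parallel-load rule splits one large table into
# range-based partitions that load in parallel.
table_mappings = {
    "rules": [
        {
            "rule-type": "table-settings",
            "rule-id": "1",
            "rule-name": "parallel-load-orders",
            "object-locator": {"schema-name": "SALES", "table-name": "ORDERS"},
            "parallel-load": {
                "type": "ranges",
                "columns": ["ORDER_ID"],
                "boundaries": [["1000000"], ["2000000"], ["3000000"]],
            },
        }
    ]
}

print(json.dumps(task_settings, indent=2))
print(json.dumps(table_mappings, indent=2))
```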

Question 7

A database specialist at a large multi-national financial company is in charge of designing the disaster recovery strategy for a highly available application that is in development. The application uses an Amazon DynamoDB table as its data store. The application requires a recovery time objective (RTO) of 1 minute and a recovery point objective (RPO) of 2 minutes.

Which operationally efficient disaster recovery strategy should the database specialist recommend for the DynamoDB table?

Options:

A.

Create a DynamoDB stream that is processed by an AWS Lambda function that copies the data to a DynamoDB table in another Region.

B.

Use a DynamoDB global table replica in another Region. Enable point-in-time recovery for both tables.

C.

Use a DynamoDB Accelerator table in another Region. Enable point-in-time recovery for the table.

D.

Create an AWS Backup plan and assign the DynamoDB table as a resource.

Question 8

A Database Specialist migrated an existing production MySQL database from on-premises to an Amazon RDS for MySQL DB instance. However, after the migration, the database needed to be encrypted at rest using AWS KMS. Due to the size of the database, reloading the data into an encrypted database would be too time-consuming, so it is not an option.

How should the Database Specialist satisfy this new requirement?

Options:

A.

Create a snapshot of the unencrypted RDS DB instance. Create an encrypted copy of the unencrypted snapshot. Restore the encrypted snapshot copy.

B.

Modify the RDS DB instance. Enable the AWS KMS encryption option that leverages the AWS CLI.

C.

Restore an unencrypted snapshot into a MySQL RDS DB instance that is encrypted.

D.

Create an encrypted read replica of the RDS DB instance. Promote it to be the master.
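
A minimal boto3 sketch of the snapshot-copy encryption pattern referenced in option A; the identifiers are hypothetical, and the copy step is where the KMS key is applied:

```python
import boto3

rds = boto3.client("rds")

# 1. Snapshot the unencrypted instance.
rds.create_db_snapshot(
    DBInstanceIdentifier="prod-mysql",  # hypothetical
    DBSnapshotIdentifier="prod-mysql-unencrypted",
)
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier="prod-mysql-unencrypted"
)

# 2. Copy the snapshot, specifying a KMS key to produce an encrypted copy.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="prod-mysql-unencrypted",
    TargetDBSnapshotIdentifier="prod-mysql-encrypted",
    KmsKeyId="alias/aws/rds",  # or a customer managed key
)
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier="prod-mysql-encrypted"
)

# 3. Restore the encrypted copy to a new, encrypted DB instance.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="prod-mysql-enc",
    DBSnapshotIdentifier="prod-mysql-encrypted",
)
```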

Question 9

A company runs an ecommerce application on premises on Microsoft SQL Server. The company is planning to migrate the application to the AWS Cloud. The application code contains complex T-SQL queries and stored procedures.

The company wants to minimize database server maintenance and operating costs after the migration is completed. The company also wants to minimize the need to rewrite code as part of the migration effort.

Which solution will meet these requirements?

Options:

A.

Migrate the database to Amazon Aurora PostgreSQL. Turn on Babelfish.

B.

Migrate the database to Amazon S3. Use Amazon Redshift Spectrum for query processing.

C.

Migrate the database to Amazon RDS for SQL Server. Turn on Kerberos authentication.

D.

Migrate the database to an Amazon EMR cluster that includes multiple primary nodes.

Question 10

A company is using Amazon Redshift as its data warehouse solution. The Redshift cluster handles the following types of workloads:

*Real-time inserts through Amazon Kinesis Data Firehose

*Bulk inserts through COPY commands from Amazon S3

*Analytics through SQL queries

Recently, the cluster has started to experience performance issues.

Which combination of actions should a database specialist take to improve the cluster's performance? (Choose three.)

Options:

A.

Modify the Kinesis Data Firehose delivery stream to stream the data to Amazon S3 with a high buffer size and to load the data into Amazon Redshift by using the COPY command.

B.

Stream real-time data into Redshift temporary tables before loading the data into permanent tables.

C.

For bulk inserts, split input files on Amazon S3 into multiple files to match the number of slices on Amazon Redshift. Then use the COPY command to load data into Amazon Redshift.

D.

For bulk inserts, use the parallel parameter in the COPY command to enable multi-threading.

E.

Optimize analytics SQL queries to use sort keys.

F.

Avoid using temporary tables in analytics SQL queries.

Question 11

A company has an ecommerce website that runs on AWS. The website uses an Amazon RDS for MySQL database. A database specialist wants to enforce the use of temporary credentials to access the database.

Which solution will meet this requirement?

Options:

A.

Use MySQL native database authentication.

B.

Use AWS Secrets Manager to rotate the credentials.

C.

Use AWS Identity and Access Management (IAM) database authentication.

D.

Use AWS Systems Manager Parameter Store for authentication.
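
For context, IAM database authentication replaces static passwords with short-lived tokens. A minimal boto3 sketch, with a hypothetical endpoint and user:

```python
import boto3

rds = boto3.client("rds")

# Generate a short-lived (15-minute) token instead of a static password.
token = rds.generate_db_auth_token(
    DBHostname="mydb.abc123.us-east-1.rds.amazonaws.com",  # hypothetical
    Port=3306,
    DBUsername="app_user",  # a MySQL user created with the AWS auth plugin
)

# The token is then used as the password in a normal MySQL connection,
# e.g. with PyMySQL (TLS is required for IAM authentication):
# conn = pymysql.connect(host=..., user="app_user", password=token,
#                        ssl={"ca": "/path/to/rds-ca-bundle.pem"})
```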

Question 12

A database specialist is designing the database for a software-as-a-service (SaaS) version of an employee information application. In the current architecture, the change history of employee records is stored in a single table in an Amazon RDS for Oracle database. Triggers on the employee table populate the history table with historical records.

This architecture has two major challenges. First, there is no way to guarantee that the records have not been changed in the history table. Second, queries on the history table are slow because of the large size of the table and the need to run the queries against a large subset of data in the table.

The database specialist must design a solution that prevents modification of the historical records. The solution also must maximize the speed of the queries.

Which solution will meet these requirements?

Options:

A.

Migrate the current solution to an Amazon DynamoDB table. Use DynamoDB Streams to keep track of changes. Use DynamoDB Accelerator (DAX) to improve query performance.

B.

Write employee record history to Amazon Quantum Ledger Database (Amazon QLDB) for historical records and to an Amazon OpenSearch Service domain for queries.

C.

Use Amazon Aurora PostgreSQL to store employee record history in a single table. Use Aurora Auto Scaling to provision more capacity.

D.

Build a solution that uses an Amazon Redshift cluster for historical records. Query the Redshift cluster directly as needed.

Question 13

A company recently acquired a new business. A database specialist must migrate an unencrypted 12 TB Amazon RDS for MySQL DB instance to a new AWS account. The database specialist needs to minimize the amount of time required to migrate the database.

Which solution meets these requirements?

Options:

A.

Create a snapshot of the source DB instance in the source account. Share the snapshot with the destination account. In the target account, create a DB instance from the snapshot.

B.

Use AWS Resource Access Manager to share the source DB instance with the destination account. Create a DB instance in the destination account using the shared resource.

C.

Create a read replica of the DB instance. Give the destination account access to the read replica. In the destination account, create a snapshot of the shared read replica and provision a new RDS for MySQL DB instance.

D.

Use mysqldump to back up the source database. Create an RDS for MySQL DB instance in the destination account. Use the mysql command to restore the backup in the destination database.
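
A minimal boto3 sketch of sharing a manual snapshot with another account, as described in option A; the snapshot identifier and account ID are hypothetical:

```python
import boto3

rds = boto3.client("rds")

# Share a manual snapshot with the destination account, which can then
# restore a new DB instance from it.
rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier="acquired-db-snapshot",  # hypothetical
    AttributeName="restore",
    ValuesToAdd=["123456789012"],  # destination AWS account ID
)
```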

Question 14

A financial services company is developing a shared data service that supports different applications from throughout the company. A Database Specialist designed a solution to leverage Amazon ElastiCache for Redis with cluster mode enabled to enhance performance and scalability. The cluster is configured to listen on port 6379.

Which combination of steps should the Database Specialist take to secure the cache data and protect it from unauthorized access? (Choose three.)

Options:

A.

Enable in-transit and at-rest encryption on the ElastiCache cluster.

B.

Ensure that Amazon CloudWatch metrics are configured in the ElastiCache cluster.

C.

Ensure the security group for the ElastiCache cluster allows all inbound traffic from itself and inbound traffic on TCP port 6379 from trusted clients only.

D.

Create an IAM policy to allow the application service roles to access all ElastiCache API actions.

E.

Ensure the security group for the ElastiCache clients authorizes inbound TCP port 6379 and port 22 traffic from the trusted ElastiCache cluster’s security group.

F.

Ensure the cluster is created with the auth-token parameter and that the parameter is used in all subsequent commands.
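
A sketch of how the encryption and AUTH settings mentioned in these options appear at cluster creation time, using boto3 with hypothetical identifiers and placeholder secret:

```python
import boto3

elasticache = boto3.client("elasticache")

# Replication group with in-transit/at-rest encryption and a Redis AUTH
# token; the security group limits TCP 6379 to trusted clients.
elasticache.create_replication_group(
    ReplicationGroupId="shared-data-service",  # hypothetical
    ReplicationGroupDescription="Secure Redis, cluster mode enabled",
    Engine="redis",
    CacheNodeType="cache.r6g.large",
    NumNodeGroups=3,
    ReplicasPerNodeGroup=1,
    TransitEncryptionEnabled=True,   # required for AUTH tokens
    AtRestEncryptionEnabled=True,
    AuthToken="replace-with-a-strong-secret",
    SecurityGroupIds=["sg-0123456789abcdef0"],  # hypothetical
)
```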

Question 15

An advertising company is developing a backend for a bidding platform. The company needs a cost-effective datastore solution that will accommodate a sudden increase in the volume of write transactions. The database also needs to make data changes available in a near real-time data stream.

Which solution will meet these requirements?

Options:

A.

Amazon Aurora MySQL Multi-AZ DB cluster

B.

Amazon Keyspaces (for Apache Cassandra)

C.

Amazon DynamoDB table with DynamoDB auto scaling

D.

Amazon DocumentDB (with MongoDB compatibility) cluster with a replica instance in a second Availability Zone

Question 16

A company plans to use AWS Database Migration Service (AWS DMS) to migrate its database from one Amazon EC2 instance to another EC2 instance as a full load task. The company wants the database to be inactive during the migration. The company will use a dms.t3.medium instance to perform the migration and will use the default settings for the migration.

Which solution will MOST improve the performance of the data migration?

Options:

A.

Increase the number of tables that are loaded in parallel.

B.

Drop all indexes on the source tables.

C.

Change the processing mode from the batch optimized apply option to transactional mode.

D.

Enable Multi-AZ on the target database while the full load task is in progress.

Question 17

A company uses Amazon Aurora for secure financial transactions. The data must always be encrypted at rest and in transit to meet compliance requirements.

Which combination of actions should a database specialist take to meet these requirements? (Choose two.)

Options:

A.

Create an Aurora Replica with encryption enabled using AWS Key Management Service (AWS KMS). Then promote the replica to master.

B.

Use SSL/TLS to secure the in-transit connection between the financial application and the Aurora DB cluster.

C.

Modify the existing Aurora DB cluster and enable encryption using an AWS Key Management Service (AWS KMS) encryption key. Apply the changes immediately.

D.

Take a snapshot of the Aurora DB cluster and encrypt the snapshot using an AWS Key Management Service (AWS KMS) encryption key. Restore the snapshot to a new DB cluster and update the financial application database endpoints.

E.

Use AWS Key Management Service (AWS KMS) to secure the in-transit connection between the financial application and the Aurora DB cluster.

Question 18

A manufacturing company has an inventory system that stores information in an Amazon Aurora MySQL DB cluster. The database tables are partitioned. The database size has grown to 3 TB. Users run one-time queries by using a SQL client. Queries that use an equijoin to join large tables are taking a long time to run.

Which action will improve query performance with the LEAST operational effort?

Options:

A.

Migrate the database to a new Amazon Redshift data warehouse.

B.

Enable hash joins on the database by setting the variable optimizer_switch to hash_join=on.

C.

Take a snapshot of the DB cluster. Create a new DB instance by using the snapshot, and enable parallel query mode.

D.

Add an Aurora read replica.

Question 19

A company wants to migrate its existing on-premises Oracle database to Amazon Aurora PostgreSQL. The migration must be completed with minimal downtime using AWS DMS. A Database Specialist must validate that the data was migrated accurately from the source to the target before the cutover. The migration must have minimal impact on the performance of the source database.

Which approach will MOST effectively meet these requirements?

Options:

A.

Use the AWS Schema Conversion Tool (AWS SCT) to convert source Oracle database schemas to the target Aurora DB cluster. Verify the datatype of the columns.

B.

Use the table metrics of the AWS DMS task created for migrating the data to verify the statistics for the tables being migrated and to verify that the data definition language (DDL) statements are completed.

C.

Enable the AWS Schema Conversion Tool (AWS SCT) premigration validation and review the premigration checklist to make sure there are no issues with the conversion.

D.

Enable AWS DMS data validation on the task so the AWS DMS task compares the source and target records, and reports any mismatches.
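
For reference, data validation as described in option D is switched on in the DMS task settings; a minimal sketch of the relevant fragment (the thread count is illustrative):

```python
import json

# Task-settings fragment that turns on row-level validation: DMS compares
# source and target rows after they load and reports mismatches per table.
validation_settings = {
    "ValidationSettings": {
        "EnableValidation": True,
        "ThreadCount": 5,  # illustrative
    }
}
print(json.dumps(validation_settings, indent=2))
```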

Question 20

A finance company migrated its 3 TB on-premises PostgreSQL database to an Amazon Aurora PostgreSQL DB cluster. During a review after the migration, a database specialist discovers that the database is not encrypted at rest. The database must be encrypted at rest as soon as possible to meet security requirements. The database specialist must enable encryption for the DB cluster with minimal downtime.

Which solution will meet these requirements?

Options:

A.

Modify the unencrypted DB cluster using the AWS Management Console. Enable encryption and choose to apply the change immediately.

B.

Take a snapshot of the unencrypted DB cluster and restore it to a new DB cluster with encryption enabled. Update any database connection strings to reference the new DB cluster endpoint, and then delete the unencrypted DB cluster.

C.

Create an encrypted Aurora Replica of the unencrypted DB cluster. Promote the Aurora Replica as the new master.

D.

Create a new DB cluster with encryption enabled and use the pg_dump and pg_restore utilities to load data to the new DB cluster. Update any database connection strings to reference the new DB cluster endpoint, and then delete the unencrypted DB cluster.

Question 21

A database specialist needs to move an Amazon RDS DB instance from one AWS account to another AWS account.

Which solution will meet this requirement with the LEAST operational effort?

Options:

A.

Use AWS Database Migration Service (AWS DMS) to migrate the DB instance from the source AWS account to the destination AWS account.

B.

Create a DB snapshot of the DB instance. Share the snapshot with the destination AWS account. Create a new DB instance by restoring the snapshot in the destination AWS account.

C.

Create a Multi-AZ deployment for the DB instance. Create a read replica for the DB instance in the source AWS account. Use the read replica to replicate the data into the DB instance in the destination AWS account.

D.

Use AWS DataSync to back up the DB instance in the source AWS account. Use AWS Resource Access Manager (AWS RAM) to restore the backup in the destination AWS account.

Question 22

A company is writing a new survey application to be used with a weekly televised game show. The application will be available for 2 hours each week. The company expects to receive over 500,000 entries every week, with each survey asking 2-3 multiple-choice questions of each user. A Database Specialist needs to select a platform that is highly scalable for a large number of concurrent writes to handle the anticipated volume.

Which AWS services should the Database Specialist consider? (Choose two.)

Options:

A.

Amazon DynamoDB

B.

Amazon Redshift

C.

Amazon Neptune

D.

Amazon Elasticsearch Service

E.

Amazon ElastiCache

Question 23

A database expert is responsible for building a highly available online transaction processing (OLTP) solution that makes use of Amazon RDS for MySQL production databases. Disaster recovery criteria include a cross-regional deployment and an RPO and RTO of 5 and 30 minutes, respectively.

What should the database professional do to ensure that the database meets the criteria for high availability and disaster recovery?

Options:

A.

Use a Multi-AZ deployment in each Region.

B.

Use read replica deployments in all Availability Zones of the secondary Region.

C.

Use Multi-AZ and read replica deployments within a Region.

D.

Use Multi-AZ and deploy a read replica in a secondary Region.

Question 24

A company wants to migrate its on-premises MySQL databases to Amazon RDS for MySQL. To comply with the company’s security policy, all databases must be encrypted at rest. RDS DB instance snapshots must also be shared across various accounts to provision testing and staging environments.

Which solution meets these requirements?

Options:

A.

Create an RDS for MySQL DB instance with an AWS Key Management Service (AWS KMS) customer managed CMK. Update the key policy to include the Amazon Resource Name (ARN) of the other AWS accounts as a principal, and then allow the kms:CreateGrant action.

B.

Create an RDS for MySQL DB instance with an AWS managed CMK. Create a new key policy to include the Amazon Resource Name (ARN) of the other AWS accounts as a principal, and then allow the kms:CreateGrant action.

C.

Create an RDS for MySQL DB instance with an AWS owned CMK. Create a new key policy to include the administrator user name of the other AWS accounts as a principal, and then allow the kms:CreateGrant action.

D.

Create an RDS for MySQL DB instance with an AWS CloudHSM key. Update the key policy to include the Amazon Resource Name (ARN) of the other AWS accounts as a principal, and then allow the kms:CreateGrant action.

Question 25

An ecommerce company uses a backend application that stores data in an Amazon DynamoDB table. The backend application runs in a private subnet in a VPC and must connect to this table.

The company must minimize any network latency that results from network connectivity issues, even during periods of heavy application usage. A database administrator also needs the ability to use a private connection to connect to the DynamoDB table from the application.

Which solution will meet these requirements?

Options:

A.

Use network ACLs to ensure that any outgoing or incoming connections to any port except DynamoDB are deactivated. Encrypt API calls by using TLS.

B.

Create a VPC endpoint for DynamoDB in the application's VPC. Use the VPC endpoint to access the table.

C.

Create an AWS Lambda function that has access to DynamoDB. Restrict outgoing access only to this Lambda function from the application.

D.

Use a VPN to route all communication to DynamoDB through the company's own corporate network infrastructure.
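
A minimal boto3 sketch of the gateway endpoint approach in option B; the VPC ID, Region, and route table ID are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")

# A gateway endpoint keeps DynamoDB traffic on the AWS network and gives
# the private subnet a private path to the table.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",  # hypothetical
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],  # private subnet's route table
)
```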

Question 26

A company uses Amazon DynamoDB as the data store for its ecommerce website. The website receives little to no traffic at night, and the majority of the traffic occurs during the day. The traffic growth during peak hours is gradual and predictable on a daily basis, but it can be orders of magnitude higher than during off-peak hours.

The company initially provisioned capacity based on its average volume during the day without accounting for the variability in traffic patterns. However, the website is experiencing a significant amount of throttling during peak hours. The company wants to reduce the amount of throttling while minimizing costs.

What should a database specialist do to meet these requirements?

Options:

A.

Use reserved capacity. Set it to the capacity levels required for peak daytime throughput.

B.

Use provisioned capacity. Set it to the capacity levels required for peak daytime throughput.

C.

Use provisioned capacity. Create an AWS Application Auto Scaling policy to update capacity based on consumption.

D.

Use on-demand capacity.
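
A sketch of the provisioned-capacity-with-auto-scaling pattern in option C, using boto3; the table name, capacity bounds, and target utilization are hypothetical:

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the table's read capacity as a scalable target ...
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/Orders",  # hypothetical table
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=100,
    MaxCapacity=4000,
)

# ... and track ~70% consumption so capacity follows the daily curve.
# (Write capacity would be registered the same way.)
autoscaling.put_scaling_policy(
    PolicyName="orders-read-tracking",
    ServiceNamespace="dynamodb",
    ResourceId="table/Orders",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)
```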

Question 27

A worldwide gaming company's development team is experimenting with Amazon DynamoDB to store in-game events for three mobile titles. The most popular game peaks at 500,000 concurrent users, while the least popular peaks at 10,000. The typical event is 20 KB in size, and the average user session generates one event per second. Each event is assigned a millisecond timestamp and a globally unique identifier.

The lead developer created a single DynamoDB table with the following structure for the events:

  • Partition key: game name
  • Sort key: event identifier
  • Local secondary index: player identifier
  • Event time

In a small-scale development setting, the tests were successful. When the application was deployed to production, however, new events were not being added to the table, and the logs showed DynamoDB failures with the ItemCollectionSizeLimitExceededException error code.

Which design modification should a database professional suggest to the development team?

Options:

A.

Use the player identifier as the partition key. Use the event time as the sort key. Add a global secondary index with the game name as the partition key and the event time as the sort key.

B.

Create two tables. Use the game name as the partition key in both tables. Use the event time as the sort key for the first table. Use the player identifier as the sort key for the second table.

C.

Replace the sort key with a compound value consisting of the player identifier collated with the event time, separated by a dash. Add a local secondary index with the player identifier as the sort key.

D.

Create one table for each game. Use the player identifier as the partition key. Use the event time as the sort key.

Question 28

A business is operating an on-premises application that is divided into three tiers: web, application, and MySQL database. The database is predominantly accessed during business hours, with occasional bursts of activity throughout the day. As part of the company's shift to AWS, a database expert wants to increase the availability and minimize the cost of the MySQL database tier.

Which MySQL database choice satisfies these criteria?

Options:

A.

Amazon RDS for MySQL with Multi-AZ

B.

Amazon Aurora Serverless MySQL cluster

C.

Amazon Aurora MySQL cluster

D.

Amazon RDS for MySQL with read replica

Question 29

A pharmaceutical company uses Amazon Quantum Ledger Database (Amazon QLDB) to store its clinical trial data records. The company has an application that runs as AWS Lambda functions. The application is hosted in the private subnet in a VPC.

The application does not have internet access and needs to read some of the clinical data records. The company is concerned that traffic between the QLDB ledger and the VPC could leave the AWS network. The company needs to secure access to the QLDB ledger and allow the VPC traffic to have read-only access.

Which security strategy should a database specialist implement to meet these requirements?

Options:

A.

Move the QLDB ledger into a private database subnet inside the VPC. Run the Lambda functions inside the same VPC in an application private subnet. Ensure that the VPC route table allows read-only flow from the application subnet to the database subnet.

B.

Create an AWS PrivateLink VPC endpoint for the QLDB ledger. Attach a VPC policy to the VPC endpoint to allow read-only traffic for the Lambda functions that run inside the VPC.

C.

Add a security group to the QLDB ledger to allow access from the private subnets inside the VPC where the Lambda functions that access the QLDB ledger are running.

D.

Create a VPN connection to ensure pairing of the private subnet where the Lambda functions are running with the private subnet where the QLDB ledger is deployed.

Question 30

A social media company recently launched a new feature that gives users the ability to share live feeds of their daily activities with their followers. The company has an Amazon RDS for MySQL DB instance that stores data about follower engagement.

After the new feature launched, the company noticed high CPU utilization and high database latency during reads and writes. The company wants to implement a solution that will identify the source of the high CPU utilization.

Which solution will meet these requirements with the LEAST administrative oversight?

Options:

A.

Use Amazon DevOps Guru insights

B.

Use AWS CloudTrail

C.

Use Amazon CloudWatch Logs

D.

Use Amazon Aurora Database Activity Streams

Question 31

An application reads and writes data to an Amazon RDS for MySQL DB instance. A new reporting dashboard needs read-only access to the database. When the application and reports are both under heavy load, the database experiences performance degradation. A database specialist needs to improve the database performance.

What should the database specialist do to meet these requirements?

Options:

A.

Create a read replica of the DB instance. Configure the reports to connect to the replication instance endpoint.

B.

Create a read replica of the DB instance. Configure the application and reports to connect to the cluster endpoint.

C.

Enable Multi-AZ deployment. Configure the reports to connect to the standby replica.

D.

Enable Multi-AZ deployment. Configure the application and reports to connect to the cluster endpoint.

Question 32

A company uses Amazon Aurora MySQL as the primary database engine for many of its applications. A database specialist must create a dashboard to provide the company with information about user connections to databases. According to compliance requirements, the company must retain all connection logs for at least 7 years.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.

Enable advanced auditing on the Aurora cluster to log CONNECT events. Export audit logs from Amazon CloudWatch to Amazon S3 by using an AWS Lambda function that is invoked by an Amazon EventBridge (Amazon CloudWatch Events) scheduled event. Build a dashboard by using Amazon QuickSight.

B.

Capture connection attempts to the Aurora cluster with AWS Cloud Trail by using the DescribeEvents API operation. Create a CloudTrail trail to export connection logs to Amazon S3. Build a dashboard by using Amazon QuickSight.

C.

Start a database activity stream for the Aurora cluster. Push the activity records to an Amazon Kinesis data stream. Build a dynamic dashboard by using AWS Lambda.

D.

Publish the DatabaseConnections metric for the Aurora DB instances to Amazon CloudWatch. Build a dashboard by using CloudWatch dashboards.

Question 33

A ride-hailing application uses an Amazon RDS for MySQL DB instance as persistent storage for bookings. This application is very popular, and the company expects a tenfold increase in the user base over the next few months. The application experiences more traffic during the morning and evening hours.

This application has two parts:

  • An in-house booking component that accepts online bookings that directly correspond to simultaneous requests from users.
  • A third-party customer relationship management (CRM) component used by customer care representatives. The CRM uses queries to access booking data.

A database specialist needs to design a cost-effective database solution to handle this workload.

Which solution meets these requirements?

Options:

A.

Use Amazon ElastiCache for Redis to accept the bookings. Associate an AWS Lambda function to capture changes and push the booking data to the RDS for MySQL DB instance used by the CRM.

B.

Use Amazon DynamoDB to accept the bookings. Enable DynamoDB Streams and associate an AWS Lambda function to capture changes and push the booking data to an Amazon SQS queue. This triggers another Lambda function that pulls data from Amazon SQS and writes it to the RDS for MySQL DB instance used by the CRM.

C.

Use Amazon ElastiCache for Redis to accept the bookings. Associate an AWS Lambda function to capture changes and push the booking data to an Amazon Redshift database used by the CRM.

D.

Use Amazon DynamoDB to accept the bookings. Enable DynamoDB Streams and associate an AWS Lambda function to capture changes and push the booking data to Amazon Athena, which is used by the CRM.

Question 34

A Database Specialist is setting up a new Amazon Aurora DB cluster with one primary instance and three Aurora Replicas for a highly intensive, business-critical application. The Aurora DB cluster has one medium-sized primary instance, one large-sized replica, and two medium-sized replicas. The Database Specialist did not assign a promotion tier to the replicas.

In the event of a primary failure, what will occur?

Options:

A.

Aurora will promote an Aurora Replica that is of the same size as the primary instance

B.

Aurora will promote an arbitrary Aurora Replica

C.

Aurora will promote the largest-sized Aurora Replica

D.

Aurora will not promote an Aurora Replica

Question 35

A company is using Amazon Aurora MySQL as the database for its retail application on AWS. The company receives a notification of a pending database upgrade and wants to ensure upgrades do not occur before or during the most critical time of year. Company leadership is concerned that an Amazon RDS maintenance window will cause an outage during data ingestion.

Which step can be taken to ensure that the application is not interrupted?

Options:

A.

Disable weekly maintenance on the DB cluster.

B.

Clone the DB cluster and migrate it to a new copy of the database.

C.

Choose to defer the upgrade and then find an appropriate down time for patching.

D.

Set up an Aurora Replica and promote it to primary at the time of patching.

Question 36

A company has an on-premises system that tracks various database operations that occur over the lifetime of a database, including database shutdown, deletion, creation, and backup.

The company recently moved two databases to Amazon RDS and needs a solution that satisfies these same tracking requirements. The data could be used by other systems within the company.

Which solution will meet these requirements with minimal effort?

Options:

A.

Create an Amazon Cloudwatch Events rule with the operations that need to be tracked on Amazon RDS. Create an AWS Lambda function to act on these rules and write the output to the tracking systems.

B.

Create an AWS Lambda function to trigger on AWS CloudTrail API calls. Filter on specific RDS API calls and write the output to the tracking systems.

C.

Create RDS event subscriptions. Have the tracking systems subscribe to specific RDS event system notifications.

D.

Write RDS logs to Amazon Kinesis Data Firehose. Create an AWS Lambda function to act on these rules and write the output to the tracking systems.
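
A minimal boto3 sketch of the RDS event subscription approach in option C; the subscription name, topic ARN, and category list are illustrative:

```python
import boto3

rds = boto3.client("rds")

# Subscribe an SNS topic to instance lifecycle events; the tracking
# systems then consume notifications from the topic.
rds.create_event_subscription(
    SubscriptionName="db-lifecycle-tracking",  # hypothetical
    SnsTopicArn="arn:aws:sns:us-east-1:123456789012:db-events",
    SourceType="db-instance",
    EventCategories=["creation", "deletion", "backup", "availability"],
    Enabled=True,
)
```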

Question 37

In one AWS account, a business runs a two-tier ecommerce application. An Amazon RDS for MySQL Multi-AZ DB instance serves as the application's backend. A developer accidentally deleted the DB instance in the production environment. Although the organization recovered the database, the incident resulted in hours of downtime and financial loss.

Which combination of adjustments would reduce the likelihood that this error will occur again in the future? (Select three.)

Options:

A.

Grant least privilege to groups, IAM users, and roles.

B.

Allow all users to restore a database from a backup.

C.

Enable deletion protection on existing production DB instances.

D.

Use an ACL policy to restrict users from DB instance deletion.

E.

Enable AWS CloudTrail logging and Enhanced Monitoring.

Question 38

A database specialist needs to replace the encryption key for an Amazon RDS DB instance. The database specialist needs to take immediate action to ensure security of the database.

Which solution will meet these requirements?

Options:

A.

Modify the DB instance to update the encryption key. Perform this update immediately without waiting for the next scheduled maintenance window.

B.

Export the database to an Amazon S3 bucket. Import the data to an existing DB instance by using the export file. Specify a new encryption key during the import process.

C.

Create a manual snapshot of the DB instance. Create an encrypted copy of the snapshot by using a new encryption key. Create a new DB instance from the encrypted snapshot.

D.

Create a manual snapshot of the DB instance. Restore the snapshot to a new DB instance. Specify a new encryption key during the restoration process.

Question 39

On a single Amazon RDS DB instance, a business hosts a MySQL database for its ecommerce application. Automatically saving application purchases to the database results in high-volume writes. Employees routinely create purchase reports for the company. The organization wants to boost database performance and minimize downtime associated with upgrade patching.

Which technique will satisfy these criteria with the LEAST amount of operational overhead?

Options:

A.

Enable a Multi-AZ deployment of the RDS for MySQL DB instance, and enable Memcached in the MySQL option group.

B.

Enable a Multi-AZ deployment of the RDS for MySQL DB instance, and set up replication to a MySQL DB instance running on Amazon EC2.

C.

Enable a Multi-AZ deployment of the RDS for MySQL DB instance, and add a read replica.

D.

Add a read replica and promote it to an Amazon Aurora MySQL DB cluster master. Then enable Amazon Aurora Serverless.

Question 40

An ecommerce company is using Amazon DynamoDB as the backend for its order-processing application. The steady increase in the number of orders is resulting in increased DynamoDB costs. Order verification and reporting perform many repeated GetItem calls that pull similar datasets, and this read activity is contributing to the increased costs. The company wants to control these costs without significant development effort.

How should a Database Specialist address these requirements?

Options:

A.

Use AWS DMS to migrate data from DynamoDB to Amazon DocumentDB

B.

Use Amazon DynamoDB Streams and Amazon Kinesis Data Firehose to push the data into Amazon Redshift

C.

Use an Amazon ElastiCache for Redis in front of DynamoDB to boost read performance

D.

Use DynamoDB Accelerator to offload the reads

Question 41

Recently, an ecommerce business transferred one of its SQL Server databases to an Amazon RDS for SQL Server Enterprise Edition database instance. The corporation anticipates an increase in read traffic as a result of an approaching sale. To accommodate the projected read load, a database professional must establish a read replica of the database instance.

Which steps should the database professional take before creating the read replica? (Select two.)

Options:

A.

Identify a potential downtime window and stop the application calls to the source DB instance.

B.

Ensure that automatic backups are enabled for the source DB instance.

C.

Ensure that the source DB instance is a Multi-AZ deployment with Always ON Availability Groups.

D.

Ensure that the source DB instance is a Multi-AZ deployment with SQL Server Database Mirroring (DBM).

E.

Modify the read replica parameter group setting and set the value to 1.

Question 42

A company has a heterogeneous six-node production Amazon Aurora DB cluster that handles online transaction processing (OLTP) for the core business and OLAP reports for the human resources department. To match compute resources to the use case, the company has decided to have the reporting workload for the human resources department be directed to two small nodes in the Aurora DB cluster, while every other workload goes to four large nodes in the same DB cluster.

Which option would ensure that the correct nodes are always available for the appropriate workload while meeting these requirements?

Options:

A.

Use the writer endpoint for OLTP and the reader endpoint for the OLAP reporting workload.

B.

Use automatic scaling for the Aurora Replica to have the appropriate number of replicas for the desired workload.

C.

Create additional readers to cater to the different scenarios.

D.

Use custom endpoints to satisfy the different workloads.

Question 43

A company has a reporting application that runs on an Amazon EC2 instance in an isolated developer account on AWS. The application needs to retrieve data during non-peak company hours from an Amazon Aurora PostgreSQL database that runs in the company's production account. The company's security team requires that access to production resources complies with AWS security best practices.

A database administrator needs to provide the reporting application with access to the production database. The company has already configured VPC peering between the production account and the developer account. The company has also updated the route tables in both accounts with the necessary entries to correctly set up VPC peering.

What must the database administrator do to finish providing connectivity to the reporting application?

Options:

A.

Add an inbound security group rule to the database security group that allows access from the developer account VPC CIDR on port 5432. Add an outbound security group rule to the EC2 security group that allows access to the production account VPC CIDR on port 5432.

B.

Add an outbound security group rule to the database security group that allows access from the developer account VPC CIDR on port 5432. Add an outbound security group rule to the EC2 security group that allows access to the production account VPC CIDR on port 5432.

C.

Add an inbound security group rule to the database security group that allows access from the developer account VPC CIDR on all TCP ports. Add an inbound security group rule to the EC2 security group that allows access to the production account VPC CIDR on port 5432.

D.

Add an inbound security group rule to the database security group that allows access from the developer account VPC CIDR on port 5432. Add an outbound security group rule to the EC2 security group that allows access to the production account VPC CIDR on all TCP ports.
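
For illustration, an inbound database security group rule like the one described in option A might be added as follows; the security group ID and CIDR are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")  # production account credentials

# Inbound rule on the database security group: PostgreSQL (5432)
# from the developer account's VPC CIDR over the peering connection.
ec2.authorize_security_group_ingress(
    GroupId="sg-0db0123456789abcd",  # hypothetical DB security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "IpRanges": [{
            "CidrIp": "10.20.0.0/16",  # developer VPC CIDR
            "Description": "Reporting EC2 access",
        }],
    }],
)
```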

Question 44

A large company has a variety of Amazon DB clusters. Each of these clusters has various configurations that adhere to various requirements. Depending on the team and use case, these configurations can be organized into broader categories.

A database administrator wants to make the process of storing and modifying these parameters more systematic. The database administrator also wants to ensure that changes to individual categories of configurations are automatically applied to all instances when required.

Which AWS service or feature will help automate and achieve this objective?

Options:

A.

AWS Systems Manager Parameter Store

B.

DB parameter group

C.

AWS Config

D.

AWS Secrets Manager

Question 45

A company has an application that uses an Amazon DynamoDB table to store user data. Every morning, a single-threaded process calls the DynamoDB API Scan operation to scan the entire table and generate a critical start-of-day report for management. A successful marketing campaign recently doubled the number of items in the table, and now the process takes too long to run and the report is not generated in time.

A database specialist needs to improve the performance of the process. The database specialist notes that, when the process is running, 15% of the table’s provisioned read capacity units (RCUs) are being used.

What should the database specialist do?

Options:

A.

Enable auto scaling for the DynamoDB table.

B.

Use four threads and parallel DynamoDB API Scan operations.

C.

Double the table’s provisioned RCUs.

D.

Set the Limit and Offset parameters before every call to the API.
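
A sketch of the parallel Scan approach in option B, using boto3 with a hypothetical table name; each worker scans a disjoint segment, so the threads divide the work of reading the table among themselves:

```python
from concurrent.futures import ThreadPoolExecutor

import boto3

dynamodb = boto3.client("dynamodb")
TOTAL_SEGMENTS = 4
TABLE_NAME = "UserData"  # hypothetical

def scan_segment(segment):
    """Scan one logical segment of the table, following pagination."""
    items = []
    kwargs = {
        "TableName": TABLE_NAME,
        "Segment": segment,
        "TotalSegments": TOTAL_SEGMENTS,
    }
    while True:
        page = dynamodb.scan(**kwargs)
        items.extend(page["Items"])
        if "LastEvaluatedKey" not in page:
            return items
        kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]

# Four workers scan the four segments concurrently.
with ThreadPoolExecutor(max_workers=TOTAL_SEGMENTS) as pool:
    segments = pool.map(scan_segment, range(TOTAL_SEGMENTS))

all_items = [item for segment in segments for item in segment]
```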

Question 46

A company has two separate AWS accounts: one for the business unit and another for corporate analytics. The company wants to replicate the business unit data stored in Amazon RDS for MySQL in us-east-1 to its corporate analytics Amazon Redshift environment in us-west-1. The company wants to use AWS DMS with Amazon RDS as the source endpoint and Amazon Redshift as the target endpoint.

Which action will allow AWS DMS to perform the replication?

Options:

A.

Configure the AWS DMS replication instance in the same account and Region as Amazon Redshift.

B.

Configure the AWS DMS replication instance in the same account as Amazon Redshift and in the same Region as Amazon RDS.

C.

Configure the AWS DMS replication instance in its own account and in the same Region as Amazon Redshift.

D.

Configure the AWS DMS replication instance in the same account and Region as Amazon RDS.

Question 47

A company uses an on-premises Microsoft SQL Server database to host relational and JSON data and to run daily ETL and advanced analytics. The company wants to migrate the database to the AWS Cloud. A database specialist must choose one or more AWS services to run the company's workloads.

Which solution will meet these requirements in the MOST operationally efficient manner?

Options:

A.

Use Amazon Redshift for relational data. Use Amazon DynamoDB for JSON data

B.

Use Amazon Redshift for relational data and JSON data.

C.

Use Amazon RDS for relational data. Use Amazon Neptune for JSON data

D.

Use Amazon Redshift for relational data. Use Amazon S3 for JSON data.

Question 48

A Database Specialist needs to define a database migration strategy to migrate an on-premises Oracle database to an Amazon Aurora MySQL DB cluster. The company requires near-zero downtime for the data migration. The solution must also be cost-effective.

Which approach should the Database Specialist take?

Options:

A.

Dump all the tables from the Oracle database into an Amazon S3 bucket using datapump (expdp). Run data transformations in AWS Glue. Load the data from the S3 bucket to the Aurora DB cluster.

B.

Order an AWS Snowball appliance and copy the Oracle backup to the Snowball appliance. Once the Snowball data is delivered to Amazon S3, create a new Aurora DB cluster. Enable the S3 integration to migrate the data directly from Amazon S3 to Amazon RDS.

C.

Use the AWS Schema Conversion Tool (AWS SCT) to help rewrite database objects to MySQL during the schema migration. Use AWS DMS to perform the full load and change data capture (CDC) tasks.

D.

Use AWS Server Migration Service (AWS SMS) to import the Oracle virtual machine image as an Amazon EC2 instance. Use the Oracle Logical Dump utility to migrate the Oracle data from Amazon EC2 to an Aurora DB cluster.

Question 49

A business uses Amazon EC2 instances in VPC A to serve an internal file-sharing application. This application is supported by an Amazon ElastiCache cluster in VPC B, which is peered with VPC A. The corporation migrates its application instances from VPC A to VPC B. According to the logs, the file-sharing application is no longer able to connect to the ElastiCache cluster.

What is the best course of action for a database professional to take in order to remedy this issue?

Options:

A.

Create a second security group on the EC2 instances. Add an outbound rule to allow traffic from the ElastiCache cluster security group.

B.

Delete the ElastiCache security group. Add an interface VPC endpoint to enable the EC2 instances to connect to the ElastiCache cluster.

C.

Modify the ElastiCache security group by adding outbound rules that allow traffic to VPC CIDR blocks from the ElastiCache cluster.

D.

Modify the ElastiCache security group by adding an inbound rule that allows traffic from the EC2 instances security group to the ElastiCache cluster.

Question 50

A large gaming company is creating a centralized solution to store player session state for multiple online games. The workload requires key-value storage with low latency and will be an equal mix of reads and writes. Data should be written into the AWS Region closest to the user across the games’ geographically distributed user base. The architecture should minimize the amount of overhead required to manage the replication of data between Regions.

Which solution meets these requirements?

Options:

A.

Amazon RDS for MySQL with multi-Region read replicas

B.

Amazon Aurora global database

C.

Amazon RDS for Oracle with GoldenGate

D.

Amazon DynamoDB global tables

Question 51

A company is about to launch a new product, and test databases must be re-created from production data. The company runs its production databases on an Amazon Aurora MySQL DB cluster. A Database Specialist needs to deploy a solution to create these test databases as quickly as possible with the least amount of administrative effort.

What should the Database Specialist do to meet these requirements?

Options:

A.

Restore a snapshot from the production cluster into test clusters

B.

Create logical dumps of the production cluster and restore them into new test clusters

C.

Use database cloning to create clones of the production cluster

D.

Add an additional read replica to the production cluster and use that node for testing

Question 52

A database specialist is building an AWS CloudFormation stack. The specialist wants to prevent the stack's Amazon RDS ProductionDatabase resource from being accidentally deleted.

Which solution will satisfy this criterion?

Options:

A.

Create a stack policy to prevent updates. Include "Effect" : "ProductionDatabase" and "Resource" : "Deny" in the policy.

B.

Create an AWS CloudFormation stack in XML format. Set xAttribute as false.

C.

Create an RDS DB instance without the DeletionPolicy attribute. Disable termination protection.

D.

Create a stack policy to prevent updates. Include "Effect" : "Deny" and "Resource" : "ProductionDatabase" in the policy.
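
For reference, a stack policy that denies updates to a single logical resource, applied with boto3, might look like the following sketch; the stack name is hypothetical, and a DeletionPolicy attribute on the resource itself is a complementary safeguard against deletion:

```python
import json

import boto3

cloudformation = boto3.client("cloudformation")

# Allow all updates except those touching the ProductionDatabase resource.
stack_policy = {
    "Statement": [
        {"Effect": "Allow", "Action": "Update:*",
         "Principal": "*", "Resource": "*"},
        {"Effect": "Deny", "Action": "Update:*",
         "Principal": "*",
         "Resource": "LogicalResourceId/ProductionDatabase"},
    ]
}

cloudformation.set_stack_policy(
    StackName="prod-stack",  # hypothetical stack name
    StackPolicyBody=json.dumps(stack_policy),
)
```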

Question 53

The Amazon CloudWatch metric for FreeLocalStorage on an Amazon Aurora MySQL DB instance shows that the amount of local storage is below 10 MB. A database engineer must increase the local storage available in the Aurora DB instance.

How should the database engineer meet this requirement?

Options:

A.

Modify the DB instance to use an instance class that provides more local SSD storage.

B.

Modify the Aurora DB cluster to enable automatic volume resizing.

C.

Increase the local storage by upgrading the database engine version.

D.

Modify the DB instance and configure the required storage volume in the configuration section.

Question 54

A company is closing one of its remote data centers. This site runs a 100 TB on-premises data warehouse solution. The company plans to use the AWS Schema Conversion Tool (AWS SCT) and AWS DMS for the migration to AWS. The site network bandwidth is 500 Mbps. A Database Specialist wants to migrate the on- premises data using Amazon S3 as the data lake and Amazon Redshift as the data warehouse. This move must take place during a 2-week period when source systems are shut down for maintenance. The data should stay encrypted at rest and in transit.

Which approach has the least risk and the highest likelihood of a successful data transfer?

Options:

A.

Set up a VPN tunnel for encrypting data over the network from the data center to AWS. Leverage AWS SCT and apply the converted schema to Amazon Redshift. Once complete, start an AWS DMS task to move the data from the source to Amazon S3. Use AWS Glue to load the data from Amazon S3 to Amazon Redshift.

B.

Leverage AWS SCT and apply the converted schema to Amazon Redshift. Start an AWS DMS task with two AWS Snowball Edge devices to copy data from on-premises to Amazon S3 with AWS KMS encryption. Use AWS DMS to finish copying data to Amazon Redshift.

C.

Leverage AWS SCT and apply the converted schema to Amazon Redshift. Once complete, use a fleet of 10 TB dedicated encrypted drives using the AWS Import/Export feature to copy data from on-premises to Amazon S3 with AWS KMS encryption. Use AWS Glue to load the data to Amazon redshift.

D.

Set up a VPN tunnel for encrypting data over the network from the data center to AWS. Leverage a native database export feature to export the data and compress the files. Use the aws s3 cp command with multipart upload to upload these files to Amazon S3 with AWS KMS encryption. Once complete, load the data to Amazon Redshift using AWS Glue.

Question 55

A startup company is building a new application to allow users to visualize their on-premises and cloud networking components. The company expects billions of components to be stored and requires responses in milliseconds. The application should be able to identify:

  • The networks and routes affected if a particular component fails.
  • The networks that have redundant routes between them.
  • The networks that do not have redundant routes between them.
  • The fastest path between two networks.

Which database engine meets these requirements?

Options:

A.

Amazon Aurora MySQL

B.

Amazon Neptune

C.

Amazon ElastiCache for Redis

D.

Amazon DynamoDB

Question 56

An Amazon RDS EBS-optimized instance with Provisioned IOPS (PIOPS) storage is using less than half of its allocated IOPS over the course of several hours under constant load. The RDS instance exhibits multi-second read and write latency, and uses all of its maximum bandwidth for read throughput, yet the instance uses less than half of its CPU and RAM resources.

What should a Database Specialist do in this situation to increase performance and return latency to sub- second levels?

Options:

A.

Increase the size of the DB instance storage

B.

Change the underlying EBS storage type to General Purpose SSD (gp2)

C.

Disable EBS optimization on the DB instance

D.

Change the DB instance to an instance class with a higher maximum bandwidth

Question 57

To meet new data compliance requirements, a company needs to keep critical data durably stored and readily accessible for 7 years. Data that is more than 1 year old is considered archival data and must automatically be moved out of the Amazon Aurora MySQL DB cluster every week. On average, around 10 GB of new data is added to the database every month. A database specialist must choose the most operationally efficient solution to migrate the archival data to Amazon S3.

Which solution meets these requirements?

Options:

A.

Create a custom script that exports archival data from the DB cluster to Amazon S3 using a SQL view, then deletes the archival data from the DB cluster. Launch an Amazon EC2 instance with a weekly cron job to execute the custom script.

B.

Configure an AWS Lambda function that exports archival data from the DB cluster to Amazon S3 using a SELECT INTO OUTFILE S3 statement, then deletes the archival data from the DB cluster. Schedule the Lambda function to run weekly using Amazon EventBridge (Amazon CloudWatch Events).

C.

Configure two AWS Lambda functions: one that exports archival data from the DB cluster to Amazon S3 using the mysqldump utility, and another that deletes the archival data from the DB cluster. Schedule both Lambda functions to run weekly using Amazon EventBridge (Amazon CloudWatch Events).

D.

Use AWS Database Migration Service (AWS DMS) to continually export the archival data from the DB cluster to Amazon S3. Configure an AWS Data Pipeline process to run weekly that executes a custom SQL script to delete the archival data from the DB cluster.
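
A sketch of the Lambda handler pattern in option B, assuming PyMySQL is available via a Lambda layer and connection details come from environment variables; the schema, table, bucket, and retention column are hypothetical, and the cluster is assumed to have an IAM role permitting writes to the bucket:

```python
import os

import pymysql  # assumed to be packaged in a Lambda layer

def handler(event, context):
    """Export year-old rows to S3, then delete them from the cluster."""
    conn = pymysql.connect(
        host=os.environ["DB_HOST"],
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
        database="appdb",  # hypothetical schema
    )
    try:
        with conn.cursor() as cur:
            # Aurora MySQL can write query results directly to S3.
            # (The URI may need a Region prefix, e.g. s3-us-east-1://...)
            cur.execute(
                "SELECT * FROM records "
                "WHERE created_at < NOW() - INTERVAL 1 YEAR "
                "INTO OUTFILE S3 's3://archive-bucket/records' "
                "FORMAT CSV OVERWRITE ON"
            )
            cur.execute(
                "DELETE FROM records "
                "WHERE created_at < NOW() - INTERVAL 1 YEAR"
            )
        conn.commit()
    finally:
        conn.close()
```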

Question 58

A company is hosting critical business data in an Amazon Redshift cluster. Due to the sensitive nature of the data, the cluster is encrypted at rest using AWS KMS. As a part of disaster recovery requirements, the company needs to copy the Amazon Redshift snapshots to another Region.

Which steps should be taken in the AWS Management Console to meet the disaster recovery requirements?

Options:

A.

Create a new KMS customer master key in the source Region. Switch to the destination Region, enable Amazon Redshift cross-Region snapshots, and use the KMS key of the source Region.

B.

Create a new IAM role with access to the KMS key. Enable Amazon Redshift cross-Region replication using the new IAM role, and use the KMS key of the source Region.

C.

Enable Amazon Redshift cross-Region snapshots in the source Region, and create a snapshot copy grant and use a KMS key in the destination Region.

D.

Create a new KMS customer master key in the destination Region and create a new IAM role with access to the new KMS key. Enable Amazon Redshift cross-Region replication in the source Region and use the KMS key of the destination Region.
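
For context, the snapshot copy grant flow in option C maps to two API calls. A minimal boto3 sketch, with illustrative names, key ARNs, and Regions; the grant is created against a KMS key in the destination Region, and cross-Region copy is then enabled on the cluster in the source Region:

    import boto3

    DEST_REGION = "us-west-2"

    # In the destination Region: create the snapshot copy grant against a
    # KMS key that lives in that Region.
    dest_redshift = boto3.client("redshift", region_name=DEST_REGION)
    dest_redshift.create_snapshot_copy_grant(
        SnapshotCopyGrantName="dr-copy-grant",
        KmsKeyId="arn:aws:kms:us-west-2:123456789012:key/EXAMPLE",
    )

    # In the source Region: enable cross-Region snapshot copy on the cluster.
    src_redshift = boto3.client("redshift", region_name="us-east-1")
    src_redshift.enable_snapshot_copy(
        ClusterIdentifier="prod-cluster",
        DestinationRegion=DEST_REGION,
        RetentionPeriod=7,                # days to keep copied snapshots
        SnapshotCopyGrantName="dr-copy-grant",
    )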

Question 59

Recently, a financial institution created a portfolio management service. The application's backend is powered by Amazon Aurora MySQL.

The firm requires a recovery time objective (RTO) of 5 minutes and a recovery point objective (RPO) of 5 minutes. A database professional must design a disaster recovery solution that is efficient and has low replication latency.

How should the database professional tackle these requirements?

Options:

A.

Configure AWS Database Migration Service (AWS DMS) and create a replica in a different AWS Region.

B.

Configure an Amazon Aurora global database and add a different AWS Region.

C.

Configure a binlog and create a replica in a different AWS Region.

D.

Configure a cross-Region read replica.

Question 60

A company is using an Amazon Aurora PostgreSQL DB cluster for the backend of its mobile application. The application is running continuously and a database specialist is satisfied with high availability and fast failover, but is concerned about performance degradation after failover.

How can the database specialist minimize the performance degradation after failover?

Options:

A.

Enable cluster cache management for the Aurora DB cluster and set the promotion priority for the writer DB instance and replica to tier-0

B.

Enable cluster cache management for the Aurora DB cluster and set the promotion priority for the writer DB instance and replica to tier-1

C.

Enable Query Plan Management for the Aurora DB cluster and perform a manual plan capture

D.

Enable Query Plan Management for the Aurora DB cluster and force the query optimizer to use the desired plan
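
For context, cluster cache management for Aurora PostgreSQL is controlled by the apg_ccm_enabled parameter in a custom DB cluster parameter group, with promotion tiers set per instance. A minimal boto3 sketch, with illustrative identifiers:

    import boto3

    rds = boto3.client("rds")

    # Turn on cluster cache management via the custom cluster parameter group.
    rds.modify_db_cluster_parameter_group(
        DBClusterParameterGroupName="custom-aurora-pg",
        Parameters=[{
            "ParameterName": "apg_ccm_enabled",
            "ParameterValue": "on",
            "ApplyMethod": "immediate",
        }],
    )

    # Place the writer and its designated failover replica in the same tier.
    for instance_id in ["writer-instance", "replica-instance"]:
        rds.modify_db_instance(
            DBInstanceIdentifier=instance_id,
            PromotionTier=0,
            ApplyImmediately=True,
        )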

Question 61

A large ecommerce company uses Amazon DynamoDB to handle the transactions on its web portal. Traffic patterns throughout the year are usually stable; however, a large event is planned. The company knows that traffic will increase by up to 10 times the normal load over the 3-day event. When sale prices are published during the event, traffic will spike rapidly.

How should a Database Specialist ensure DynamoDB can handle the increased traffic?

Options:

A.

Ensure the table is always provisioned to meet peak needs

B.

Allow burst capacity to handle the additional load

C.

Set an AWS Application Auto Scaling policy for the table to handle the increase in traffic

D.

Preprovision additional capacity for the known peaks and then reduce the capacity after the event
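
For context, pre-provisioning capacity as described in option D is a single UpdateTable call before the event, and another one afterward to scale back down. A minimal boto3 sketch with illustrative numbers:

    import boto3

    dynamodb = boto3.client("dynamodb")
    dynamodb.update_table(
        TableName="transactions",
        ProvisionedThroughput={
            "ReadCapacityUnits": 10000,   # roughly 10x the normal baseline
            "WriteCapacityUnits": 10000,
        },
    )

Note that DynamoDB limits how many times provisioned throughput can be decreased per table per day, so the post-event scale-down is typically done in a single step.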

Question 62

A Database Specialist is planning to create a read replica of an existing Amazon RDS for MySQL Multi-AZ DB instance. When using the AWS Management Console to conduct this task, the Database Specialist discovers that the source RDS DB instance does not appear in the read replica source selection box, so the read replica cannot be created.

What is the most likely reason for this?

Options:

A.

The source DB instance has to be converted to Single-AZ first to create a read replica from it.

B.

Enhanced Monitoring is not enabled on the source DB instance.

C.

The minor MySQL version in the source DB instance does not support read replicas.

D.

Automated backups are not enabled on the source DB instance.

Question 63

A company’s database specialist disabled TLS on an Amazon DocumentDB cluster to perform benchmarking tests. A few days after this change was implemented, a database specialist trainee accidentally deleted multiple tables. The database specialist restored the database from available snapshots. An hour after restoring the cluster, the database specialist is still unable to connect to the new cluster endpoint.

What should the database specialist do to connect to the new, restored Amazon DocumentDB cluster?

Options:

A.

Change the restored cluster’s parameter group to the original cluster’s custom parameter group.

B.

Change the restored cluster’s parameter group to the Amazon DocumentDB default parameter group.

C.

Configure the interface VPC endpoint and associate the new Amazon DocumentDB cluster.

D.

Run the syncInstances command in AWS DataSync.

Question 64

An internet advertising firm stores its data in an Amazon DynamoDB table. DynamoDB Streams is enabled on the table, and one of the keys has a global secondary index. The table is encrypted using a customer-managed AWS Key Management Service (AWS KMS) key.

The firm has decided to expand worldwide and wants to replicate the table using DynamoDB global tables in a new AWS Region.

An administrator observes the following upon review:

  • No role with the dynamodb:CreateGlobalTable permission exists in the account.
  • An empty table with the same name exists in the new Region where replication is desired.
  • A global secondary index with the same partition key but a different sort key exists in the new Region where replication is desired.

Which settings will prevent you from creating a global table or replica in the new Region? (Select two.)

Options:

A.

A global secondary index with the same partition key but a different sort key exists in the new Region where replication is desired.

B.

An empty table with the same name exists in the Region where replication is desired.

C.

No role with the dynamodb:CreateGlobalTable permission exists in the account.

D.

DynamoDB Streams is enabled for the table.

E.

The table is encrypted using a KMS customer managed key.

Question 65

A ride-hailing application stores bookings in a persistent Amazon RDS for MySQL DB instance. The application is very popular, and the company anticipates a tenfold increase in its user base over the next several months. The application receives a higher volume of traffic in the morning and evening.

This application is divided into two components:

  • An internal booking component that accepts online reservations in response to concurrent user queries.

  • A third-party customer relationship management (CRM) component that customer service agents use. Booking data is accessed through queries in the CRM.

To manage this workload effectively, a database specialist must design a cost-effective database solution.

Which solution satisfies these criteria?

Options:

A.

Use Amazon ElastiCache for Redis to accept the bookings. Associate an AWS Lambda function to capture changes and push the booking data to the RDS for MySQL DB instance used by the CRM.

B.

Use Amazon DynamoDB to accept the bookings. Enable DynamoDB Streams and associate an AWS Lambda function to capture changes and push the booking data to an Amazon SQS queue. This triggers another Lambda function that pulls data from Amazon SQS and writes it to the RDS for MySQL DB instance used by the CRM.

C.

Use Amazon ElastiCache for Redis to accept the bookings. Associate an AWS Lambda function to capture changes and push the booking data to an Amazon Redshift database used by the CRM.

D.

Use Amazon DynamoDB to accept the bookings. Enable DynamoDB Streams and associate an AWS Lambda function to capture changes and push the booking data to Amazon Athena, which is used by the CRM.

Question 66

A company has more than 100 AWS accounts that need Amazon RDS instances. The company wants to build an automated solution to deploy the RDS instances with specific compliance parameters. The data does not need to be replicated. The company needs to create the databases within 1 day.

Which solution will meet these requirements in the MOST operationally efficient way?

Options:

A.

Create RDS resources by using AWS CloudFormation. Share the CloudFormation template with each account.

B.

Create an RDS snapshot. Share the snapshot with each account. Deploy the snapshot into each account.

C.

Use AWS CloudFormation to create RDS instances in each account. Run AWS Database Migration Service (AWS DMS) replication to each of the created instances.

D.

Create a script by using the AWS CLI to copy the RDS instance into the other accounts from a template account.

Question 67

A company is using 5 TB Amazon RDS DB instances and needs to maintain 5 years of monthly database backups for compliance purposes. A Database Administrator must provide Auditors with data within 24 hours. Which solution will meet these requirements and is the MOST operationally efficient?

Options:

A.

Create an AWS Lambda function to run on the first day of every month to take a manual RDS snapshot. Move the snapshot to the company’s Amazon S3 bucket.

B.

Create an AWS Lambda function to run on the first day of every month to take a manual RDS snapshot.

C.

Create an RDS snapshot schedule from the AWS Management Console to take a snapshot every 30 days.

D.

Create an AWS Lambda function to run on the first day of every month to create an automated RDS snapshot.

Question 68

A corporation is transitioning from an IBM Informix database to an Amazon RDS for SQL Server Multi-AZ deployment with Always On Availability Groups (AGs). SQL Server Agent jobs are scheduled to run at 5-minute intervals on the Always On AG listener to synchronize data between the Informix and SQL Server databases. After a successful failover to the secondary node with minimal delay, users experience hours of stale data.

How can a database professional guarantee that consumers view the most current data after a failover?

Options:

A.

Set TTL to less than 30 seconds for cached DNS values on the Always On AG listener.

B.

Break up large transactions into multiple smaller transactions that complete in less than 5 minutes.

C.

Set the databases on the secondary node to read-only mode.

D.

Create the SQL Server Agent jobs on the secondary node from a script when the secondary node takes over after a failure.

Question 69

A Database Specialist is designing a new database infrastructure for a ride hailing application. The application data includes a ride tracking system that stores GPS coordinates for all rides. Real-time statistics and metadata lookups must be performed with high throughput and microsecond latency. The database should be fault tolerant with minimal operational overhead and development effort.

Which solution meets these requirements in the MOST efficient way?

Options:

A.

Use Amazon RDS for MySQL as the database and use Amazon ElastiCache

B.

Use Amazon DynamoDB as the database and use DynamoDB Accelerator

C.

Use Amazon Aurora MySQL as the database and use Aurora’s buffer cache

D.

Use Amazon DynamoDB as the database and use Amazon API Gateway

Question 70

A business's production database is hosted on a single-node Amazon RDS for MySQL DB instance. The database instance is hosted in a United States AWS Region.

A week before a major sales event, a new database maintenance update is released and marked as required. The firm wants to minimize the DB instance's downtime and asks a database specialist to keep the DB instance highly available until the sales event concludes.

Which solution will satisfy these criteria?

Options:

A.

Defer the maintenance update until the sales event is over.

B.

Create a read replica with the latest update. Initiate a failover before the sales event.

C.

Create a read replica with the latest update. Transfer all read-only traffic to the read replica during the sales event.

D.

Convert the DB instance into a Multi-AZ deployment. Apply the maintenance update.

Question 71

A company has a web application that uses Amazon API Gateway to route HTTPS requests to AWS Lambda functions. The application uses an Amazon Aurora MySQL database for its data storage. The application has experienced unpredictable surges in traffic that overwhelm the database with too many connection requests. The company needs to implement a scalable solution that is more resilient to database failures as quickly as possible.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.

Migrate the Aurora MySQL database to Amazon Aurora Serverless by restoring a snapshot. Change the endpoint in the Lambda functions to use the new database.

B.

Migrate the Aurora MySQL database to Amazon DynamoDB tables by using AWS Database Migration Service (AWS DMS). Change the endpoint in the Lambda functions to use the new database.

C.

Create an Amazon EventBridge rule that invokes a Lambda function. Code the function to iterate over all existing connections and to call MySQL queries to end any connections in the sleep state.

D.

Increase the instance class for the Aurora database with more memory. Set a larger value for the max_connections parameter.

Question 72

A business's production databases are housed on a 3 TB Amazon Aurora MySQL DB cluster. The DB cluster is deployed in the us-east-1 Region. For disaster recovery (DR) purposes, the company's database specialist needs to quickly deploy the DB cluster in another AWS Region to handle the production load with an RTO of less than 2 hours.

Which approach is the MOST operationally efficient way to meet these requirements?

Options:

A.

Implement an AWS Lambda function to take a snapshot of the production DB cluster every 2 hours, and copy that snapshot to an Amazon S3 bucket in the DR Region. Restore the snapshot to an appropriately sized DB cluster in the DR Region.

B.

Add a cross-Region read replica in the DR Region with the same instance type as the current primary instance. If the read replica in the DR Region needs to be used for production, promote the read replica to become a standalone DB cluster.

C.

Create a smaller DB cluster in the DR Region. Configure an AWS Database Migration Service (AWS DMS) task with change data capture (CDC) enabled to replicate data from the current production DB cluster to the DB cluster in the DR Region.

D.

Create an Aurora global database that spans two Regions. Use AWS Database Migration Service (AWS DMS) to migrate the existing database to the new global database.

Question 73

A company is loading sensitive data into an Amazon Aurora MySQL database. To meet compliance requirements, the company needs to enable audit logging on the Aurora MySQL DB cluster to audit database activity. This logging will include events such as connections, disconnections, queries, and tables queried. The company also needs to publish the DB logs to Amazon CloudWatch to perform real-time data analysis.

Which solution meets these requirements?

Options:

A.

Modify the default option group parameters to enable Advanced Auditing. Restart the database for the changes to take effect.

B.

Create a custom DB cluster parameter group. Modify the parameters for Advanced Auditing. Modify the cluster to associate the new custom DB parameter group with the Aurora MySQL DB cluster.

C.

Take a snapshot of the database. Create a new DB instance, and enable custom auditing and logging to CloudWatch. Deactivate the DB instance that has no logging.

D.

Enable AWS CloudTrail for the DB instance. Create a filter that provides only connections, disconnections, queries, and tables queried.
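
For context, the Advanced Auditing setup in option B is driven by the server_audit_* parameters in a custom DB cluster parameter group, and the audit log is published to CloudWatch through the cluster's log export configuration. A minimal boto3 sketch with illustrative names:

    import boto3

    rds = boto3.client("rds")

    # Enable Advanced Auditing in the custom DB cluster parameter group.
    rds.modify_db_cluster_parameter_group(
        DBClusterParameterGroupName="custom-aurora-mysql",
        Parameters=[
            {"ParameterName": "server_audit_logging",
             "ParameterValue": "1", "ApplyMethod": "immediate"},
            {"ParameterName": "server_audit_events",
             "ParameterValue": "CONNECT,QUERY,TABLE",
             "ApplyMethod": "immediate"},
        ],
    )

    # Publish the audit log to CloudWatch Logs for real-time analysis.
    rds.modify_db_cluster(
        DBClusterIdentifier="prod-cluster",
        CloudwatchLogsExportConfiguration={"EnableLogTypes": ["audit"]},
    )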

Question 74

A company has a database fleet that includes an Amazon RDS for MySQL DB instance. During an audit, the company discovered that the data that is stored on the DB instance is unencrypted.

A database specialist must enable encryption for the DB instance. The database specialist also must encrypt all connections to the DB instance.

Which combination of actions should the database specialist take to meet these requirements? (Choose three.)

Options:

A.

In the RDS console, choose "Enable encryption" to encrypt the DB instance by using an AWS Key Management Service (AWS KMS) key.

B.

Encrypt the read replica of the unencrypted DB instance by using an AWS Key Management Service (AWS KMS) key. Fail over the read replica to the primary DB instance.

C.

Create a snapshot of the unencrypted DB instance. Encrypt the snapshot by using an AWS Key Management Service (AWS KMS) key. Restore the DB instance from the encrypted snapshot. Delete the original DB instance.

D.

Require SSL connections for applicable database user accounts.

E.

Use SSL/TLS from the application to encrypt a connection to the DB instance.

F.

Enable SSH encryption on the DB instance.
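
For context, the snapshot-copy-restore path in option C maps to three API calls, since RDS cannot encrypt an existing instance in place. A minimal boto3 sketch with illustrative identifiers:

    import boto3

    rds = boto3.client("rds")

    # 1. Snapshot the unencrypted instance.
    rds.create_db_snapshot(
        DBInstanceIdentifier="prod-mysql",
        DBSnapshotIdentifier="prod-mysql-unencrypted",
    )
    rds.get_waiter("db_snapshot_available").wait(
        DBSnapshotIdentifier="prod-mysql-unencrypted")

    # 2. Copy the snapshot with a KMS key; the copy is encrypted.
    rds.copy_db_snapshot(
        SourceDBSnapshotIdentifier="prod-mysql-unencrypted",
        TargetDBSnapshotIdentifier="prod-mysql-encrypted",
        KmsKeyId="alias/aws/rds",
    )
    rds.get_waiter("db_snapshot_available").wait(
        DBSnapshotIdentifier="prod-mysql-encrypted")

    # 3. Restore the encrypted snapshot as the replacement instance.
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier="prod-mysql-v2",
        DBSnapshotIdentifier="prod-mysql-encrypted",
    )

For the in-transit side (options D and E), individual MySQL accounts can be forced to use TLS with a statement such as ALTER USER 'app'@'%' REQUIRE SSL;.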

Question 75

A financial services company has an application deployed on AWS that uses an Amazon Aurora PostgreSQL DB cluster. A recent audit showed that no log files contained database administrator activity. A database specialist needs to recommend a solution to provide database access and activity logs. The solution should use the least amount of effort and have a minimal impact on performance.

Which solution should the database specialist recommend?

Options:

A.

Enable Aurora Database Activity Streams on the database in synchronous mode. Connect the Amazon Kinesis data stream to Kinesis Data Firehose. Set the Kinesis Data Firehose destination to an Amazon S3 bucket.

B.

Create an AWS CloudTrail trail in the Region where the database runs. Associate the database activity logs with the trail.

C.

Enable Aurora Database Activity Streams on the database in asynchronous mode. Connect the Amazon Kinesis data stream to Kinesis Data Firehose. Set the Firehose destination to an Amazon S3 bucket.

D.

Allow connections to the DB cluster through a bastion host only. Restrict database access to the bastion host and application servers. Push the bastion host logs to Amazon CloudWatch Logs using the CloudWatch Logs agent.
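
For context, a database activity stream is started with a single API call; asynchronous mode decouples database sessions from stream publication, which is why it has the smaller performance impact. A minimal boto3 sketch with illustrative ARNs:

    import boto3

    rds = boto3.client("rds")
    response = rds.start_activity_stream(
        ResourceArn="arn:aws:rds:us-east-1:123456789012:cluster:prod-aurora-pg",
        Mode="async",                    # minimal performance impact
        KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/EXAMPLE",
        ApplyImmediately=True,
    )
    # AWS creates the Kinesis data stream that feeds Kinesis Data Firehose.
    print(response["KinesisStreamName"])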

Question 76

A healthcare company is running an application on Amazon EC2 in a public subnet and using Amazon DocumentDB (with MongoDB compatibility) as the storage layer. An audit reveals that the traffic between the application and Amazon DocumentDB is not encrypted and that the DocumentDB cluster is not encrypted at rest. A database specialist must correct these issues and ensure that the data in transit and the data at rest are encrypted.

Which actions should the database specialist take to meet these requirements? (Select TWO.)

Options:

A.

Download the SSH RSA public key for Amazon DocumentDB. Update the application configuration to use the instance endpoint instead of the cluster endpoint and run queries over SSH.

B.

Download the SSL .pem public key for Amazon DocumentDB. Add the key to the application package and make sure the application is using the key while connecting to the cluster.

C.

Create a snapshot of the unencrypted cluster. Restore the unencrypted snapshot as a new cluster with the --storage-encrypted parameter set to true. Update the application to point to the new cluster.

D.

Create an Amazon DocumentDB VPC endpoint to prevent the traffic from going to the Amazon DocumentDB public endpoint. Set a VPC endpoint policy to allow only the application instance's security group to connect.

E.

Activate encryption at rest using the modify-db-cluster command with the --storage-encrypted parameter set to true. Set the security group of the cluster to allow only the application instance's security group to connect.
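
For context, the TLS connection in option B looks roughly like the following pymongo sketch; the endpoint and credentials are illustrative, and global-bundle.pem is assumed to be the downloaded Amazon DocumentDB CA certificate bundle:

    from pymongo import MongoClient

    client = MongoClient(
        "mongodb://dbuser:dbpass@my-docdb-cluster.cluster-xxxx.us-east-1."
        "docdb.amazonaws.com:27017/?replicaSet=rs0"
        "&readPreference=secondaryPreferred",
        tls=True,
        tlsCAFile="global-bundle.pem",   # verifies the cluster's certificate
    )
    print(client.admin.command("ping"))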

Question 77

A Database Specialist is creating a new Amazon Neptune DB cluster, and is attempting to load data from Amazon S3 into the Neptune DB cluster using the Neptune bulk loader API. The Database Specialist receives the following error:

“Unable to connect to s3 endpoint. Provided source = s3://mybucket/graphdata/ and region = us-east-1. Please verify your S3 configuration.”

Which combination of actions should the Database Specialist take to troubleshoot the problem? (Choose two.)

Options:

A.

Check that Amazon S3 has an IAM role granting read access to Neptune

B.

Check that an Amazon S3 VPC endpoint exists

C.

Check that a Neptune VPC endpoint exists

D.

Check that Amazon EC2 has an IAM role granting read access to Amazon S3

E.

Check that Neptune has an IAM role granting read access to Amazon S3
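
For context, the bulk loader is invoked over HTTP from inside the cluster's VPC, and the request names the IAM role that Neptune assumes to read from S3 (which is why options B and E matter). A minimal sketch with an illustrative endpoint, bucket, and role ARN:

    import requests

    resp = requests.post(
        "https://my-neptune.cluster-xxxx.us-east-1.neptune.amazonaws.com"
        ":8182/loader",
        json={
            "source": "s3://mybucket/graphdata/",
            "format": "csv",
            "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",
            "region": "us-east-1",
            "failOnError": "FALSE",
        },
    )
    print(resp.json())   # returns a loadId that can be polled for status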

Question 78

A company has branch offices in the United States and Singapore. The company has a three-tier web application that uses a shared database. The database runs on an Amazon RDS for MySQL DB instance that is hosted in the us-west-2 Region. The application has a distributed front end that is deployed in us-west-2 and in the ap-southeast-1 Region. The company uses this front end as a dashboard that provides statistics to sales managers in each branch office.

The dashboard loads more slowly in the Singapore branch office than in the United States branch office. The company needs a solution so that the dashboard loads consistently for users in each location.

Which solution will meet these requirements in the MOST operationally efficient way?

Options:

A.

Take a snapshot of the DB instance in us-west-2. Create a new DB instance in ap-southeast-2 from the snapshot. Reconfigure the ap-southeast-1 front-end dashboard to access the new DB instance.

B.

Create an RDS read replica in ap-southeast-1 from the primary DB instance in us-west-2. Reconfigure the ap-southeast-1 front-end dashboard to access the read replica.

C.

Create a new DB instance in ap-southeast-1. Use AWS Database Migration Service (AWS DMS) and change data capture (CDC) to update the new DB instance in ap-southeast-1. Reconfigure the ap-southeast-1 front-end dashboard to access the new DB instance.

D.

Create an RDS read replica in us-west-2, where the primary DB instance resides. Create a read replica in ap-southeast-1 from the read replica in us-west-2. Reconfigure the ap-southeast-1 front-end dashboard to access the read replica in ap-southeast-1.
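
For context, a cross-Region read replica such as the one in option B is created from the destination Region, referencing the source instance by ARN. A minimal boto3 sketch with illustrative identifiers:

    import boto3

    # The call is made in the DESTINATION Region (ap-southeast-1).
    rds_sg = boto3.client("rds", region_name="ap-southeast-1")
    rds_sg.create_db_instance_read_replica(
        DBInstanceIdentifier="dashboard-replica-sg",
        SourceDBInstanceIdentifier=(
            "arn:aws:rds:us-west-2:123456789012:db:dashboard-primary"),
        DBInstanceClass="db.r5.large",
        SourceRegion="us-west-2",   # lets boto3 presign the cross-Region request
    )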

Question 79

A company is running critical applications on AWS. Most of the application deployments use Amazon Aurora MySQL for the database stack. The company uses AWS CloudFormation to deploy the DB instances.

The company's application team recently implemented a CI/CD pipeline. A database engineer needs to integrate the database deployment CloudFormation stack with the newly built CI/CD platform. Updates to the CloudFormation stack must not update existing production database resources.

Which CloudFormation stack policy action should the database engineer implement to meet these requirements?

Options:

A.

Use a Deny statement for the Update:Modify action on the production database resources.

B.

Use a Deny statement for the action on the production database resources.

C.

Use a Deny statement for the Update:Delete action on the production database resources.

D.

Use a Deny statement for the Update:Replace action on the production database resources.
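
For context, a stack policy is a JSON document applied to the stack; Deny statements name Update:... actions and the logical IDs of protected resources. A minimal boto3 sketch with illustrative names (the Action value would be narrowed to whichever update actions must be blocked):

    import json

    import boto3

    policy = {
        "Statement": [
            {"Effect": "Deny",
             "Action": "Update:*",     # or a specific Update:... action
             "Principal": "*",
             "Resource": "LogicalResourceId/ProdAuroraCluster"},
            {"Effect": "Allow",
             "Action": "Update:*",
             "Principal": "*",
             "Resource": "*"},
        ]
    }
    boto3.client("cloudformation").set_stack_policy(
        StackName="prod-db-stack",
        StackPolicyBody=json.dumps(policy),
    )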

Question 80

A gaming firm recently purchased a popular iOS game that sees especially heavy use during the Christmas season. The business has decided to add a leaderboard to the game, powered by Amazon DynamoDB. The application's load is expected to increase significantly during the Christmas season.

Which solution satisfies these criteria at the lowest possible cost?

Options:

A.

DynamoDB Streams

B.

DynamoDB with DynamoDB Accelerator

C.

DynamoDB with on-demand capacity mode

D.

DynamoDB with provisioned capacity mode with Auto Scaling

Question 81

A gaming company is evaluating Amazon ElastiCache as a solution to manage player leaderboards. Millions of players around the world will compete in annual tournaments. The company wants to implement an architecture that is highly available. The company also wants to ensure that maintenance activities have minimal impact on the availability of the gaming platform.

Which combination of steps should the company take to meet these requirements? (Choose two.)

Options:

A.

Deploy an ElastiCache for Redis cluster with read replicas and Multi-AZ enabled.

B.

Deploy an ElastiCache for Memcached global datastore.

C.

Deploy a single-node ElastiCache for Redis cluster with automatic backups enabled. In the event of a failure, create a new cluster and restore data from the most recent backup.

D.

Use the default maintenance window to apply any required system changes and mandatory updates as soon as they are available.

E.

Choose a preferred maintenance window at the time of lowest usage to apply any required changes and mandatory updates.

Question 82

A company has an on-premises Oracle Real Application Clusters (RAC) database. The company wants to migrate the database to AWS and reduce licensing costs. The company's application team wants to store JSON payloads that expire after 28 hours. The company has development capacity if code changes are required.

Which solution meets these requirements?

Options:

A.

Use Amazon DynamoDB and leverage the Time to Live (TTL) feature to automatically expire the data.

B.

Use Amazon RDS for Oracle with Multi-AZ. Create an AWS Lambda function to purge the expired data. Schedule the Lambda function to run daily using Amazon EventBridge.

C.

Use Amazon DocumentDB with a read replica in a different Availability Zone. Use DocumentDB change streams to expire the data.

D.

Use Amazon Aurora PostgreSQL with Multi-AZ and leverage the Time to Live (TTL) feature to automatically expire the data.
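
For context, the TTL feature in option A expects an epoch-seconds Number attribute, and the 28-hour expiry is computed at write time. A minimal boto3 sketch with illustrative table and attribute names:

    import time

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Enable TTL on the chosen attribute (one-time setup).
    dynamodb.update_time_to_live(
        TableName="payloads",
        TimeToLiveSpecification={"Enabled": True,
                                 "AttributeName": "expires_at"},
    )

    # Each JSON payload is written with an expiry 28 hours in the future.
    dynamodb.put_item(
        TableName="payloads",
        Item={
            "pk": {"S": "order#1234"},
            "payload": {"S": '{"status": "pending"}'},
            "expires_at": {"N": str(int(time.time()) + 28 * 3600)},
        },
    )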

Question 83

A company is running its customer feedback application on Amazon Aurora MySQL. The company runs a report every day to extract customer feedback, and a team reads the feedback to determine if the customer comments are positive or negative. It sometimes takes days before the company can contact unhappy customers and take corrective measures. The company wants to use machine learning to automate this workflow.

Which solution meets this requirement with the LEAST amount of effort?

Options:

A.

Export the Aurora MySQL database to Amazon S3 by using AWS Database Migration Service (AWS DMS). Use Amazon Comprehend to run sentiment analysis on the exported files.

B.

Export the Aurora MySQL database to Amazon S3 by using AWS Database Migration Service (AWS DMS). Use Amazon SageMaker to run sentiment analysis on the exported files.

C.

Set up Aurora native integration with Amazon Comprehend. Use SQL functions to extract sentiment analysis.

D.

Set up Aurora native integration with Amazon SageMaker. Use SQL functions to extract sentiment analysis.

Question 84

A large financial services company uses Amazon ElastiCache for Redis for its new application that has a global user base. A database administrator must develop a caching solution that will be available across AWS Regions and include low-latency replication and failover capabilities for disaster recovery (DR). The company's security team requires the encryption of cross-Region data transfers.

Which solution meets these requirements with the LEAST amount of operational effort?

Options:

A.

Enable cluster mode in ElastiCache for Redis. Then create multiple clusters across Regions and replicate the cache data by using AWS Database Migration Service (AWS DMS). Promote a cluster in the failover Region to handle production traffic when DR is required.

B.

Create a global datastore in ElastiCache for Redis. Then create replica clusters in two other Regions. Promote one of the replica clusters as primary when DR is required.

C.

Disable cluster mode in ElastiCache for Redis. Then create multiple replication groups across Regions and replicate the cache data by using AWS Database Migration Service (AWS DMS). Promote a replication group in the failover Region to primary when DR is required.

D.

Create a snapshot of ElastiCache for Redis in the primary Region and copy it to the failover Region. Use the snapshot to restore the cluster from the failover Region when DR is required.
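
For context, a global datastore such as the one in option B is created from an existing primary replication group, after which a secondary cluster is attached from another Region; the service encrypts the cross-Region replication traffic. A minimal boto3 sketch with illustrative identifiers (the global datastore ID prefix is assigned by AWS):

    import boto3

    # In the primary Region: promote an existing replication group into a
    # global datastore.
    primary = boto3.client("elasticache", region_name="us-east-1")
    primary.create_global_replication_group(
        GlobalReplicationGroupIdSuffix="leaderboards",
        PrimaryReplicationGroupId="prod-redis",
    )

    # In a secondary Region: attach a replica replication group.
    secondary = boto3.client("elasticache", region_name="eu-west-1")
    secondary.create_replication_group(
        ReplicationGroupId="prod-redis-eu",
        ReplicationGroupDescription="DR replica",
        GlobalReplicationGroupId="ldgnf-leaderboards",  # prefix assigned by AWS
    )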

Question 85

A database specialist is working on an Amazon RDS for PostgreSQL DB instance that is experiencing application performance issues due to the addition of new workloads. The database has 5 TB of storage space with Provisioned IOPS. Amazon CloudWatch metrics show that the average disk queue depth is greater than 200 and that the disk I/O response time is significantly higher than usual.

What should the database specialist do to improve the performance of the application immediately?

Options:

A.

Increase the Provisioned IOPS rate on the storage.

B.

Increase the available storage space.

C.

Use General Purpose SSD (gp2) storage with burst credits.

D.

Create a read replica to offload Read IOPS from the DB instance.

Question 86

A company runs hundreds of Microsoft SQL Server databases on Windows servers in its on-premises data center. A database specialist needs to migrate these databases to Linux on AWS.

Which combination of steps should the database specialist take to meet this requirement? (Choose three.)

Options:

A.

Install AWS Systems Manager Agent on the on-premises servers. Use Systems Manager Run Command to install the Windows to Linux replatforming assistant for Microsoft SQL Server Databases.

B.

Use AWS Systems Manager Run Command to install and configure the AWS Schema Conversion Tool on the on-premises servers.

C.

On the Amazon EC2 console, launch EC2 instances and select a Linux AMI that includes SQL Server. Install and configure AWS Systems Manager Agent on the EC2 instances.

D.

On the AWS Management Console, set up Amazon RDS for SQL Server DB instances with Linux as the operating system. Install AWS Systems Manager Agent on the DB instances by using an options group.

E.

Open the Windows to Linux replatforming assistant tool. Enter configuration details of the source and destination databases. Start migration.

F.

On the AWS Management Console, set up AWS Database Migration Service (AWS DMS) by entering details of the source SQL Server database and the destination SQL Server database on AWS. Start migration.

Question 87

An AWS CloudFormation stack that included an Amazon RDS DB instance was mistakenly deleted, resulting in the loss of recent data. A Database Specialist must add settings to the CloudFormation template to minimize the possibility of future inadvertent instance data loss.

Which settings will satisfy this criterion? (Select three.)

Options:

A.

Set DeletionProtection to True

B.

Set MultiAZ to True

C.

Set TerminationProtection to True

D.

Set DeleteAutomatedBackups to False

E.

Set DeletionPolicy to Delete

F.

Set DeletionPolicy to Retain
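
For context, these settings attach to the RDS resource in the template as follows. An illustrative CloudFormation fragment (remaining instance properties omitted):

    Resources:
      ProdDatabase:
        Type: AWS::RDS::DBInstance
        DeletionPolicy: Retain             # keep the instance if the stack is deleted
        Properties:
          DeletionProtection: true         # block delete-db-instance API calls
          DeleteAutomatedBackups: false    # retain automated backups on deletion
          # ...remaining instance properties omitted for brevity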

Question 88

A company’s Security department established new requirements that state internal users must connect to an existing Amazon RDS for SQL Server DB instance using their corporate Active Directory (AD) credentials. A Database Specialist must make the modifications needed to fulfill this requirement.

Which combination of actions should the Database Specialist take? (Choose three.)

Options:

A.

Disable Transparent Data Encryption (TDE) on the RDS SQL Server DB instance.

B.

Modify the RDS SQL Server DB instance to use the directory for Windows authentication. Create appropriate new logins.

C.

Use the AWS Management Console to create an AWS Managed Microsoft AD. Create a trust relationship with the corporate AD.

D.

Stop the RDS SQL Server DB instance, modify it to use the directory for Windows authentication, and start it again. Create appropriate new logins.

E.

Use the AWS Management Console to create an AD Connector. Create a trust relationship with the corporate AD.

F.

Configure the AWS Managed Microsoft AD domain controller Security Group.

Question 89

A company wants to build a new invoicing service for its cloud-native application on AWS. The company has a small development team and wants to focus on service feature development and minimize operations and maintenance as much as possible. The company expects the service to handle billions of requests and millions of new records every day. The service feature requirements, including data access patterns, are well-defined. The service has an availability target of 99.99% with a millisecond-level latency requirement. The database for the service will be the system of record for invoicing data.

Which database solution meets these requirements at the LOWEST cost?

Options:

A.

Amazon Neptune

B.

Amazon Aurora PostgreSQL Serverless

C.

Amazon RDS for PostgreSQL

D.

Amazon DynamoDB

Question 90

A company is looking to move an on-premises IBM Db2 database running on AIX on an IBM POWER7 server. Due to escalating support and maintenance costs, the company is exploring the option of moving the workload to an Amazon Aurora PostgreSQL DB cluster.

What is the quickest way for the company to gather data on the migration compatibility?

Options:

A.

Perform a logical dump from the Db2 database and restore it to an Aurora DB cluster. Identify the gaps and compatibility of the objects migrated by comparing row counts from source and target tables.

B.

Run AWS DMS from the Db2 database to an Aurora DB cluster. Identify the gaps and compatibility of the objects migrated by comparing the row counts from source and target tables.

C.

Run native PostgreSQL logical replication from the Db2 database to an Aurora DB cluster to evaluate the migration compatibility.

D.

Run the AWS Schema Conversion Tool (AWS SCT) from the Db2 database to an Aurora DB cluster. Create a migration assessment report to evaluate the migration compatibility.

Question 91

In North America, a business launched a mobile game that quickly grew to 10 million daily active players. The game's backend is hosted on AWS and makes considerable use of a TTL-configured Amazon DynamoDB table.

When an item is added or changed, its TTL is set to the current epoch time plus 600 seconds. The game logic relies on the purging of expired data to compute reward points correctly. At times, items that are many hours past their TTL expiration are still returned by reads from the table.

How should a database administrator resolve this issue?

Options:

A.

Use a client library that supports the TTL functionality for DynamoDB.

B.

Include a query filter expression to ignore items with an expired TTL.

C.

Set the ConsistentRead parameter to true when querying the table.

D.

Create a local secondary index on the TTL attribute.
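
For context, TTL deletion is asynchronous and best-effort, so reads that must never see expired items filter on the TTL attribute themselves, as in option B. A minimal boto3 sketch with illustrative table, key, and attribute names:

    import time

    import boto3
    from boto3.dynamodb.conditions import Attr, Key

    table = boto3.resource("dynamodb").Table("game-state")
    now = int(time.time())

    response = table.query(
        KeyConditionExpression=Key("player_id").eq("player#42"),
        # Drop items that have expired but have not yet been purged by TTL.
        FilterExpression=Attr("ttl").gt(now),
    )
    print(response["Items"])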

Question 92

A database specialist needs to move an Amazon RDS DB instance from one AWS account to another AWS account.

Which solution will meet this requirement with the LEAST operational effort?

Options:

A.

Use AWS Database Migration Service (AWS DMS) to migrate the DB instance from the source AWS account to the destination AWS account.

B.

Create a DB snapshot of the DB instance. Share the snapshot with the destination AWS account. Create a new DB instance by restoring the snapshot in the destination AWS account.

C.

Create a Multi-AZ deployment for the DB instance. Create a read replica for the DB instance in the source AWS account. Use the read replica to replicate the data into the DB instance in the destination AWS account.

D.

Use AWS DataSync to back up the DB instance in the source AWS account. Use AWS Resource Access Manager (AWS RAM) to restore the backup in the destination AWS account.
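
For context, the snapshot-sharing flow in option B maps to a handful of API calls. A minimal boto3 sketch with illustrative identifiers and account IDs (an unencrypted manual snapshot can be shared directly; an encrypted snapshot additionally requires sharing its KMS key):

    import boto3

    # In the source account:
    src = boto3.client("rds")
    src.create_db_snapshot(
        DBInstanceIdentifier="prod-db",
        DBSnapshotIdentifier="prod-db-share",
    )
    src.get_waiter("db_snapshot_available").wait(
        DBSnapshotIdentifier="prod-db-share")
    src.modify_db_snapshot_attribute(
        DBSnapshotIdentifier="prod-db-share",
        AttributeName="restore",
        ValuesToAdd=["222233334444"],    # destination account ID
    )

    # In the destination account (credentials for that account assumed):
    dst = boto3.client("rds")
    dst.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier="prod-db-copy",
        DBSnapshotIdentifier=(
            "arn:aws:rds:us-east-1:111122223333:snapshot:prod-db-share"),
    )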

Question 93

A company developed an AWS CloudFormation template used to create all new Amazon DynamoDB tables in its AWS account. The template configures provisioned throughput capacity using hard-coded values. The company wants to change the template so that the tables it creates in the future have independently configurable read and write capacity units assigned.

Which solution will enable this change?

Options:

A.

Add values for the rcuCount and wcuCount parameters to the Mappings section of the template. Configure DynamoDB to provision throughput capacity using the stack’s mappings.

B.

Add values for two Number parameters, rcuCount and wcuCount, to the template. Replace the hard-coded values with calls to the Ref intrinsic function, referencing the new parameters.

C.

Add values for the rcuCount and wcuCount parameters as outputs of the template. Configure DynamoDB to provision throughput capacity using the stack outputs.

D.

Add values for the rcuCount and wcuCount parameters to the Mappings section of the template. Replace the hard-coded values with calls to the Ref intrinsic function, referencing the new parameters.
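
For context, option B's change looks roughly like the following illustrative CloudFormation fragment, where the two Number parameters replace the hard-coded throughput values via the Ref intrinsic function:

    Parameters:
      rcuCount:
        Type: Number
        Default: 5
      wcuCount:
        Type: Number
        Default: 5

    Resources:
      MyTable:
        Type: AWS::DynamoDB::Table
        Properties:
          AttributeDefinitions:
            - AttributeName: pk
              AttributeType: S
          KeySchema:
            - AttributeName: pk
              KeyType: HASH
          ProvisionedThroughput:
            ReadCapacityUnits: !Ref rcuCount     # replaces the hard-coded value
            WriteCapacityUnits: !Ref wcuCount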

Question 94

A news portal is looking for a data store to store 120 GB of metadata about its posts and comments. The posts and comments are not frequently looked up or updated. However, occasional lookups are expected to be served with single-digit millisecond latency on average.

What is the MOST cost-effective solution?

Options:

A.

Use Amazon DynamoDB with on-demand capacity mode. Purchase reserved capacity.

B.

Use Amazon ElastiCache for Redis for data storage. Turn off cluster mode.

C.

Use Amazon S3 Standard-Infrequent Access (S3 Standard-IA) for data storage and use Amazon Athena to query the data.

D.

Use Amazon DynamoDB with on-demand capacity mode. Switch the table class to DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA).

Question 95

A business uses Amazon DynamoDB global tables to power an online game that is played by gamers around the globe. As the game grew in popularity, the number of requests to DynamoDB rose substantially. Recently, gamers have complained that the game state is inconsistent between countries. A database professional notices that the ReplicationLatency metric for many replica tables is abnormally high.

Which strategy will resolve the issue?

Options:

A.

Configure all replica tables to use DynamoDB auto scaling.

B.

Configure a DynamoDB Accelerator (DAX) cluster on each of the replicas.

C.

Configure the primary table to use DynamoDB auto scaling and the replica tables to use manually provisioned capacity.

D.

Configure the table-level write throughput limit service quota to a higher value.

Question 96

A database specialist has been tasked by an ecommerce firm with designing a reporting dashboard that visualizes critical business KPIs derived from the company's primary production database running on Amazon Aurora. The dashboard should be able to read data within 100 milliseconds of an update.

The database specialist must audit the Aurora DB cluster's current configuration and provide a cost-effective solution. The solution must support the unpredictable read demand generated by the reporting dashboard without impairing the DB cluster's write availability and performance.

Which solution satisfies these criteria?

Options:

A.

Turn on the serverless option in the DB cluster so it can automatically scale based on demand.

B.

Provision a clone of the existing DB cluster for the new Application team.

C.

Create a separate DB cluster for the new workload, refresh from the source DB cluster, and set up ongoing replication using AWS DMS change data capture (CDC).

D.

Add an automatic scaling policy to the DB cluster to add Aurora Replicas to the cluster based on CPU consumption.
