A company wants to migrate its existing on-premises Oracle database to Amazon Aurora PostgreSQL. The
migration must be completed with minimal downtime using AWS DMS. A Database Specialist must validate
that the data was migrated accurately from the source to the target before the cutover. The migration must have
minimal impact on the performance of the source database.
Which approach will MOST effectively meet these requirements?
A. Use the AWS Schema Conversion Tool (AWS SCT) to convert the source Oracle database schemas to the target Aurora DB cluster. Verify the data types of the columns.
B. Use the table metrics of the AWS DMS task created for migrating the data to verify the statistics for the tables being migrated and to verify that the data definition language (DDL) statements are completed.
C. Enable the AWS Schema Conversion Tool (AWS SCT) premigration validation and review the premigration checklist to make sure there are no issues with the conversion.
D. Enable AWS DMS data validation on the task so the AWS DMS task compares the source and target records, and reports any mismatches.

Answer: D. Enable AWS DMS data validation on the task so the AWS DMS task compares the source and target records, and reports any mismatches.
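For context, DMS data validation is enabled through the replication task settings JSON. The following is a minimal boto3 sketch, not a definitive implementation: every ARN, identifier, and table-mapping rule is a placeholder, and the validation settings would be tuned (for example, a low ThreadCount) to limit load on the source.

    import json
    import boto3

    dms = boto3.client("dms")

    # Task settings enabling row-level comparison of source and target.
    # Mismatches are reported in the awsdms_validation_failures_v1 table
    # on the target and in the task's table statistics.
    task_settings = {
        "ValidationSettings": {
            "EnableValidation": True,
            "ThreadCount": 3,          # keep low to minimize source impact
            "FailureMaxCount": 10000,
        }
    }

    dms.create_replication_task(
        ReplicationTaskIdentifier="oracle-to-aurora",               # placeholder
        SourceEndpointArn="arn:aws:dms:region:acct:endpoint:src",   # placeholder
        TargetEndpointArn="arn:aws:dms:region:acct:endpoint:tgt",   # placeholder
        ReplicationInstanceArn="arn:aws:dms:region:acct:rep:inst",  # placeholder
        MigrationType="full-load-and-cdc",        # minimal-downtime migration
        TableMappings=json.dumps({"rules": []}),  # selection rules omitted
        ReplicationTaskSettings=json.dumps(task_settings),
    )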
A company runs online transaction processing (OLTP) workloads on an Amazon RDS for PostgreSQL
Multi-AZ DB instance. Tests were run on the database after work hours, which generated additional database
logs. The free storage of the RDS DB instance is low due to these additional logs.
What should the company do to address this space constraint issue?
A. Log in to the host and run the rm $PGDATA/pg_logs/* command.
B. Modify the rds.log_retention_period parameter to 1440 and wait up to 24 hours for database logs to be deleted.
C. Create a ticket with AWS Support to have the logs deleted.
D. Run the SELECT rds_rotate_error_log() stored procedure to rotate the logs.

Answer: B. Modify the rds.log_retention_period parameter to 1440 and wait up to 24 hours for database logs to be deleted.
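As a rough sketch of how that parameter change might be applied with boto3 (the parameter group name is a placeholder; rds.log_retention_period is a dynamic parameter, so it takes effect without a reboot):

    import boto3

    rds = boto3.client("rds")

    # Retain PostgreSQL logs for 1440 minutes (24 hours); RDS then deletes
    # older log files automatically, freeing local storage.
    rds.modify_db_parameter_group(
        DBParameterGroupName="my-postgres-params",  # placeholder
        Parameters=[
            {
                "ParameterName": "rds.log_retention_period",
                "ParameterValue": "1440",
                "ApplyMethod": "immediate",  # dynamic parameter
            }
        ],
    )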
A manufacturing company’s website uses an Amazon Aurora PostgreSQL DB cluster.
Which configurations will result in the LEAST application downtime during a failover? (Choose three.)
A. Use the provided read and write Aurora endpoints to establish a connection to the Aurora DB cluster.
B. Create an Amazon CloudWatch alert triggering a restore in another Availability Zone when the primary Aurora DB cluster is unreachable.
C. Edit and enable Aurora DB cluster cache management in parameter groups.
D. Set TCP keepalive parameters to a high value.
E. Set JDBC connection string timeout variables to a low value.
F. Set Java DNS caching timeouts to a low value.
Answer: A, C, E. Use the provided Aurora endpoints to connect, enable Aurora DB cluster cache management in parameter groups, and set JDBC connection string timeout variables to a low value so stalled connections fail fast and the driver reconnects to the new writer. (A CloudWatch-triggered restore would lengthen downtime because Aurora already fails over automatically, and high TCP keepalive or DNS caching values would delay detection of the new primary.)
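To make options A and E concrete on the client side, here is a hedged Python sketch using psycopg2 in place of a JDBC driver (the endpoint, database name, and credentials are placeholders): connect through the cluster writer endpoint, which Aurora repoints in DNS during a failover, and keep timeouts low so dead connections fail fast.

    import psycopg2

    # Cluster (writer) endpoint, not an instance endpoint: Aurora updates
    # its DNS record to point at the new writer after a failover.
    WRITER_ENDPOINT = "mycluster.cluster-xyz.us-east-1.rds.amazonaws.com"  # placeholder

    conn = psycopg2.connect(
        host=WRITER_ENDPOINT,
        dbname="appdb",           # placeholder
        user="app_user",          # placeholder
        password="app_password",  # placeholder
        connect_timeout=3,        # fail fast rather than hang on a dead writer
        # libpq keepalive settings: detect half-open connections in seconds
        keepalives=1,
        keepalives_idle=5,
        keepalives_interval=2,
        keepalives_count=2,
    )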
A company is about to launch a new product, and test databases must be re-created from production data.
The company runs its production databases on an Amazon Aurora MySQL DB cluster. A Database Specialist
needs to deploy a solution to create these test databases as quickly as possible with the least amount of
administrative effort.
What should the Database Specialist do to meet these requirements?
A. Restore a snapshot from the production cluster into test clusters.
B. Create logical dumps of the production cluster and restore them into new test clusters.
C. Use database cloning to create clones of the production cluster.
D. Add an additional read replica to the production cluster and use that node for testing.
Answer: C. Use database cloning to create clones of the production cluster. (A read replica remains read-only and attached to production, so it cannot serve as an independent test database.)
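For reference, an Aurora clone is created through the point-in-time restore API with a copy-on-write restore type, so it is available in minutes regardless of data volume. A minimal boto3 sketch with placeholder identifiers follows; note that copy-on-write clones are taken at the latest restorable time, and the new cluster needs at least one instance before it can be queried.

    import boto3

    rds = boto3.client("rds")

    # Clone production using copy-on-write: storage pages are shared with
    # the source until either cluster modifies them.
    rds.restore_db_cluster_to_point_in_time(
        DBClusterIdentifier="prod-test-clone",          # placeholder
        SourceDBClusterIdentifier="prod-aurora-mysql",  # placeholder
        RestoreType="copy-on-write",
        UseLatestRestorableTime=True,
    )

    # The clone has no compute until an instance is added.
    rds.create_db_instance(
        DBInstanceIdentifier="prod-test-clone-1",  # placeholder
        DBClusterIdentifier="prod-test-clone",
        DBInstanceClass="db.r6g.large",            # placeholder class
        Engine="aurora-mysql",
    )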
The Development team recently executed a database script containing several data definition language (DDL)
and data manipulation language (DML) statements on an Amazon Aurora MySQL DB cluster. The release
accidentally deleted thousands of rows from an important table and broke some application functionality. This
was discovered 4 hours after the release. Upon investigation, a Database Specialist tracked the issue to a
DELETE command in the script with an incorrect WHERE clause filtering the wrong set of rows.
The Aurora DB cluster has Backtrack enabled with an 8-hour backtrack window. The Database Administrator
also took a manual snapshot of the DB cluster before the release started. The database needs to be returned to
the correct state as quickly as possible to resume full application functionality. Data loss must be minimal.
How can the Database Specialist accomplish this?
A. Quickly rewind the DB cluster to a point in time before the release using Backtrack.
B. Perform a point-in-time recovery (PITR) of the DB cluster to a time before the release and copy the deleted rows from the restored database to the original database.
C. Restore the DB cluster using the manual backup snapshot created before the release and change the application configuration settings to point to the new DB cluster.
D. Create a clone of the DB cluster with Backtrack enabled. Rewind the cloned cluster to a point in time before the release. Copy deleted rows from the clone to the original database.
Answer: D. Create a clone of the DB cluster with Backtrack enabled. Rewind the cloned cluster to a point in time before the release. Copy deleted rows from the clone to the original database. (Rewinding the production cluster itself would also discard the 4 hours of valid changes made since the release.)
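As a sketch of the mechanics under stated assumptions (all identifiers and the timestamp are placeholders, and the clone is given a backtrack window explicitly so that it can be rewound): create the clone, backtrack it to just before the release, then copy the deleted rows back.

    from datetime import datetime, timezone

    import boto3

    rds = boto3.client("rds")

    # 1. Clone the cluster with copy-on-write so production keeps running.
    rds.restore_db_cluster_to_point_in_time(
        DBClusterIdentifier="prod-repair-clone",        # placeholder
        SourceDBClusterIdentifier="prod-aurora-mysql",  # placeholder
        RestoreType="copy-on-write",
        UseLatestRestorableTime=True,
        BacktrackWindow=8 * 3600,  # match the source's 8-hour window
    )

    # 2. Rewind the clone to just before the bad release ran.
    rds.backtrack_db_cluster(
        DBClusterIdentifier="prod-repair-clone",
        BacktrackTo=datetime(2024, 5, 1, 20, 0, tzinfo=timezone.utc),  # placeholder
        UseEarliestTimeOnPointInTimeUnavailable=True,
    )

    # 3. Copy the deleted rows from the clone back into production, for
    #    example with mysqldump of the affected table or INSERT ... SELECT.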