[February-2023 Update] Latest DBS-C01 exam questions from Lead4Pass DBS-C01 dumps

Lead4Pass shares the latest valid DBS-C01 dumps that meet the requirements for passing the AWS Certified Database – Specialty (DBS-C01) certification exam!
Lead4Pass DBS-C01 dumps provide two learning solutions, PDF and VCE, to help candidates experience real simulated exam scenarios! Now! Get the latest Lead4Pass DBS-C01 dumps with PDF and VCE:
https://www.leads4pass.com/aws-certified-database-specialty.html (321 Q&A)

From: Lead4Pass
Exam name: AWS Certified Database – Specialty (DBS-C01)
Free share: Q14-Q28
Last updated: DBS-C01 dumps (Q1-Q13)

New Q14:

A company is using Amazon RDS for PostgreSQL. The Security team wants all database connection requests to be logged and retained for 180 days. The RDS for PostgreSQL DB instance is currently using the default parameter group. A Database Specialist has identified that setting the log_connections parameter to 1 will enable connection logging.

Which combination of steps should the Database Specialist take to meet the logging and retention requirements? (Choose two.)

A. Update the log_connections parameter in the default parameter group

B. Create a custom parameter group, update the log_connections parameter, and associate the parameter with the DB instance

C. Enable publishing of database engine logs to Amazon CloudWatch Logs and set the event expiration to 180 days

D. Enable publishing of database engine logs to an Amazon S3 bucket and set the lifecycle policy to 180 days

E. Connect to the RDS PostgreSQL host and update the log_connections parameter in the Postgresql.conf file

Correct Answer: BC

The default parameter group cannot be modified (ruling out A), and RDS does not allow host-level access to the postgresql.conf file (ruling out E). A custom parameter group enables log_connections, and publishing the engine logs to Amazon CloudWatch Logs with a 180-day retention period on the log group meets the retention requirement.

Reference: https://aws.amazon.com/blogs/database/working-with-rds-and-aurora-postgresql-logs-part-1/
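As an illustrative sketch (not from the exam source), the steps that options B and C describe map to a handful of RDS API parameters. The names "custom-pg" and "mydb" below are hypothetical placeholders:

```python
# Sketch of the request parameters for the custom-parameter-group approach.
# These dicts match the shapes of boto3's rds.create_db_parameter_group,
# rds.modify_db_parameter_group, and rds.modify_db_instance calls.

def custom_parameter_group_requests(family="postgres13"):
    """Create a custom parameter group and turn on connection logging."""
    create = {
        "DBParameterGroupName": "custom-pg",         # hypothetical name
        "DBParameterGroupFamily": family,            # must match the engine version
        "Description": "Enables connection logging",
    }
    modify = {
        "DBParameterGroupName": "custom-pg",
        "Parameters": [{
            "ParameterName": "log_connections",
            "ParameterValue": "1",
            "ApplyMethod": "immediate",              # log_connections is dynamic
        }],
    }
    return create, modify

def cloudwatch_export_request():
    """Publish the postgresql log to CloudWatch Logs; the 180-day retention
    is then set on the CloudWatch log group itself."""
    return {
        "DBInstanceIdentifier": "mydb",              # hypothetical identifier
        "CloudwatchLogsExportConfiguration": {"EnableLogTypes": ["postgresql"]},
    }
```

The parameter group must still be associated with the DB instance, and the log group's retention is set separately with CloudWatch Logs (put_retention_policy with retentionInDays=180).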

New Q15:

A company runs hundreds of Microsoft SQL Server databases on Windows servers in its on-premises data center. A database specialist needs to migrate these databases to Linux on AWS.

Which combination of steps should the database specialist take to meet this requirement? (Choose three.)

A. Install AWS Systems Manager Agent on the on-premises servers. Use Systems Manager Run Command to install the Windows to Linux re-platforming assistant for Microsoft SQL Server Databases.

B. Use AWS Systems Manager Run Command to install and configure the AWS Schema Conversion Tool on the on-premises servers.

C. On the Amazon EC2 console, launch EC2 instances and select a Linux AMI that includes SQL Server. Install and configure AWS Systems Manager Agent on the EC2 instances.

D. On the AWS Management Console, set up Amazon RDS for SQL Server DB instances with Linux as the operating system. Install AWS Systems Manager Agent on the DB instances by using an option group.

E. Open the Windows to Linux replatforming assistant tool. Enter configuration details of the source and destination databases. Start migration.

F. On the AWS Management Console set up AWS Database Migration Service (AWS DMS) by entering details of the source SQL Server database and the destination SQL Server database on AWS. Start migration.

Correct Answer: ACE

https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/replatform-sql-server.html
https://d1.awsstatic.com/events/reinvent/2019/REPEAT_1_Leverage_automation_to_re-platform_SQL_Server_to_Linux_WIN322-R1.pdf

New Q16:

A company wants to improve its e-commerce website on AWS. A database specialist decided to add Amazon ElastiCache for Redis in the implementation stack to ease the workload of the database and shorten the website response times.

The database specialist must also ensure the e-commerce website is highly available within the company's AWS Region.

How should the database specialist deploy ElastiCache to meet this requirement?

A. Launch an ElastiCache for Redis cluster using the AWS CLI with the cluster-enabled switch.

B. Launch an ElastiCache for Redis cluster and select read replicas in different Availability Zones.

C. Launch two ElastiCache for Redis clusters in two different Availability Zones. Configure Redis streams to replicate the cache from the primary cluster to the other.

D. Launch an ElastiCache cluster in the primary Availability Zone and restore the cluster's snapshot to a different Availability Zone during disaster recovery.

Correct Answer: B

https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/AutoFailover.html

You can enable Multi-AZ only on Redis (cluster mode disabled) clusters that have at least one available read replica. Clusters without read replicas do not provide high availability or fault tolerance.
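As a hypothetical sketch of what option B looks like in practice, these are the request parameters for creating a Redis (cluster mode disabled) replication group with automatic failover across Availability Zones; the group name, node type, and AZs are placeholders:

```python
# Sketch of boto3's elasticache.create_replication_group parameters for a
# Multi-AZ, highly available Redis cache (cluster mode disabled).

def multi_az_replication_group():
    """One primary plus two read replicas spread across three AZs, with
    automatic failover so a replica is promoted if the primary fails."""
    return {
        "ReplicationGroupId": "webcache",                # hypothetical name
        "ReplicationGroupDescription": "HA Redis cache for the website",
        "Engine": "redis",
        "CacheNodeType": "cache.r6g.large",              # placeholder node type
        "NumCacheClusters": 3,                           # 1 primary + 2 replicas
        "AutomaticFailoverEnabled": True,                # required for Multi-AZ
        "MultiAZEnabled": True,
        "PreferredCacheClusterAZs": ["us-east-1a", "us-east-1b", "us-east-1c"],
    }
```

Multi-AZ requires AutomaticFailoverEnabled and at least one replica, which is exactly why option B (read replicas in different AZs) is the answer.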

New Q17:

A company hosts a 2 TB Oracle database in its on-premises data center. A database specialist is migrating the database from on-premises to an Amazon Aurora PostgreSQL database on AWS.

The database specialist identifies a compatibility problem: Oracle stores metadata in its data dictionary in uppercase, but PostgreSQL stores the metadata in lowercase. The database specialist must resolve this problem to complete the migration.

What is the MOST operationally efficient solution that meets these requirements?

A. Override the default uppercase format of Oracle schema by encasing object names in quotation marks during creation.

B. Use AWS Database Migration Service (AWS DMS) mapping rules with rule-action as convert-lowercase.

C. Use the AWS Schema Conversion Tool conversion agent to convert the metadata from uppercase to lowercase.

D. Use an AWS Glue job that is attached to an AWS Database Migration Service (AWS DMS) replication task to convert the metadata from uppercase to lowercase.

Correct Answer: B

https://aws.amazon.com/premiumsupport/knowledge-center/dms-mapping-oracle-postgresql/
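A minimal sketch of the AWS DMS table-mapping JSON that option B refers to, with a convert-lowercase transformation rule; the rule ids and names are arbitrary, and a full mapping would typically repeat the transformation for table and column rule targets as well:

```python
import json

def table_mapping_with_lowercase():
    """Build a DMS table-mapping document: select every table, then apply a
    convert-lowercase transformation to schema names."""
    selection = {
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "select-all",
        "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include",
    }
    transformation = {
        "rule-type": "transformation",
        "rule-id": "2",
        "rule-name": "to-lowercase",
        "rule-action": "convert-lowercase",
        "rule-target": "schema",                 # repeat for "table" and "column"
        "object-locator": {"schema-name": "%"},
    }
    return json.dumps({"rules": [selection, transformation]}, indent=2)
```

The resulting JSON is passed as the TableMappings argument when creating the DMS replication task.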

New Q18:

The Security team for a finance company was notified of an internal security breach that happened 3 weeks ago. A Database Specialist must start producing audit logs out of the production Amazon Aurora PostgreSQL cluster for the Security team to use for monitoring and alerting. The Security team is required to perform real-time alerting and monitoring outside the Aurora DB cluster and wants to have the cluster push encrypted files to the chosen solution.

Which approach will meet these requirements?

A. Use pg_audit to generate audit logs and send the logs to the Security team.

B. Use AWS CloudTrail to audit the DB cluster and the Security team will get data from Amazon S3.

C. Set up database activity streams and connect the data stream from Amazon Kinesis to consumer applications.

D. Turn on verbose logging and set up a schedule for the logs to be dumped out for the Security team.

Correct Answer: C

https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-aurora-with-postgresql-compatibility-supports-database-activity-streams/
"Database Activity Streams for Amazon Aurora with PostgreSQL compatibility provides a near real-time data stream of the database activity in your relational database to help you monitor activity. When integrated with third-party database activity monitoring tools, Database Activity Streams can monitor and audit database activity to provide safeguards for your database and help meet compliance and regulatory requirements."
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Overview.LoggingAndMonitoring.html

New Q19:

An Amazon RDS EBS-optimized instance with Provisioned IOPS (PIOPS) storage is using less than half of its allocated IOPS over the course of several hours under constant load. The RDS instance exhibits multi-second read and write latency, and uses all of its maximum bandwidth for read throughput, yet the instance uses less than half of its CPU and RAM resources.

What should a Database Specialist do in this situation to increase performance and return latency to sub-second levels?

A. Increase the size of the DB instance storage

B. Change the underlying EBS storage type to General Purpose SSD (gp2)

C. Disable EBS optimization on the DB instance

D. Change the DB instance to an instance class with a higher maximum bandwidth

Correct Answer: D

https://docs.amazonaws.cn/en_us/AmazonRDS/latest/UserGuide/CHAP_BestPractices.html

New Q20:

A Database Specialist is migrating a 2 TB Amazon RDS for Oracle DB instance to an RDS for PostgreSQL DB instance using AWS DMS. The source RDS for Oracle DB instance is in a VPC in the us-east-1 Region. The target RDS for PostgreSQL DB instance is in a VPC in the us-west-2 Region.

Where should the AWS DMS replication instance be placed for the MOST optimal performance?

A. In the same Region and VPC of the source DB instance

B. In the same Region and VPC as the target DB instance

C. In the same VPC and Availability Zone as the target DB instance

D. In the same VPC and Availability Zone as the source DB instance

Correct Answer: C

https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.VPC.html#CHAP_ReplicationInstance.VPC.Configurations.ScenarioVPCPeer
All of the configurations described at the URL above place the replication instance in the target VPC's Region, subnet, and Availability Zone.
https://docs.aws.amazon.com/dms/latest/sbs/CHAP_SQLServer2Aurora.Steps.CreateReplicationInstance.html

New Q21:

A financial services company has an application deployed on AWS that uses an Amazon Aurora PostgreSQL DB cluster. A recent audit showed that no log files contained database administrator activity. A database specialist needs to recommend a solution to provide database access and activity logs. The solution should use the least amount of effort and have a minimal impact on performance.

Which solution should the database specialist recommend?

A. Enable Aurora Database Activity Streams on the database in synchronous mode. Connect the Amazon Kinesis data stream to the Kinesis Data Firehose. Set the Kinesis Data Firehose destination to an Amazon S3 bucket.

B. Create an AWS CloudTrail trail in the Region where the database runs. Associate the database activity logs with the trail.

C. Enable Aurora Database Activity Streams on the database in asynchronous mode. Connect the Amazon Kinesis data stream to the Kinesis Data Firehose. Set the Firehose destination to an Amazon S3 bucket.

D. Allow connections to the DB cluster through a bastion host only. Restrict database access to the bastion host and application servers. Push the bastion host logs to Amazon CloudWatch Logs using the CloudWatch Logs agent.

Correct Answer: C

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/DBActivityStreams.Overview.html
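As a hedged sketch of option C, these are the request parameters for starting an Aurora database activity stream in asynchronous mode (the ARN and key values are placeholders); they match the shape of boto3's rds.start_activity_stream call:

```python
# Sketch of the parameters for rds.start_activity_stream. Asynchronous mode
# lets the database proceed without waiting for the audit record to be
# durably recorded, minimizing the performance impact.

def activity_stream_request(cluster_arn, kms_key_id):
    """Start an activity stream on an Aurora DB cluster."""
    return {
        "ResourceArn": cluster_arn,    # ARN of the Aurora DB cluster
        "Mode": "async",               # "sync" favors completeness over latency
        "KmsKeyId": kms_key_id,        # activity streams are always KMS-encrypted
        "ApplyImmediately": True,
    }

# Hypothetical usage: the stream lands in Amazon Kinesis, from which a
# Kinesis Data Firehose delivery stream can persist records to S3.
req = activity_stream_request(
    "arn:aws:rds:us-east-1:123456789012:cluster:example-cluster",
    "alias/example-key",
)
```

The records are encrypted with the supplied KMS key before they reach the Kinesis data stream, which satisfies the "push encrypted files" requirement in Q18 as well.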

New Q22:

A corporation is transitioning from an IBM Informix database to an Amazon RDS for SQL Server Multi-AZ deployment with Always On Availability Groups (AGs). SQL Server Agent jobs are scheduled to run at 5-minute intervals on the Always On AG listener to synchronize data between the Informix and SQL Server databases. After a successful failover to the secondary node with minimal delay, users see stale data for hours.

How can a database professional ensure that users see the most current data after a failover?

A. Set TTL to less than 30 seconds for cached DNS values on the Always On AG listener.

B. Break up large transactions into multiple smaller transactions that complete in less than 5 minutes.

C. Set the databases on the secondary node to read-only mode.

D. Create the SQL Server Agent jobs on the secondary node from a script when the secondary node takes over after a failure.

Correct Answer: D

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_SQLServerMultiAZ.html
If you have SQL Server Agent jobs, recreate them on the secondary. You do so because these jobs are stored in the msdb database, and you can't replicate this database by using Database Mirroring (DBM) or Always On Availability Groups (AGs). Create the jobs first in the original primary, then fail over, and create the same jobs in the new primary.

New Q23:

A company uses Microsoft SQL Server on Amazon RDS in a Multi-AZ deployment as the database engine for its application. The company was recently acquired by another company. A database specialist must rename the database to follow a new naming standard.

Which combination of steps should the database specialist take to rename the database? (Choose two.)

A. Turn off automatic snapshots for the DB instance. Rename the database with the rdsadmin.dbo.rds_modify_db_name stored procedure. Turn on the automatic snapshots.

B. Turn off Multi-AZ for the DB instance. Rename the database with the rdsadmin.dbo.rds_modify_db_name stored procedure. Turn on Multi-AZ Mirroring.

C. Delete all existing snapshots for the DB instance. Use the rdsadmin.dbo.rds_modify_db_name stored procedure.

D. Update the application with the new database connection string.

E. Update the DNS record for the DB instance.

Correct Answer: BD

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.SQLServer.CommonDBATasks.RenamingDB.html
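For illustration, the rename itself is a single call to the RDS-provided stored procedure; this sketch just builds the T-SQL statement (the database names are hypothetical):

```python
# Build the T-SQL that renames a database on RDS for SQL Server. Per the AWS
# documentation, Multi-AZ mirroring must be turned off before running it and
# turned back on afterwards -- which is why option B is part of the answer.

def rename_db_statement(old_name, new_name):
    """T-SQL invoking the rdsadmin.dbo.rds_modify_db_name stored procedure."""
    return f"EXEC rdsadmin.dbo.rds_modify_db_name N'{old_name}', N'{new_name}'"

stmt = rename_db_statement("Sales", "AcmeSales")  # hypothetical names
```

After the rename, the application's connection string must reference the new database name (option D); the DB instance endpoint itself does not change, so no DNS update is needed.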

New Q24:

A company is running an Amazon RDS for PostgreSQL DB instance and wants to migrate it to an Amazon Aurora PostgreSQL DB cluster. The current database is 1 TB in size. The migration needs to have minimal downtime.

What is the FASTEST way to accomplish this?

A. Create an Aurora PostgreSQL DB cluster. Set up replication from the source RDS for PostgreSQL DB instance using AWS DMS to the target DB cluster.

B. Use the pg_dump and pg_restore utilities to extract and restore the RDS for PostgreSQL DB instance to the Aurora PostgreSQL DB cluster.

C. Create a database snapshot of the RDS for PostgreSQL DB instance and use this snapshot to create the Aurora PostgreSQL DB cluster.

D. Migrate data from the RDS for PostgreSQL DB instance to an Aurora PostgreSQL DB cluster using an Aurora Replica. Promote the replica during the cutover.

Correct Answer: D

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Migrating.html
Migrating data from an RDS for PostgreSQL DB instance to an Aurora PostgreSQL DB cluster by using an Aurora read replica:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Migrating.html#AuroraPostgreSQL.Migrating.RDSPostgreSQL.Replica
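As a hedged sketch of option D, creating an Aurora read replica of an RDS for PostgreSQL instance and later promoting it maps to two RDS API calls; the cluster identifier below is a placeholder:

```python
# Sketch of the two API calls behind the Aurora-read-replica migration:
# rds.create_db_cluster with ReplicationSourceIdentifier starts an Aurora
# cluster that replicates from the RDS instance; once replica lag reaches
# zero, rds.promote_read_replica_db_cluster performs the cutover.

def aurora_replica_requests(source_instance_arn):
    """Build the create-replica and promote requests for the migration."""
    create = {
        "DBClusterIdentifier": "aurora-pg-replica",       # hypothetical name
        "Engine": "aurora-postgresql",
        "ReplicationSourceIdentifier": source_instance_arn,
    }
    promote = {"DBClusterIdentifier": "aurora-pg-replica"}
    return create, promote

create, promote = aurora_replica_requests(
    "arn:aws:rds:us-east-1:123456789012:db:source-pg"     # placeholder ARN
)
```

During the cutover window the application only needs to pause writes long enough for the replica to catch up, which is what makes this the fastest low-downtime option.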

New Q25:

A small startup company is looking to migrate a 4 TB on-premises MySQL database to AWS using an Amazon RDS for MySQL DB instance.

Which strategy would allow for a successful migration with the LEAST amount of downtime?

A. Deploy a new RDS for MySQL DB instance and configure it for access from the on-premises data center. Use the mysqldump utility to create an initial snapshot from the on-premises MySQL server, and copy it to an Amazon S3 bucket. Import the snapshot into the DB instance utilizing the MySQL utilities running on an Amazon EC2 instance. Immediately point the application to the DB instance.

B. Deploy a new Amazon EC2 instance, install the MySQL software on the EC2 instance, and configure networking for access from the on-premises data center. Use the mysqldump utility to create a snapshot of the on-premises MySQL server. Copy the snapshot into the EC2 instance and restore it into the EC2 MySQL instance. Use AWS DMS to migrate data into a new RDS for MySQL DB instance. Point the application to the DB instance.

C. Deploy a new Amazon EC2 instance, install the MySQL software on the EC2 instance, and configure networking for access from the on-premises data center. Use the mysqldump utility to create a snapshot of the on-premises MySQL server. Copy the snapshot into an Amazon S3 bucket and import the snapshot into a new RDS for MySQL DB instance using the MySQL utilities running on an EC2 instance. Point the application to the DB instance.

D. Deploy a new RDS for MySQL DB instance and configure it for access from the on-premises data center. Use the mysqldump utility to create an initial snapshot from the on-premises MySQL server, and copy it to an Amazon S3 bucket. Import the snapshot into the DB instance using the MySQL utilities running on an Amazon EC2 instance. Establish replication into the new DB instance using MySQL replication. Stop application access to the on-premises MySQL server and let the remaining transactions replicate. Point the application to the DB instance.

Correct Answer: D

Option D keeps the on-premises database replicating into the new DB instance after the initial dump is imported, so cutover only requires the remaining transactions to drain. The other options leave a gap between taking the dump and pointing the application at the target, during which changes would be lost or downtime incurred.

New Q26:

A company is looking to move an on-premises IBM Db2 database running AIX on an IBM POWER7 server. Due to escalating support and maintenance costs, the company is exploring the option of moving the workload to an Amazon Aurora PostgreSQL DB cluster.

What is the quickest way for the company to gather data on the migration compatibility?

A. Perform a logical dump from the Db2 database and restore it to an Aurora DB cluster. Identify the gaps and compatibility of the objects migrated by comparing row counts from source and target tables.

B. Run AWS DMS from the Db2 database to an Aurora DB cluster. Identify the gaps and compatibility of the objects migrated by comparing the row counts from source and target tables.

C. Run native PostgreSQL logical replication from the Db2 database to an Aurora DB cluster to evaluate the migration compatibility.

D. Run the AWS Schema Conversion Tool (AWS SCT) from the Db2 database to an Aurora DB cluster. Create a migration assessment report to evaluate the migration compatibility.

Correct Answer: D

Reference: https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/Schema-Conversion-Tool.pdf

AWS SCT converts database and data warehouse schemas from source to target (including procedures, views, secondary indexes, foreign keys, and constraints) and is intended mainly for heterogeneous database and data warehouse migrations. Its migration assessment report summarizes compatibility gaps before any data is moved.

New Q27:

A security team is conducting an audit for a financial company. The security team discovers that the database credentials of an Amazon RDS for MySQL DB instance are hardcoded in the source code. The source code is stored in a shared location for automatic deployment and is exposed to all users who can access the location.

A database specialist must use encryption to ensure that the credentials are not visible in the source code.

Which solution will meet these requirements?

A. Use an AWS Key Management Service (AWS KMS) key to encrypt the most recent database backup. Restore the backup as a new database to activate encryption.

B. Store the source code to access the credentials in an AWS Systems Manager Parameter Store secure string parameter that is encrypted by AWS Key Management Service (AWS KMS). Access the code with calls to the Systems Manager.

C. Store the credentials in an AWS Systems Manager Parameter Store secure string parameter that is encrypted by AWS Key Management Service (AWS KMS). Access the credentials with calls to the Systems Manager.

D. Use an AWS Key Management Service (AWS KMS) key to encrypt the DB instance at rest. Activate RDS encryption in transit by using SSL certificates.

Correct Answer: C

Only the credentials belong in the Systems Manager secure string parameter; the application then retrieves them at runtime instead of embedding them in the source code. Option B wrongly stores the source code itself in the parameter.
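As an illustrative sketch of option C, these are the request parameters for storing and retrieving the credentials as a KMS-encrypted SecureString; the parameter name, key alias, and credential values are all hypothetical:

```python
# Sketch of boto3's ssm.put_parameter / ssm.get_parameter request shapes for
# storing DB credentials as a SecureString encrypted with AWS KMS.

def parameter_store_requests():
    """Build the put and get requests for a SecureString credential parameter."""
    put = {
        "Name": "/prod/mysql/credentials",               # hypothetical path
        "Value": '{"username": "app", "password": "example-only"}',
        "Type": "SecureString",                          # encrypted at rest via KMS
        "KeyId": "alias/aws/ssm",                        # or a customer managed key
    }
    get = {
        "Name": "/prod/mysql/credentials",
        "WithDecryption": True,                          # decrypt on read
    }
    return put, get
```

The source code then calls get_parameter at startup, so only an IAM-authorized principal with access to the KMS key can read the plaintext credentials.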

New Q28:

A worldwide gaming company's development team is experimenting with using Amazon DynamoDB to store in-game events for three mobile titles. The maximum number of concurrent users for the most popular game is 500,000, while the least popular game has 10,000. The typical event is 20 KB in size, and the average user session generates one event each second. Each event is assigned a millisecond timestamp and a globally unique identifier.

The lead developer created a single DynamoDB table with the following structure for the events:

1. Partition key: game name
2. Sort key: event identifier
3. Local secondary index: player identifier
4. Event time

In a small-scale development setting, the tests were successful. When the application was deployed to production, however, new events were not being added to the table, and the logs indicated DynamoDB failures with the ItemCollectionSizeLimitExceededException error code.

Which design modification should a database professional offer to the development team?

A. Use the player identifier as the partition key. Use the event time as the sort key. Add a global secondary index with the game name as the partition key and the event time as the sort key.

B. Create two tables. Use the game name as the partition key in both tables. Use the event time as the sort key for the first table. Use the player identifier as the sort key for the second table.

C. Replace the sort key with a compound value consisting of the player identifier collated with the event time, separated by a dash. Add a local secondary index with the player identifier as the sort key.

D. Create one table for each game. Use the player identifier as the partition key. Use the event time as the sort key.

Correct Answer: D
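The error occurs because a local secondary index caps each item collection (all items sharing a partition key) at 10 GB, and with the game name as the partition key a popular game blows through that limit quickly. A hypothetical sketch of the per-game table that option D describes (table and attribute names are placeholders):

```python
# Sketch of boto3's dynamodb.create_table parameters for one per-game events
# table. Using the player identifier as the partition key spreads writes
# across many partitions, and with no LSI there is no 10 GB item-collection cap.

def events_table_request(game_name):
    """Build the create_table request for a single game's events table."""
    return {
        "TableName": f"events-{game_name}",              # hypothetical naming
        "KeySchema": [
            {"AttributeName": "player_id", "KeyType": "HASH"},   # partition key
            {"AttributeName": "event_time", "KeyType": "RANGE"}, # sort key
        ],
        "AttributeDefinitions": [
            {"AttributeName": "player_id", "AttributeType": "S"},
            {"AttributeName": "event_time", "AttributeType": "N"},
        ],
        "BillingMode": "PAY_PER_REQUEST",                # scales with load
    }

req = events_table_request("popular-game")
```

The millisecond event timestamp as the sort key also gives each player's events a natural time ordering for range queries.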


Download the latest Lead4Pass DBS-C01 dumps with PDF and VCE: https://www.leads4pass.com/aws-certified-database-specialty.html (321 Q&A)

Read DBS-C01 exam questions(Q1-Q13): https://awsexamdumps.com/dbs-c01-dumps-update-aws-certified-database-specialty-exam-materials/

AwsExamDumps is the largest community for free Amazon dumps, with the latest and most complete Amazon (AWS Certified Associate, AWS Certified Foundational, AWS Certified Professional, AWS Certified Specialty) dump collection. You can take online practice tests; the latest version of the exam dump is recommended to help you pass the exam with ease.