AWS Certification: S3 Questions

Amazon Simple Storage Service (Amazon S3)

Overview
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. This means customers of all sizes and industries can use it to store and protect any amount of data for a range of use cases, such as websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics. Amazon S3 provides easy-to-use management features so you can organize your data and configure finely-tuned access controls to meet your specific business, organizational, and compliance requirements. Amazon S3 is designed for 99.999999999% (11 9’s) of durability, and stores data for millions of applications for companies all around the world.
1. A company currently storing a set of documents in the AWS Simple Storage Service, is worried about the potential loss if these documents are ever deleted. Which of the following can be used to ensure protection from loss of the underlying documents in S3?

A. Enable Versioning for the underlying S3 bucket.

B. Copy the bucket data to an EBS Volume as a backup.

C. Create a Snapshot of the S3 bucket.

D. Enable an IAM Policy which does not allow deletion of any document from the S3 bucket.

Answer

A. Enable Versioning for the underlying S3 bucket.

Versioning is enabled at the bucket level and can be used to recover prior versions of an object, so deleted or overwritten documents remain recoverable.

For more information on S3 Versioning, please refer to the below URL:

https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html
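
As an illustration, versioning can also be enabled programmatically. A minimal boto3 (Python) sketch, with hypothetical bucket and object names:

```python
import boto3

s3 = boto3.client("s3")

# Turn on versioning for the bucket; from this point on, every overwrite
# or delete creates a new version (or delete marker) instead of destroying data.
s3.put_bucket_versioning(
    Bucket="my-documents-bucket",  # hypothetical bucket name
    VersioningConfiguration={"Status": "Enabled"},
)

# Prior versions can be listed (and restored by copying) at any time.
versions = s3.list_object_versions(Bucket="my-documents-bucket", Prefix="contract.pdf")
for v in versions.get("Versions", []):
    print(v["Key"], v["VersionId"], v["IsLatest"])
```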


2. A company has a requirement for archival of 6TB of data. The stakeholders have agreed to an 8-hour retrieval time. Which of the following can be used as the MOST cost-effective storage option?

A. AWS S3 Standard

B. AWS S3 Infrequent Access

C. AWS Glacier

D. AWS EBS Volumes

Answer

C. AWS Glacier

Amazon Glacier is the perfect solution for this. Standard Glacier retrievals complete in 3-5 hours, comfortably within the agreed 8-hour window, making this the most cost-effective option.

For more information on AWS Glacier, please visit the following URL:

https://aws.amazon.com/documentation/glacier/


3. A company has a sales team and each member of this team uploads their sales figures daily. A Solutions Architect needs a durable storage solution for these documents and also a way to prevent users from accidentally deleting important documents. What among the following choices would deliver protection against unintended user actions?

A. Store data in an EBS Volume and create snapshots once a week.

B. Store data in an S3 bucket and enable versioning.

C. Store data in two S3 buckets in different AWS regions.

D. Store data on EC2 Instance storage.

Answer

B. Store data in an S3 bucket and enable versioning.

Versioning is enabled at the bucket level and retains prior versions of objects, so documents that are accidentally deleted or overwritten can be recovered.

For more information on Amazon S3, please visit the following URL:

https://aws.amazon.com/s3/


4. A company has an application that delivers objects from S3 to users. Of late, some users spread across the globe have been complaining of slow response times. Which of the following additional steps would help in building a cost-effective solution and also help ensure that the users get an optimal response to objects from S3?

A. Use S3 Replication to replicate the objects to regions closest to the users.

B. Ensure S3 Transfer Acceleration is enabled to ensure all users get the desired response times.

C. Place an ELB in front of S3 to distribute the load across S3.

D. Place the S3 bucket behind a CloudFront distribution.

Answer

D. Place the S3 bucket behind a CloudFront distribution.

If your workload is mainly sending GET requests, in addition to the preceding guidelines, you should consider using Amazon CloudFront for performance optimization.

By integrating Amazon CloudFront with Amazon S3, you can distribute content to your users with low latency and a high data transfer rate. You will also send fewer direct requests to Amazon S3, which will reduce your costs.

For example, suppose that you have a few objects that are very popular. Amazon CloudFront fetches those objects from Amazon S3 and caches them. Amazon CloudFront can then serve future requests for the objects from its cache, reducing the number of GET requests it sends to Amazon S3.

For more information on performance considerations in S3, please visit the following URL:

https://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html

Options A and B are incorrect. S3 Cross-Region Replication and Transfer Acceleration both incur additional cost.

Option C is incorrect. ELB is used to distribute traffic onto EC2 Instances.


5. A company has an application that stores images and thumbnails for images on S3. While the thumbnail images need to be available for download immediately, the images and thumbnails themselves are not accessed that frequently.

Which is the most cost-efficient storage option to store images that meet these requirements?


A. Amazon Glacier with Expedited Retrievals.

B. Amazon S3 Standard Infrequent Access

C. Amazon EFS

D. Amazon S3 Standard

Answer

B. Amazon S3 Standard Infrequent Access

Amazon S3 Infrequent Access is ideal for storing data that is not frequently accessed but must be available immediately, and it is more cost-effective than Option D (Amazon S3 Standard). Choosing Amazon Glacier with Expedited Retrievals would defeat the whole purpose of the requirement because of its increased cost.

For more information on AWS Storage Classes, please visit the following URL:

https://aws.amazon.com/s3/storage-classes/


6. A company has an application that uses the S3 bucket as its data layer. As per the monitoring on the S3 bucket, it can be seen that the number of GET requests is 400 requests per second. The IT Operations team receives service requests about users getting HTTP 500 or 503 errors while accessing the application. What can be done to resolve these errors? Choose 2 answers from the options given below.

A. Add a CloudFront distribution in front of the bucket.

B. Add randomness to the key names.

C. Add an ELB in front of the S3 bucket.

D. Enable Versioning for the S3 bucket.

Answer

A. & B.

When your workload is sending mostly GET requests, you can add randomness to key names. In addition, you can integrate Amazon CloudFront with Amazon S3 to distribute content to your users with low latency and a high data transfer rate.

Note: S3 can now scale to high request rates. Your application can achieve at least 3,500 PUT/POST/DELETE and 5,500 GET requests per second per prefix in a bucket. However, the AWS exam questions have not yet been updated to reflect these changes, so the answer to this question is based on the original request-rate performance guidance.

For more information on S3 bucket performance, please visit the following URL:

https://docs.aws.amazon.com/AmazonS3/latest/dev/PerformanceOptimization.html


7. A company hosts data in S3. There is a requirement to control access to the S3 buckets. Which are the 2 ways in which this can be achieved?

A. Use Bucket Policies.

B. Use the Secure Token Service.

C. Use IAM user policies.

D. Use AWS Access Keys.

Answer

A. & C.

Amazon S3 offers access policy options broadly categorized as resource-based policies and user policies. Access policies you attach to your resources (buckets and objects) are referred to as resource-based policies. For example, bucket policies and access control lists (ACLs) are resource-based policies. You can also attach access policies to users in your account. These are called user policies. You may choose to use resource-based policies, user policies, or some combination of these to manage permissions to your Amazon S3 resources.

For more information on S3 access control, please refer to the below link:

https://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html


8. A company hosts data in S3. There is now a mandate that going forward, all data in the S3 bucket needs to be encrypted at rest. How can this be achieved?

A. Use AWS Access Keys to encrypt the data.

B. Use SSL Certificates to encrypt the data.

C. Enable Server-side encryption on the S3 bucket.

D. Enable MFA on the S3 bucket.

Answer

C. Enable Server-side encryption on the S3 bucket.

Server-side encryption is about data encryption at rest—that is, Amazon S3 encrypts your data at the object level as it writes it to disks in its data centers and decrypts it for you when you access it. As long as you authenticate your request and you have access permissions, there is no difference in the way you access encrypted or unencrypted objects.

For more information on S3 Server-side encryption, please refer to the below link:

https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html


9. A company is asking its developers to store application logs in an S3 bucket. These logs are only required for a temporary period of time, after which they can be deleted. Which of the following steps can be used to effectively manage this?

A. Create a cron job to detect the stale logs and delete them accordingly.

B. Use a bucket policy to manage the deletion.

C. Use an IAM Policy to manage the deletion.

D. Use S3 Lifecycle Policies to manage the deletion.

Answer

D. Use S3 Lifecycle Policies to manage the deletion.

Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as follows:

Transition actions – In which you define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation.

Expiration actions – In which you specify when the objects expire. Then, Amazon S3 deletes the expired objects on your behalf.

For more information on S3 Lifecycle Policies, please refer to the URL below.

https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html

A built-in feature exists to do this job, hence Options A, B and C are not necessary.
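
As an illustration, a lifecycle rule combining both action types can be attached with boto3 (Python); the bucket name, prefix and day counts below are hypothetical:

```python
import boto3

s3 = boto3.client("s3")

# One rule: transition logs to STANDARD_IA after 30 days, archive to Glacier
# after a year, and delete them entirely after two years.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-log-bucket",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 365, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 730},
            }
        ]
    },
)
```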


10. A company is building a service using Amazon EC2 as a worker instance that will process an uploaded audio file and generate a text file. You must store both of these files in the same durable storage until the text file is retrieved. You do not know what the storage capacity requirements are. Which storage option is both cost-efficient and scalable?

A. Multiple Amazon EBS Volume with snapshots

B. A single Amazon Glacier vault

C. A single Amazon S3 bucket

D. Multiple instance stores

Answer

C. A single Amazon S3 bucket

Amazon S3 is the best storage option for this. It is durable, highly available, and scales automatically, which matters here because the storage capacity requirements are not known in advance.

For more information on Amazon S3, please refer to the below URL:

https://aws.amazon.com/s3/


11. A company is planning on allowing their users to upload and read objects from an S3 bucket. Due to the large number of users, the read/write traffic will be very high. How should the architect maximize Amazon S3 performance?

A. Prefix each object name with a random string.

B. Use the STANDARD_IA storage class.

C. Prefix each object name with the current date.

D. Enable versioning on the S3 bucket.

Answer

A. Prefix each object name with a random string.

If the request rate is high, you can prefix object names with hash keys or random strings. The partitions used to store the objects are then better distributed, allowing better read/write performance for your objects.

For more information on how to ensure performance in S3, please visit the following URL:

https://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html


12. A company is planning on migrating their infrastructure to AWS. For the data stores, the company does not want to manage the underlying infrastructure. Which of the following would be ideal for this scenario? Choose 2 answers from the options given below.

A. AWS S3

B. AWS EBS Volumes

C. AWS DynamoDB

D. AWS EC2

Answer

A. & C.

AWS S3 is object level storage that is completely managed by AWS.

Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB lets you offload the administrative burdens of operating and scaling a distributed database, so that you don’t have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling.

Option B is incorrect since you need to manage the underlying EBS volumes yourself.

Option D is incorrect since EC2 is a compute service.

For more information on DynamoDB, please refer to the below link:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html

For more information on the Simple Storage Service, please refer to the below link:

https://aws.amazon.com/s3/


13. A company is planning on storing their files from their on-premises location onto the Simple Storage Service. After a period of 3 months, they want to archive the files, since they would be rarely used. Which of the following would be the right way to service this requirement?

A. Use an EC2 instance with EBS volumes. After a period of 3 months, keep on taking snapshots of the data.

B. Store the data on S3 and then use Lifecycle policies to transfer the data to Amazon Glacier

C. Store the data on Amazon Glacier and then use Lifecycle policies to transfer the data to Amazon S3

D. Use an EC2 instance with EBS volumes. After a period of 3 months, keep on taking copies of the volume using the Cold HDD volume type.

Answer

B. Store the data on S3 and then use Lifecycle policies to transfer the data to Amazon Glacier

To manage your objects so that they are stored cost effectively throughout their lifecycle, configure their lifecycle. A lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group of objects. There are two types of actions:

· Transition actions—Define when objects transition to another storage class. For example, you might choose to transition objects to the STANDARD_IA storage class 30 days after you created them, or archive objects to the GLACIER storage class one year after creating them.

· Expiration actions—Define when objects expire. Amazon S3 deletes expired objects on your behalf.

Options A and D are incorrect since using EBS volumes is not the right storage option for this sort of requirement.

Option C is incorrect since the files should initially be stored in S3, not Glacier.

For more information on AWS S3 Lifecycle policies, please visit the below URL:

https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html


14. A company is planning to store sensitive documents in an S3 bucket. They want to ensure that documents are encrypted at rest. They want to ensure that they manage the underlying keys which are used for encryption. Which of the following can be used for this purpose? Choose 2 answers from the options given below

A. Use S3 server-side encryption with Customer keys

B. Use S3 client-side encryption

C. Use S3 server-side encryption with AWS managed keys

D. Use S3 server-side encryption with AWS KMS keys with Key policy document of size 40kb.

Answer

A. & B.

Server-side encryption is about protecting data at rest. Using server-side encryption with customer-provided encryption keys (SSE-C) allows you to set your own encryption keys. With the encryption key you provide as part of your request, Amazon S3 manages both the encryption, as it writes to disks, and decryption, when you access your objects. Therefore, you don’t need to maintain any code to perform data encryption and decryption. The only thing you do is manage the encryption keys you provide.

Option C is incorrect since with AWS managed keys, you still do not manage the complete lifecycle of the keys.

Option D is incorrect because the maximum key policy document size is 32 KB:

https://docs.aws.amazon.com/kms/latest/developerguide/limits.html

Option B is correct since with client-side encryption you encrypt the data and manage the keys yourself. Your own keys can also be uploaded to the Key Management Service:

https://aws.amazon.com/blogs/aws/new-bring-your-own-keys-with-aws-key-management-service/

For more information on Server-side encryption with customer keys and Client-side encryption, please refer to the below URL:

https://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
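
As an illustration of SSE-C (Option A), here is a minimal boto3 (Python) sketch; the bucket and key names are hypothetical, and the 256-bit key would in practice come from your own key-management process:

```python
import os

import boto3

s3 = boto3.client("s3")

# A 256-bit key that the customer generates and manages; AWS never stores it.
customer_key = os.urandom(32)

s3.put_object(
    Bucket="sensitive-docs",       # hypothetical bucket name
    Key="reports/q1.pdf",
    Body=b"...document bytes...",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,   # boto3 base64-encodes and hashes the key for you
)

# The same key must be supplied again to read the object back.
obj = s3.get_object(
    Bucket="sensitive-docs",
    Key="reports/q1.pdf",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)
```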


15. A company requires that an entry be inserted into a DynamoDB table every time a file is uploaded to an S3 bucket. Which of the following would be an easy and effective way to set this up?

A. Create an AWS Lambda function to insert the required entry for each uploaded file.

B. Use AWS CloudWatch to probe for any S3 event.

C. Add an event with notification sent to Lambda.

D. Add the CloudWatch event to the DynamoDB table streams section.

Answer

C. Add an event with notification sent to Lambda.

Amazon S3 can publish event notifications whenever an object is created in a bucket. By configuring an event notification that invokes the Lambda function (created as in Option A), an entry can be inserted into the DynamoDB table for each uploaded file.

For more information on S3 event notifications, please visit the following URL:

https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html


16. A company intends to run a service on AWS to provide offsite backups for images on laptops and phones.

The solution must support millions of customers with thousands of images per customer. Though the images will be retrieved infrequently, they must be available for retrieval immediately.

Which is the MOST cost efficient storage option that meets these requirements?


A. Amazon Glacier with Expedited retrievals

B. Amazon S3 Standard Infrequent Access

C. Amazon EFS

D. Amazon S3 Standard

Answer

B. Amazon S3 Standard Infrequent Access

Amazon S3 Infrequent Access is perfect if you want to store data that need not be frequently accessed. It is much more cost-effective than Amazon S3 Standard (Option D). And if you choose Amazon Glacier with Expedited retrievals, you defeat the whole purpose of the requirement, because you would have an increased cost with this option.

For more information on AWS Storage classes, please visit the following URL:

https://aws.amazon.com/s3/storage-classes/


17. A company needs a solution to store and archive corporate documents and has determined that Amazon Glacier is the right solution. It is required that data is delivered within 5 minutes of a retrieval request.

Which feature in Amazon Glacier can help meet this requirement?


A. Defining a Vault Lock

B. Using Expedited retrieval

C. Using Bulk retrieval

D. Using Standard retrieval

Answer

B. Using Expedited retrieval

The AWS Documentation mentions the following:

Expedited retrievals allow you to quickly access your data when occasional urgent requests for a subset of archives are required. Data is typically made available within 1-5 minutes, which meets the 5-minute requirement.

For more information on AWS Glacier Retrieval, please visit the following URL:

https://docs.aws.amazon.com/amazonglacier/latest/dev/downloading-an-archive-two-steps.html
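
As an illustration, if the archives were written through S3's GLACIER storage class, an Expedited restore can be requested with boto3 (Python) as below; the bucket and key names are hypothetical (native Glacier vaults expose a similar Tier parameter through the glacier client's initiate_job call):

```python
import boto3

s3 = boto3.client("s3")

# Request an Expedited restore of an archived object; a temporary copy
# stays readable in S3 for the number of days requested.
s3.restore_object(
    Bucket="corporate-archive",               # hypothetical bucket name
    Key="records/2017/board-minutes.zip",     # hypothetical key
    RestoreRequest={
        "Days": 2,
        "GlacierJobParameters": {"Tier": "Expedited"},  # vs. "Standard" / "Bulk"
    },
)
```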


18. A company needs to develop an application that will do the following:

– Upload images posted by users
– Store the images in a durable location
– Store the metadata about each image in another durable data store

Which of the following should be considered in the design phase?


A. Store the Images in Amazon Glacier and store the metadata in DynamoDB

B. Store the Images in Amazon S3 and store the metadata in Amazon Glacier

C. Store the Images in DynamoDB and store the metadata in Amazon S3

D. Store the Images in Amazon S3 and store the metadata in DynamoDB

Answer

D. Store the Images in Amazon S3 and store the metadata in DynamoDB

Amazon S3 is used for object level storage and should be used to store files such as Images, videos etc. The metadata can be in JSON format which can then be stored in DynamoDB tables.

Options A and B are incorrect since Amazon Glacier is used for archive storage.

Option C is incorrect since DynamoDB is not a suitable store for images.

For more information on Amazon S3 and DynamoDB, please refer to the below URLs:

https://docs.aws.amazon.com/AmazonS3/latest/dev/Welcome.html

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html
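
A minimal boto3 (Python) sketch of this design; the bucket name, table name and attribute names are hypothetical:

```python
import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("ImageMetadata")  # hypothetical table keyed on 'ImageId'

def save_image(image_id, image_bytes, uploaded_by):
    # Durable object storage for the image itself.
    s3.put_object(Bucket="user-images", Key=f"images/{image_id}.jpg",
                  Body=image_bytes, ContentType="image/jpeg")
    # Durable, queryable storage for the metadata.
    table.put_item(Item={
        "ImageId": image_id,
        "S3Key": f"images/{image_id}.jpg",
        "UploadedBy": uploaded_by,
        "SizeBytes": len(image_bytes),
    })
```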


19. A company needs to store images that are uploaded by users via a mobile application. There is also a need to ensure that a security measure is in place to avoid data loss.

What step should be taken for protection against unintended user actions?


A. Store data in an EBS volume and create snapshots once a week.

B. Store data in an S3 bucket and enable versioning.

C. Store data in two S3 buckets in different AWS regions.

D. Store data on EC2 instance storage.

Answer

B. Store data in an S3 bucket and enable versioning.

Amazon S3 has a versioning option. Versioning is enabled at the bucket level and can be used to recover prior versions of an object.

For more information on AWS S3 versioning, please visit the following URL:

https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html

Option A is invalid as weekly snapshots do not offer protection against accidental deletion of files between snapshots.

Option C is invalid as duplicating the data across Regions adds cost and does not by itself protect against unintended user actions.

Option D is invalid as instance storage is ephemeral; data is lost when the instance fails or is terminated.


20. A company plans to have their application hosted in AWS. This application has users uploading files and then using a public URL for downloading them at a later stage. Which of the following designs would help fulfill this requirement?

A. Have EBS Volumes hosted on EC2 Instances to store the files.

B. Use Amazon S3 to host the files.

C. Use Amazon Glacier to host the files since this would be the cheapest storage option.

D. Use EBS Snapshots attached to EC2 Instances to store the files.

Answer

B. Use Amazon S3 to host the files.

If you need storage for the Internet, AWS Simple Storage Service is the best option. Each uploaded file gets a URL that can be made public, which users can then use to download the file at a later point in time.

For more information on Amazon S3, please refer to the below URL:

https://aws.amazon.com/s3/

Options A and D are incorrect because EBS Volumes and Snapshots do not have public URLs.

Option C is incorrect because Glacier is mainly used for data archiving purposes.


21. A company stores its log data in an S3 bucket. There is a current need to have search capabilities available for the data in S3. How can this be achieved in an efficient and ongoing manner? Choose 2 answers from the options below. Each answer forms a part of the solution.

A. Use an AWS Lambda function which gets triggered whenever data is added to the S3 bucket.

B. Create a Lifecycle Policy for the S3 bucket.

C. Load the data into Amazon Elasticsearch.

D. Load the data into Glacier.

Answer

A & C

Amazon Elasticsearch Service provides full-text search capabilities and can be used to search the log files stored in the S3 bucket.

AWS Documentation mentions the following with regard to the integration of Elasticsearch with S3:

You can integrate your Amazon ES domain with Amazon S3 and AWS Lambda. Any new data sent to an S3 bucket triggers an event notification to Lambda, which then runs your custom Java or Node.js application code. After your application processes the data, it streams the data to your domain.

For more information on integration between Elasticsearch and S3, please visit the following URL:

https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-aws-integrations.html
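
As an illustration, the Lambda side of this pipeline might look like the boto3 (Python) sketch below; index_into_domain is a hypothetical placeholder for the code that ships the data to the Amazon ES domain:

```python
from urllib.parse import unquote_plus

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Each record describes one object that was just added to the bucket.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = unquote_plus(record["s3"]["object"]["key"])  # keys arrive URL-encoded
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        index_into_domain(body)

def index_into_domain(data):
    # Hypothetical placeholder -- in practice you would sign and POST the
    # documents to the domain's _bulk endpoint.
    pass
```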


22. A company wants to store their documents in AWS. Initially, these documents will be used frequently, and after a duration of 6 months, they will need to be archived. How would you architect this requirement?

A. Store the files in Amazon EBS and create a Lifecycle Policy to remove the files after 6 months.

B. Store the files in Amazon S3 and create a Lifecycle Policy to archive the files after 6 months.

C. Store the files in Amazon Glacier and create a Lifecycle Policy to remove the files after 6 months.

D. Store the files in Amazon EFS and create a Lifecycle Policy to remove the files after 6 months.

Answer

B. Store the files in Amazon S3 and create a Lifecycle Policy to archive the files after 6 months.

Amazon S3 Lifecycle Policies can automatically transition objects to archive storage such as Amazon Glacier after a defined period, which is exactly what is required here. EBS, Glacier and EFS (Options A, C and D) are not suitable places to keep files that are initially used frequently, and Glacier is the archive destination, not the initial store.

For more information on S3 Lifecycle Policies, please visit the following URL:

https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html


23. A million images are required to be uploaded to S3. What option ensures optimal performance in this case?

A. Use a sequential ID for the prefix.

B. Use a hexadecimal hash for the prefix.

C. Use a hexadecimal hash for the suffix.

D. Use a sequential ID for the suffix.

Answer

B. Use a hexadecimal hash for the prefix.

For more information on S3 performance considerations, please visit the following URL:

https://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html

Note: Amazon S3 maintains an index of object key names in each AWS Region. Object keys are stored in UTF-8 binary ordering across multiple partitions in the index. The key name determines which partition the key is stored in. Using a sequential prefix, such as a timestamp or an alphabetical sequence, increases the likelihood that Amazon S3 will target a specific partition for a large number of your keys, which can overwhelm the I/O capacity of the partition.

If your workload is a mix of request types, introduce some randomness to key names by adding a hash string as a prefix to the key name. By introducing randomness to your key names, the I/O load is distributed across multiple index partitions. For example, you can compute an MD5 hash of the character sequence that you plan to assign as the key, and add three or four characters from the hash as a prefix to the key name.
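A small Python sketch of the hashing scheme described above (the file name and prefix length are arbitrary examples):

```python
import hashlib

def hashed_key(original_name: str, chars: int = 4) -> str:
    # Take the first few characters of an MD5 hash of the name and prepend
    # them, so keys spread across many index partitions instead of one.
    prefix = hashlib.md5(original_name.encode()).hexdigest()[:chars]
    return f"{prefix}-{original_name}"

print(hashed_key("2018-09-19-photo-0001.jpg"))  # e.g. 'f1d3-2018-09-19-photo-0001.jpg'
```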


24. A Solutions Architect designing a solution to store and archive corporate documents has determined Amazon Glacier as the right choice of solution.

An important requirement is that the data must be delivered within 10 minutes of a retrieval request.

Which feature in Amazon Glacier can help meet this requirement?


A. Vault Lock

B. Expedited retrieval

C. Bulk retrieval

D. Standard retrieval

Answer

B. Expedited retrieval

Expedited retrievals let you access data in 1-5 minutes for a flat rate of $0.03 per GB retrieved. Expedited retrievals allow you to quickly access your data when occasional urgent requests for a subset of archives are required.

For more information on AWS Glacier Retrieval, please visit the following URL:

https://docs.aws.amazon.com/amazonglacier/latest/dev/downloading-an-archive-two-steps.html

The other two options are Standard retrievals (3-5 hours retrieval time) and Bulk retrievals, the cheapest option (5-12 hours retrieval time).


25. A Solutions Architect is designing a highly scalable system to track records. These records must remain available for immediate download for up to three months and then must be deleted.

What is the most appropriate decision for this use case?


A. Store the files in Amazon EBS and create a Lifecycle Policy to remove files after 3 months

B. Store the files in Amazon S3 and create a Lifecycle Policy to remove files after 3 months

C. Store the files in Amazon Glacier and create a Lifecycle Policy to remove files after 3 months

D. Store the files in Amazon EFS and create a Lifecycle Policy to remove files after 3 months

Answer

B. Store the files in Amazon S3 and create a Lifecycle Policy to remove files after 3 months.

Option A is invalid, since the records need to be stored in a highly scalable system.

Option C is invalid, since the records must be available for immediate download.

Option D is invalid, because it does not have the concept of a Lifecycle Policy.

AWS Documentation mentions the following on Lifecycle Policies:

Lifecycle configuration enables you to specify the Lifecycle Management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as follows:

Transition actions – In which you define when the objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation.

Expiration actions – In which you specify when the objects expire. Then Amazon S3 deletes the expired objects on your behalf.

For more information on AWS S3 Lifecycle Policies, please visit the following URL:

https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html


26. A Solutions Architect is designing a solution to store and archive corporate documents and has determined that Amazon Glacier is the right solution. Data has to be retrieved within 3-5 hours, as directed by management.

Which feature in Amazon Glacier can help meet this requirement and ensure cost-effectiveness?


A. Vault Lock

B. Expedited retrieval

C. Bulk retrieval

D. Standard retrieval

Answer

D. Standard retrieval

Standard retrievals are a low-cost way to access your data within just a few hours. For example, you can use Standard retrievals to restore backup data, retrieve archived media content for same-day editing or distribution, or pull and analyze logs to drive business decisions within hours.

For more information on Amazon Glacier retrievals, please visit the following URL:

https://aws.amazon.com/glacier/faqs/#dataretrievals


27. A Solutions Architect is developing a document sharing application and needs a storage layer. The storage should provide automatic support for versioning so that users can easily roll back to a previous version or recover a deleted object.

Which AWS service will meet the above requirements?


A. Amazon S3

B. Amazon EBS

C. Amazon EFS

D. Amazon Storage Gateway VTL

Answer

A. Amazon S3

Amazon S3 natively supports object versioning, so previous versions of a document can easily be retrieved or restored.

Option B is incorrect. EBS provides persistent block storage volumes for use with EC2.

Option C is incorrect. EFS is an elastic and scalable file storage service.

Option D is incorrect. AWS Storage Gateway VTL helps to integrate your on-premises IT infrastructure with AWS storage.


28. A storage solution is required in AWS to store videos uploaded by the user. After a period of a month, these videos can be deleted. How should this be implemented in a cost-effective manner?

A. Use EBS Volumes to store the videos. Create a script to delete the videos after a month.

B. Use a transition rule in S3 to move the files to Glacier and an expiration rule to delete them after 30 days.

C. Store the videos in Amazon Glacier and then use Lifecycle Policies.

D. Store the videos using Stored Volumes. Create a script to delete the videos after a month.

Answer

B. Use a transition rule in S3 to move the files to Glacier and an expiration rule to delete them after 30 days.

Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as follows:

Transition actions – In which you define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation. Expiration actions – In which you specify when the objects expire. Then Amazon S3 deletes the expired objects on your behalf.

For more information on AWS S3 Lifecycle policies, please visit the following URL:

https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html

Note: Deleting data from Glacier within 90 days incurs an early-deletion charge. However, the question asks for a cost-effective implementation, and even with this charge, transitioning the videos to Glacier and expiring them after 30 days costs less than keeping them in S3.

Further, among the given options we only need to choose the most cost-effective one; it does not have to be entirely free of charges.


29. An application allows a manufacturing site to upload files. Each uploaded 3 GB file is processed to extract metadata, and this process takes a few seconds per file. The frequency at which the uploads happen is unpredictable. For instance, there may be no updates for hours, followed by several files being uploaded concurrently.

What architecture addresses this workload in the most cost efficient manner?


A. Use a Kinesis Data Delivery Stream to store the file. Use Lambda for processing.

B. Use an SQS queue to store the file, to be accessed by a fleet of EC2 Instances.

C. Store the file in an EBS volume, which can then be accessed by another EC2 Instance for processing.

D. Store the file in an S3 bucket. Use Amazon S3 event notification to invoke a Lambda function for file processing.

Answer

D. Store the file in an S3 bucket. Use Amazon S3 event notification to invoke a Lambda function for file processing.

You can first create a Lambda function with the code to process the file. You can then use an Event Notification from the S3 bucket to invoke the Lambda function whenever a file is uploaded.

For more information on Amazon S3 event notifications, please visit the following URL:

https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html

Option A is incorrect. Kinesis is used to collect, process and analyze real-time data, and the frequency of uploads here is quite unpredictable.

Option B is incorrect. By default, SQS uses short polling, so with messages arriving unpredictably, a fleet of polling EC2 Instances would often receive empty responses while still driving up cost.
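
As an illustration, the event notification itself can be configured with boto3 (Python); the bucket name and Lambda function ARN below are hypothetical, and the function must separately grant s3.amazonaws.com permission to invoke it:

```python
import boto3

s3 = boto3.client("s3")

# Invoke the processing function whenever a new object lands in the bucket.
s3.put_bucket_notification_configuration(
    Bucket="manufacturing-uploads",  # hypothetical bucket name
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                # Hypothetical function ARN.
                "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:extract-metadata",
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)
```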


30. An application allows users to upload images to an S3 bucket. Initially these images will be downloaded quite frequently, but after some time, the images might only be accessed once a week and the retrieval time should be as minimal as possible.

What could be done to ensure a COST effective solution? Choose 2 answers from the options below. Each answer forms part of the solution.


A. Store the objects in Amazon Glacier.

B. Store the objects in S3 – Standard storage.

C. Create a Lifecycle Policy to transfer the objects to S3 – Standard storage after a certain duration of time.

D. Create a Lifecycle Policy to transfer the objects to S3 – Infrequent Access storage after a certain duration of time.

Answer

B. & D.

Store the images initially in Standard storage since they are accessed frequently. Define Lifecycle Policies to move the images to Infrequent Access storage to save on costs.

Amazon S3 Infrequent Access is ideal for data that is not frequently accessed but must be retrieved quickly when needed, and it is much more cost-effective than keeping the objects in S3 Standard indefinitely. Storing the objects in Amazon Glacier (Option A) would defeat the whole purpose of the requirement, since Glacier retrieval times are not minimal and faster retrievals increase costs.

For more information on AWS Storage classes, please visit the following URL:

https://aws.amazon.com/s3/storage-classes/


31. An application hosted in AWS allows users to upload videos to an S3 bucket. A user is required to be given access to upload some videos for a week based on the profile. How can this be accomplished in the best way possible?

A. Create an IAM bucket policy to provide access for a week’s duration.

B. Create a pre-signed URL for each profile which will last for a week’s duration.

C. Create an S3 bucket policy to provide access for a week’s duration.

D. Create an IAM role to provide access for a week’s duration.

Answer

B. Create a pre-signed URL for each profile which will last for a week’s duration.

Pre-signed URLs are the perfect solution when you want to give users temporary access to S3 buckets. Whenever a new profile is created, you can generate a pre-signed URL that lasts for a week and allows the user to upload the required objects.

For more information on pre-signed URLs, please visit the following URL:

https://docs.aws.amazon.com/AmazonS3/latest/dev/PresignedUrlUploadObject.html
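
A minimal boto3 (Python) sketch of generating such a URL; the bucket and key names are hypothetical, and one week (604,800 seconds) is the maximum expiry for a Signature Version 4 pre-signed URL:

```python
import boto3

s3 = boto3.client("s3")

# A URL that permits exactly one operation (PUT of this key) and
# expires after 7 days.
url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "user-videos", "Key": "profiles/user123/intro.mp4"},
    ExpiresIn=7 * 24 * 3600,
)
print(url)  # hand this to the user; they upload with a plain HTTP PUT
```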


32. An application reads and writes objects to an S3 bucket. When the application is fully deployed, the read/write traffic is very high.

How should the architect maximize the Amazon S3 performance?


A. Use as many S3 prefixes as you need in parallel to achieve the required throughput.

B. Use the STANDARD_IA storage class.

C. Prefix each object name with a hex hash key along with the current date.

D. Enable versioning on the S3 bucket.

Answer

C. Prefix each object name with a hex hash key along with the current date.

Based on the new S3 performance announcement, "S3 request rate performance increase removes any previous guidance to randomize object prefixes to achieve faster performance." However, the AWS exam questions and answers have not yet been updated, so Option C is the correct answer as per the AWS exam.


33. An organization has a requirement to store 10TB worth of scanned files. They are required to have a search application in place to search through the scanned files.

Which of the below mentioned options is ideal for implementing the search facility?


A. Use S3 with reduced redundancy to store and serve the scanned files. Install a commercial search application on EC2 Instances and configure it with Auto Scaling and an Elastic Load Balancer.

B. Model the environment using CloudFormation. Use an EC2 instance running an Apache web server and an open-source search application, stripe multiple standard EBS volumes together to store the scanned files with a search index.

C. Use S3 with standard redundancy to store and serve the scanned files. Use CloudSearch for query processing, and use Elastic Beanstalk to host the website across multiple Availability Zones.

D. Use a single-AZ RDS MySQL instance to store the search index for the scanned files and use an EC2 instance with a custom application to search based on the index.

Answer

C. Use S3 with standard redundancy to store and serve the scanned files. Use CloudSearch for query processing, and use Elastic Beanstalk to host the website across multiple Availability Zones.

With Amazon CloudSearch, you can quickly add rich search capabilities to your website or application. You don’t need to become a search expert or worry about hardware provisioning, setup, and maintenance. With a few clicks in the AWS Management Console, you can create a search domain and upload the data that you want to make searchable, and Amazon CloudSearch will automatically provision the required resources and deploy a highly tuned search index.

You can easily change your search parameters, fine tune search relevance, and apply new settings at any time. As your volume of data and traffic fluctuates, Amazon CloudSearch seamlessly scales to meet your needs.

For more information on AWS CloudSearch, please visit the below link:

https://aws.amazon.com/cloudsearch/


34. As a solutions architect, it is your job to design for high availability and fault tolerance. Company-A is utilizing Amazon S3 to store large amounts of file data. You need to ensure that the files are available in case of a disaster. How can you achieve this?

A. Copy the S3 bucket to an EBS optimized backed EC2 instance

B. Amazon S3 is highly available and fault tolerant by design and requires no additional configuration

C. Enable Cross-Region Replication for the bucket

D. Enable versioning for the bucket

Answer

C. Enable Cross-Region Replication for the bucket

Cross-region replication is a bucket-level configuration that enables automatic, asynchronous copying of objects across buckets in different AWS Regions. We refer to these buckets as source bucket and destination bucket. These buckets can be owned by different AWS accounts.

Option A is invalid because this is not the right way to take backups of an S3 bucket.

Option B is invalid because S3 keeps objects available across multiple Availability Zones within a Region, but not across Regions; a regional disaster could still affect the data.

Option D is invalid because versioning only helps recover from accidental deletion of objects, not from a disaster.

For more information on cross-region replication, please visit the URL below:

https://docs.aws.amazon.com/AmazonS3/latest/dev/crr.html
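
As an illustration, a replication configuration can be attached with boto3 (Python); the bucket names and IAM role ARN below are hypothetical, and versioning must already be enabled on both buckets:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="source-bucket-us-east-1",  # hypothetical source bucket
    ReplicationConfiguration={
        # Hypothetical role that S3 assumes to copy objects on your behalf.
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [
            {
                "ID": "replicate-everything",
                "Prefix": "",          # empty prefix = all objects
                "Status": "Enabled",
                "Destination": {"Bucket": "arn:aws:s3:::backup-bucket-eu-west-1"},
            }
        ],
    },
)
```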


35. Development teams in your organization use S3 buckets to store log files for various applications hosted in AWS development environments. The developers intend to keep the logs for a month for troubleshooting purposes, and subsequently purge the logs.

What feature will enable this requirement?


A. Adding a bucket policy on the S3 bucket.

B. Configuring lifecycle configuration rules on the S3 bucket.

C. Creating an IAM policy for the S3 bucket.

D. Enabling CORS on the S3 bucket.

Answer

B. Configuring lifecycle configuration rules on the S3 bucket.

AWS Documentation mentions the following on Lifecycle policies:

Lifecycle configuration enables you to specify the Lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as follows:

Transition actions – In which you define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation.

Expiration actions – In which you specify when the objects expire. Then, Amazon S3 deletes the expired objects on your behalf.

For more information on AWS S3 Lifecycle policies, please visit the following URL:

https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html

Option D is incorrect. CORS (cross-origin resource sharing) lets web pages in one domain access resources in another domain; it has nothing to do with expiring objects.


36. There is a requirement to store documents in AWS and the documents need to be version controlled. Which of the following storage options would be an ideal choice for this scenario?

A. Amazon S3

B. Amazon EBS

C. Amazon EFS

D. Amazon Glacier

Answer

A. Amazon S3

Amazon S3 natively supports versioning of objects, making it the ideal choice for storing version-controlled documents.

For more information on Amazon S3, please visit the following URL:

https://aws.amazon.com/s3/


37. There is a requirement to upload a million files to S3. Which of the following can be used to ensure optimal performance?

A. Use a date for the prefix.

B. Use a hexadecimal hash for the prefix.

C. Use a date for the suffix.

D. Use a sequential ID for the suffix.

Answer

B. Use a hexadecimal hash for the prefix.

A hash prefix distributes keys across multiple index partitions, whereas sequential IDs or dates concentrate a large number of keys in a single partition.

38. Users within a company need a place to store their documents. Each user must have his/her own location for placing the set of documents and should not be able to view another person’s documents. Also, users should be able to retrieve their documents easily. Which AWS service would be ideal for this requirement?

A. AWS Simple Storage Service

B. AWS Glacier

C. AWS Redshift

D. AWS RDS MySQL

Answer

A. AWS Simple Storage Service

The Simple Storage Service is the perfect place to store the documents. You can define buckets for each user and have policies which restrict access so that each user can only access his/her own files.

For more information on the S3 service, please visit the following URL:

https://aws.amazon.com/s3/


39. Videos are uploaded to an S3 bucket, and you need to provide users with access to view them. What is the best way to do so, while maintaining a good user experience for all users regardless of the region in which they are located?

A. Enable Cross-Region Replication for the S3 bucket to all regions.

B. Use CloudFront with the S3 bucket as the source.

C. Use API Gateway with S3 bucket as the source.

D. Use AWS Lambda functions to deliver the content to users.

Answer

B. Use CloudFront with the S3 bucket as the source.

Amazon CloudFront is a web service that speeds up distribution of static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called Edge locations. When a user requests content that you’re serving with CloudFront, the user is routed to the Edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance. If the content is already in the Edge location with the lowest latency, CloudFront delivers it immediately. If the content is not in that Edge location, CloudFront retrieves it from an Amazon S3 bucket or an HTTP server (for example, a web server) that you have identified as the source for the definitive version of your content.

For more information on Amazon CloudFront, please visit the following URL:

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html
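
As an illustration, a bare-bones distribution with an S3 origin can be created with boto3 (Python). This is only a sketch; the bucket domain name is hypothetical, and production setups usually add an origin access identity, aliases, logging and TLS settings:

```python
import time

import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),  # any unique string
        "Comment": "Serve the video bucket through edge locations",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "s3-videos",
                "DomainName": "my-video-bucket.s3.amazonaws.com",  # hypothetical
                "S3OriginConfig": {"OriginAccessIdentity": ""},
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "s3-videos",
            "ViewerProtocolPolicy": "redirect-to-https",
            "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
            "TrustedSigners": {"Enabled": False, "Quantity": 0},
            "MinTTL": 0,
        },
    }
)
```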


40. You are designing a web application that stores static assets in an Amazon Simple Storage Service (S3) bucket. You expect this bucket to receive over 150 PUT requests per second. What should you do to ensure optimal performance?

A. Use Multipart upload.

B. Add a random prefix to the key names.

C. Amazon S3 will automatically manage performance at this scale.

D. Use a predictable naming scheme, such as sequential numbers or date time sequences in the key names.

Answer

B. Add a random prefix to the key names.

Based on the new S3 performance announcement, Amazon S3 now provides increased request rate performance. However, AWS has not yet updated the exam questions, so as per the exam, Option B is the correct answer.

https://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html

One way to introduce randomness to key names is to add a hash string as a prefix to the key name. For example, you can compute an MD5 hash of the character sequence that you plan to assign as the key name. From the hash, pick a specific number of characters, and add them as the prefix to the key name.


41. You are designing the following application in AWS. Users will use the application to upload videos and images. The files will then be picked up by a worker process for further processing. Which of the below services should be used in the design of the application? Choose 2 answers from the options given below.

A. AWS Simple storage service for storing the videos and images

B. AWS Glacier for storing the videos and images

C. AWS SNS for distributed processing of messages by the worker process

D. AWS SQS for distributed processing of messages by the worker process

Answer

A. & D.

Amazon Simple Storage Service is storage for the Internet. It is designed to make web-scale computing easier for developers.

Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS eliminates the complexity and overhead associated with managing and operating message oriented middleware, and empowers developers to focus on differentiating work. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available.

Option B is incorrect since Glacier is used for archive storage.

Option C is incorrect since SNS is a notification service.

For more information on S3, please visit the below URL:

https://docs.aws.amazon.com/AmazonS3/latest/dev/Welcome.html

For more information on SQS, please visit the below URL:

https://aws.amazon.com/sqs/
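
A minimal boto3 (Python) sketch of this design; the bucket name, queue URL and process() body are hypothetical placeholders:

```python
import json

import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/media-jobs"  # hypothetical

def upload(file_bytes, key):
    # Durable storage for the media itself...
    s3.put_object(Bucket="media-uploads", Key=key, Body=file_bytes)
    # ...and a small pointer message for the workers (SQS messages max out at 256 KB).
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps({"key": key}))

def process(key):
    pass  # placeholder for the real worker logic

def worker_loop():
    while True:
        resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1,
                                   WaitTimeSeconds=20)  # long polling
        for msg in resp.get("Messages", []):
            process(json.loads(msg["Body"])["key"])
            # Delete only after successful processing; otherwise the message
            # becomes visible again and another worker retries it.
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```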


42. You have an application hosted on AWS that writes images to an S3 bucket. The concurrent number of users on the application is expected to reach around 10,000 with approximately 500 reads and writes expected per second. How should the architect maximize Amazon S3 performance?

A. Prefix each object name with a random string.

B. Use the STANDARD_IA storage class.

C. Prefix each object name with the current date.

D. Enable versioning on the S3 bucket.

Answer

A. Prefix each object name with a random string.

If the request rate is high, you can use hash keys or random strings to prefix the object name. In such a case, the partitions used to store the objects will be better distributed and hence allow for better read/write performance for your objects.

For more information on how to ensure performance in S3, please visit the following URL:

https://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html

The STANDARD_IA storage class is for infrequent data access, not performance. Option C is not a good solution because a date prefix is sequential and concentrates keys in one partition. Versioning (Option D) does not make any difference to performance in this case.


43. You have an S3 bucket hosted in AWS which is used to store promotional videos you upload. You need to provide access to users for a limited duration of time. How can this be achieved?

A. Use versioning and enable a timestamp for each version.

B. Use Pre-Signed URLs.

C. Use IAM Roles with a timestamp to limit the access.

D. Use IAM policies with a timestamp to limit the access.

Answer

B. Use Pre-Signed URLs.

All objects by default are private. Only the object owner has permission to access these objects. However, the object owner can optionally share objects with others by creating a pre-signed URL, using their own security credentials, to grant time-limited permission to download the objects.

For more information on pre-signed URLs, please visit the URL below.

https://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html


44. You have been given a business requirement to retain log files for your application for 10 years. You need to regularly retrieve the most recent logs for troubleshooting. Your logging system must be cost-effective, given the large volume of logs. What technique should you use to meet these requirements?

A. Store your log in Amazon CloudWatch Logs.

B. Store your logs in Amazon Glacier.

C. Store your logs in Amazon S3, and use Lifecycle Policies to archive to Amazon Glacier.

D. Store your logs on Amazon EBS, and use Amazon EBS Snapshots to archive them.

Answer

C. Store your logs in Amazon S3, and use Lifecycle Policies to archive to Amazon Glacier.

Option A is invalid, because it is not a cost-effective option.

Option B is invalid, because it will not serve the purpose of regularly retrieving the most recent logs for troubleshooting. You will need to pay more to retrieve the logs faster from this storage option.

Option D is invalid because it is neither an ideal nor cost-effective option.

For more information on Lifecycle management please refer to the below link:

http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html


45. You need to ensure that data stored in S3 is encrypted but do not want to manage the encryption keys. Which of the following encryption mechanisms can be used in this case?

A. SSE-S3

B. SSE-C

C. SSE-KMS

D. SSE-SSL

Answer

A. SSE-S3

SSE-S3 requires that Amazon S3 manage the data and master encryption keys.

SSE-C requires that you manage the encryption keys.

SSE-KMS requires that AWS manage the data key while you manage the master key in AWS KMS.

For more information on using the Key Management Service for S3, please visit the below URL:

https://docs.aws.amazon.com/kms/latest/developerguide/services-s3.html
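
As an illustration, SSE-S3 can be applied either as a bucket default or per upload with boto3 (Python); the bucket and key names are hypothetical:

```python
import boto3

s3 = boto3.client("s3")

# With SSE-S3, every new object is encrypted with keys that S3 itself
# generates, rotates, and protects -- nothing for the customer to manage.
s3.put_bucket_encryption(
    Bucket="my-data-bucket",  # hypothetical bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)

# Individual uploads can also request it explicitly:
s3.put_object(Bucket="my-data-bucket", Key="report.csv",
              Body=b"col1,col2\n", ServerSideEncryption="AES256")
```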


46. You need to ensure that objects in an S3 bucket are available in another region. This is because of the criticality of the data that is hosted in the S3 bucket. How can you achieve this in the easiest way possible?

A. Enable Cross-Region Replication for the bucket.

B. Write a script to copy the objects to another bucket in the destination region.

C. Create an S3 snapshot in the destination region.

D. Enable versioning which will copy the objects to the destination region.

Answer

A. Enable Cross-Region Replication for the bucket.

Cross-Region Replication is a bucket-level configuration that enables automatic, asynchronous copying of objects across buckets in different AWS Regions.

For more information on Cross-Region Replication in the Simple Storage Service, please visit the below URL:

https://docs.aws.amazon.com/AmazonS3/latest/dev/crr.html


47. You need to have the ability to archive documents in AWS. This needs to be a cost-effective solution. Which of the following would you use to meet this requirement?

A. Amazon Glacier

B. Amazon S3 Standard Infrequent Access

C. Amazon EFS

D. Amazon S3 Standard

Answer

A. Amazon Glacier

Amazon Glacier is an extremely low-cost storage service that provides durable storage with security features for data archiving and backup. With Amazon Glacier, customers can store their data cost effectively for months, years, or even decades. Amazon Glacier enables customers to offload the administrative burdens of operating and scaling storage to AWS, so they don’t have to worry about capacity planning, hardware provisioning, data replication, hardware failure detection and recovery, or time-consuming hardware migrations.

For more information on Amazon Glacier, please visit the following URL:

https://docs.aws.amazon.com/amazonglacier/latest/dev/introduction.html


48. You work for a company that stores records for a minimum of 10 years. Most of these records will never be accessed but must be made available upon request (within a few hours). What is the most cost-effective storage option in this scenario? Choose the correct answer from the options below.

A. Simple Storage Service

B. EBS Volumes

C. Glacier

D. AWS Import/Export

Answer

C. Glacier

Amazon Glacier is a secure, durable, and extremely low-cost cloud storage service for data archiving and long-term backup. Customers can reliably store large or small amounts of data for as little as $0.004 per gigabyte per month, a significant savings compared to on-premises solutions. To keep costs low yet suitable for varying retrieval needs, Amazon Glacier provides three options for access to archives, from a few minutes to several hours.

For more information on Amazon Glacier, please refer to the link below.

https://aws.amazon.com/glacier/


49. Your company currently has an S3 bucket in AWS. The objects in S3 are accessed quite frequently. Which of the following is an implementation step that can be considered to reduce the cost of accessing contents from the S3 bucket?

A. Place the S3 bucket behind a CloudFront distribution.

B. Enable Versioning on the S3 bucket.

C. Enable Encryption on the S3 bucket.

D. Place the S3 bucket behind an API Gateway.

Answer

A. Place the S3 bucket behind a CloudFront distribution.

Using CloudFront can be more cost effective if your users access your objects frequently because, at higher usage, the price for CloudFront data transfer is lower than the price for Amazon S3 data transfer. In addition, downloads are faster with CloudFront than with Amazon S3 alone because your objects are stored closer to your users.

For more information on using Cloudfront with S3, please visit the below URL:

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/MigrateS3ToCloudFront.html




50. Your company currently stores documents in an S3 bucket. They want to transfer the files to a low-cost storage unit after a duration of 2 months to save on cost. Which of the following can be used to perform this activity automatically?

A. Use the events of the S3 bucket to transfer the files to Amazon Glacier

B. Use the events of the S3 bucket to transfer the files to EBS volumes – Cold HDD

C. Use the lifecycle policies of the S3 bucket to transfer the files to Amazon Glacier

D. Use the lifecycle policies of the S3 bucket to transfer the files to EBS volumes – Cold HDD

Answer

C. Use the lifecycle policies of the S3 bucket to transfer the files to Amazon Glacier

To manage your objects so that they are stored cost effectively throughout their lifecycle, configure their lifecycle. A lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group of objects. There are two types of actions:

– Transition actions—Define when objects transition to another storage class. For example, you might choose to transition objects to the STANDARD_IA storage class 30 days after you created them, or archive objects to the GLACIER storage class one year after creating them.

– Expiration actions—Define when objects expire. Amazon S3 deletes expired objects on your behalf.

Options B and D are incorrect because S3 lifecycle policies cannot move objects to EBS volumes; Cold HDD is an EBS volume type, not an S3 storage class.

Option A is incorrect because S3 event notifications signal object-level operations; lifecycle policies are the mechanism for time-based transitions.
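As a sketch of the correct approach, a lifecycle rule that transitions objects to Glacier after roughly 2 months (60 days) might look like the following in Python with boto3; the bucket name and rule ID are placeholders.

import boto3

s3 = boto3.client('s3')

# Transition every object (empty prefix) to Glacier 60 days after
# creation -- roughly the 2-month mark in the question.
s3.put_bucket_lifecycle_configuration(
    Bucket='company-document-bucket',  # placeholder bucket name
    LifecycleConfiguration={
        'Rules': [{
            'ID': 'archive-after-two-months',
            'Status': 'Enabled',
            'Filter': {'Prefix': ''},  # applies to the whole bucket
            'Transitions': [{'Days': 60, 'StorageClass': 'GLACIER'}]
        }]
    }
)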

For more information on lifecycle policies, please refer to the below URL:

https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html


52. Your company has a requirement to host a static web site in AWS. Which of the following steps would help implement a quick and cost-effective solution for this requirement? Choose 2 answers from the options given below. Each answer forms a part of the solution.

A. Upload the static content to an S3 bucket.

B. Create an EC2 Instance and install a web server.

C. Enable web site hosting for the S3 bucket.

D. Upload the code to the web server on the EC2 Instance.

Answer

A and C

You can host a static website on Amazon Simple Storage Service (Amazon S3). On a static website, individual webpages include static content. They might also contain client-side scripts.
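For illustration, website hosting can be enabled on a bucket as below (a Python/boto3 sketch; the bucket and document names are placeholders). The objects must also be made publicly readable, typically via a bucket policy.

import boto3

s3 = boto3.client('s3')

# Enable static website hosting on an existing bucket.
# index.html and error.html are assumed to be uploaded separately.
s3.put_bucket_website(
    Bucket='my-static-site-bucket',  # placeholder bucket name
    WebsiteConfiguration={
        'IndexDocument': {'Suffix': 'index.html'},
        'ErrorDocument': {'Key': 'error.html'}
    }
)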

For more information on static web site hosting using S3, please refer to the URL below.

https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html


53. Your company has confidential documents stored in the Simple Storage Service. Due to compliance requirements, there is a need for the data in the S3 bucket to be available in a different geographical location. As an architect, what change would you make to comply with this requirement?

A. Apply Multi-AZ for the underlying S3 bucket.

B. Copy the data to an EBS Volume in another region.

C. Create a snapshot of the S3 bucket and copy it to another region.

D. Enable Cross-Region Replication for the S3 bucket.

Answer

D. Enable Cross-Region Replication for the S3 bucket.

This is mentioned clearly as a use case for S3 Cross-Region Replication.

You might configure Cross-Region Replication on a bucket for various reasons, including the following:

Compliance requirements – Although, by default, Amazon S3 stores your data across multiple geographically distant Availability Zones, compliance requirements might dictate that you store data at even greater distances. Cross-Region Replication allows you to replicate data between distant AWS Regions to satisfy these compliance requirements.

For more information on S3 Cross-Region Replication, please visit the following URL:

https://docs.aws.amazon.com/AmazonS3/latest/dev/crr.html
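As an illustrative sketch (Python/boto3), a replication rule such as the one below copies new objects to a bucket in another region. Versioning must already be enabled on both buckets, and the role ARN and bucket names here are placeholders.

import boto3

s3 = boto3.client('s3')

# Both source and destination buckets must have versioning enabled
# before replication can be configured.
s3.put_bucket_replication(
    Bucket='compliance-docs-source',  # placeholder source bucket
    ReplicationConfiguration={
        # IAM role that grants S3 permission to replicate on your behalf.
        'Role': 'arn:aws:iam::123456789012:role/s3-replication-role',
        'Rules': [{
            'ID': 'replicate-for-compliance',
            'Status': 'Enabled',
            'Prefix': '',  # replicate all objects
            'Destination': {'Bucket': 'arn:aws:s3:::compliance-docs-eu'}
        }]
    }
)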


54. Your company has started hosting their data store on AWS using the Simple Storage Service. They are storing files which are downloaded by users on a frequent basis. After a duration of 3 months, the files need to be transferred to archive storage since they are not used beyond this point. Which of the following could be used to effectively manage this requirement?

A. Transfer the files via scripts from S3 to Glacier after a period of 3 months

B. Use Lifecycle policies to transfer the files onto Glacier after a period of 3 months

C. Use Lifecycle policies to transfer the files onto Cold HDD after a period of 3 months

D. Create a snapshot of the files in S3 after a period of 3 months

Answer

B. Use Lifecycle policies to transfer the files onto Glacier after a period of 3 months

To manage your objects so that they are stored cost effectively throughout their lifecycle, configure their lifecycle. A lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group of objects. There are two types of actions:

Transition actions—Define when objects transition to another storage class. For example, you might choose to transition objects to the STANDARD_IA storage class 30 days after you created them, or archive objects to the GLACIER storage class one year after creating them.

Expiration actions—Define when objects expire. Amazon S3 deletes expired objects on your behalf. The lifecycle expiration costs depend on when you choose to expire objects.

Option A is invalid since lifecycle policies already provide this capability natively; custom scripts are unnecessary.

Option C is invalid since Cold HDD is an EBS volume type; lifecycle policies transition objects to S3 storage classes such as S3 Standard-IA or Glacier.

Option D is invalid since snapshots apply to EBS volumes, not to S3 buckets.

For more information on S3 lifecycle policies, please visit the below URL:

https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html


55. Your company is planning on moving to the AWS Cloud. There is a strict compliance policy that mandates that data should be encrypted at rest. As an AWS Solutions Architect, you have been tasked to put the organization's data on the cloud and also ensure that all compliance requirements are met. Which of the below need to be part of the implementation plan to ensure compliance with the security requirements? Choose 2 answers from the options given below.

A. Ensure that all EBS volumes are encrypted

B. Ensure that server-side encryption is enabled for S3 buckets

C. Ensure that SSL is enabled for all load balancers

D. Ensure that the EC2 Security rules only allow HTTPS traffic

Answer

A. & B.

Amazon EBS encryption offers a simple encryption solution for your EBS volumes without the need to build, maintain, and secure your own key management infrastructure.

Server-side encryption protects data at rest. Server-side encryption with Amazon S3-managed encryption keys (SSE-S3) uses strong multi-factor encryption. Amazon S3 encrypts each object with a unique key.

Options C and D are invalid since these are used to manage encryption of data in transit.
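A minimal sketch of both measures in Python with boto3 follows; the Availability Zone, bucket name, and volume size are placeholder values.

import boto3

ec2 = boto3.client('ec2')
s3 = boto3.client('s3')

# Create an encrypted EBS volume (encryption at rest for block storage).
ec2.create_volume(
    AvailabilityZone='us-east-1a',  # placeholder AZ
    Size=100,                       # GiB
    Encrypted=True                  # uses the default KMS key unless KmsKeyId is given
)

# Turn on default server-side encryption (SSE-S3) for a bucket so that
# every new object is encrypted at rest.
s3.put_bucket_encryption(
    Bucket='company-data-bucket',  # placeholder bucket name
    ServerSideEncryptionConfiguration={
        'Rules': [{
            'ApplyServerSideEncryptionByDefault': {'SSEAlgorithm': 'AES256'}
        }]
    }
)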

For more information on Encryption of EBS volumes, please visit the URL below:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html

For more information on Encryption of S3 buckets, please visit the URL below:

https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html


56. Your company is planning to store sensitive documents in a bucket in the Simple Storage Service. They need to ensure that all objects are encrypted at rest in the bucket. Which of the following can help accomplish this? Choose 2 answers from the options given below.

A. Ensure that the default encryption is enabled for the S3 bucket

B. Ensure that the bucket policy is set to encrypt all objects that are added to the bucket

C. Ensure that the bucket ACL is set to encrypt all objects that are added to the bucket

D. Ensure to change the configuration of the bucket to use a KMS key to encrypt the objects

Answer

A. & D.

Options B and C are incorrect since bucket policies and ACLs control access to objects; they cannot themselves encrypt the objects.

The AWS Documentation mentions the following:

You have three mutually exclusive options depending on how you choose to manage the encryption keys:

– Use Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3) – Each object is encrypted with a unique key employing strong multi-factor encryption. As an additional safeguard, it encrypts the key itself with a master key that it regularly rotates.

– Use Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS) – Similar to SSE-S3, but with some additional benefits along with some additional charges for using this service. There are separate permissions for the use of an envelope key (that is, a key that protects your data’s encryption key) that provides added protection against unauthorized access of your objects in S3.

– Use Server-Side Encryption with Customer-Provided Keys (SSE-C) – You manage the encryption keys and Amazon S3 manages the encryption, as it writes to disks, and decryption, when you access your objects.
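As a sketch of options A and D together, default bucket encryption can be pointed at a KMS key (SSE-KMS). This is illustrative Python/boto3; the bucket name and key ARN below are placeholders.

import boto3

s3 = boto3.client('s3')

# Default encryption with a customer-managed KMS key (SSE-KMS):
# objects written without an explicit encryption header are
# automatically encrypted with this key.
s3.put_bucket_encryption(
    Bucket='sensitive-documents-bucket',  # placeholder bucket name
    ServerSideEncryptionConfiguration={
        'Rules': [{
            'ApplyServerSideEncryptionByDefault': {
                'SSEAlgorithm': 'aws:kms',
                'KMSMasterKeyID': 'arn:aws:kms:us-east-1:123456789012:key/EXAMPLE-KEY-ID'
            }
        }]
    }
)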

For more information on Server-Side Encryption, please refer to the below URL:

https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html


57. Your company is planning to store sensitive documents in a bucket in the Simple Storage Service. They want to keep the documents private but serve content to select users only for a particular time frame. Which of the following can help you accomplish this?

A. Enable CORS for the S3 bucket

B. Use KMS and enable encryption for the files

C. Create pre-signed URLs

D. Enable versioning for the S3 bucket

Answer

C. Create pre-signed URLs

A pre-signed URL gives you access to the object identified in the URL, provided that the creator of the pre-signed URL has permissions to access that object. That is, if you receive a pre-signed URL to upload an object, you can upload the object only if the creator of the pre-signed URL has the necessary permissions to upload that object.

All objects and buckets by default are private. The pre-signed URLs are useful if you want your user/customer to be able to upload a specific object to your bucket, but you don’t require them to have AWS security credentials or permissions. When you create a pre-signed URL, you must provide your security credentials and then specify a bucket name, an object key, an HTTP method (PUT for uploading objects), and an expiration date and time. The pre-signed URLs are valid only for the specified duration.

Option A is incorrect since CORS is used for cross-origin access.

Option B is incorrect since KMS is used for encryption purposes, not for controlling access duration.

Option D is incorrect since versioning preserves object versions; it does not provide time-limited access.
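For illustration, generating a time-limited pre-signed URL in Python with boto3 might look like this; the bucket and key names are placeholders.

import boto3

s3 = boto3.client('s3')

# The URL grants access to this one object and expires after an hour;
# the recipient needs no AWS credentials of their own.
url = s3.generate_presigned_url(
    ClientMethod='get_object',
    Params={'Bucket': 'private-documents', 'Key': 'reports/q1-summary.pdf'},
    ExpiresIn=3600  # validity in seconds
)

print(url)  # share this link with the selected users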

For more information on pre-signed URLs, please refer to the below URL:

https://docs.aws.amazon.com/AmazonS3/latest/dev/PresignedUrlUploadObject.html


58. Your company needs a data store in AWS for storing documents. These documents are not accessed frequently, but when a document is requested, it needs to be available within 20 minutes. Which of the following would be an ideal cost-effective data store?

A. S3 Infrequent Access

B. Glacier – Bulk retrieval

C. Glacier – Standard Retrieval

D. S3 Standard Storage

Answer

A. S3 Infrequent Access

Amazon S3 Standard-Infrequent Access (S3 Standard-IA) is an Amazon S3 storage class for data that is accessed less frequently but requires rapid access when needed. S3 Standard-IA offers the high durability, high throughput, and low latency of S3 Standard, with a low per GB storage price and per GB retrieval fee. This combination of low cost and high performance makes S3 Standard-IA ideal for long-term storage, backups, and as a data store for disaster recovery.

Options B and C are incorrect since their data retrieval times exceed 20 minutes and therefore do not meet the requirements of the question.

Option D is incorrect since it would not be a cost-effective option.
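As a small sketch, an object can be written directly to S3 Standard-IA by setting its storage class at upload time (Python/boto3; the names and body are placeholders).

import boto3

s3 = boto3.client('s3')

# Store the document in Standard-IA: lower storage cost than Standard,
# but still retrievable in milliseconds when requested.
s3.put_object(
    Bucket='company-data-store',            # placeholder bucket name
    Key='documents/annual-report.pdf',      # placeholder object key
    Body=b'...document bytes...',           # placeholder content
    StorageClass='STANDARD_IA'
)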

For more information on the different storage classes, please refer to the below URL:

https://aws.amazon.com/s3/storage-classes/


59. Your company needs to keep all system logs for audit purposes. These logs are rarely retrieved, but must be presented upon request within a week. The logs are 10 TB in size. Which option would be the most cost-effective one for storing all these system logs?

A. Amazon Glacier

B. S3 Reduced Redundancy Storage

C. EBS backed storage connected to EC2

D. AWS CloudFront

Answer

A. Amazon Glacier

Amazon Glacier is a secure, durable, and extremely low-cost cloud storage service for data archiving and long-term backup. It is designed to deliver 99.999999999% durability, and provides comprehensive security and compliance capabilities that can help meet even the most stringent regulatory requirements.

For more information on Amazon Glacier, please refer to the below URL:

https://aws.amazon.com/glacier/


60. Your IT Supervisor is worried about users accidentally deleting objects in an S3 bucket. Which of the following can help prevent accidental deletion of objects in an S3 bucket? Choose 2 answers from the options given below.

A. Enable encryption for the S3 bucket.

B. Enable MFA Delete on the S3 bucket.

C. Enable Versioning on the S3 bucket.

D. Enable IAM Roles on the S3 bucket.

Answer

B. & C.

When a user performs a DELETE operation on an object, subsequent simple (un-versioned) requests will no longer retrieve the object. However, all versions of that object will continue to be preserved in your Amazon S3 bucket and can be retrieved or restored.

Versioning’s MFA Delete capability, which uses multi-factor authentication, can be used to provide an additional layer of security. By default, all requests to your Amazon S3 bucket require your AWS account credentials. If you enable Versioning with MFA Delete on your Amazon S3 bucket, two forms of authentication are required to permanently delete a version of an object: your AWS account credentials and a valid six-digit code and serial number from an authentication device in your physical possession.
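A sketch of enabling both protections on a bucket follows (Python/boto3). Note that MFA Delete can only be enabled using the root account's credentials, and the MFA device serial and six-digit code below are placeholders.

import boto3

s3 = boto3.client('s3')

# Enable versioning together with MFA Delete. The MFA argument is the
# device serial number followed by the current six-digit code.
s3.put_bucket_versioning(
    Bucket='important-documents-bucket',  # placeholder bucket name
    MFA='arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456',
    VersioningConfiguration={
        'Status': 'Enabled',
        'MFADelete': 'Enabled'
    }
)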

For more information on the features of S3, please visit the following URL:

https://aws.amazon.com/s3/faqs/