
Amazon S3 (Simple Storage Service)


What Is Amazon S3?

Amazon Simple Storage Service is storage for the Internet. It is designed to make web-scale computing easier for developers.
Amazon S3 has a simple web services interface that you can use to store and retrieve any amount of data, at any time, from anywhere on the web. It gives any developer access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of web sites. The service aims to maximize benefits of scale and to pass those benefits on to developers.
This guide explains the core concepts of Amazon S3, such as buckets and objects, and how to work with these resources using the Amazon S3 application programming interface (API).


Amazon S3 offers a range of storage classes designed for different use cases. These include Amazon S3 Standard for general-purpose storage of frequently accessed data, Amazon S3 Standard - Infrequent Access for long-lived, but less frequently accessed data, and Amazon Glacier for long-term archive. Amazon S3 also offers configurable lifecycle policies for managing your data throughout its lifecycle. Once a policy is set, your data will automatically migrate to the most appropriate storage class without any changes to your application.

Amazon S3 Standard offers high durability, availability, and performance object storage for frequently accessed data. Because it delivers low latency and high throughput, Standard is perfect for a wide variety of use cases including cloud applications, dynamic websites, content distribution, mobile and gaming applications, and big data analytics. Lifecycle management offers configurable policies to automatically migrate objects to the most appropriate storage class.
Key Features:
  • Low latency and high throughput performance
  • Designed for durability of 99.999999999% of objects
  • Designed for 99.99% availability over a given year
  • Backed with the Amazon S3 Service Level Agreement for availability.
  • Supports SSL for data in transit and encryption of data at rest
  • Lifecycle management for automatic migration of objects

Amazon S3 Standard - Infrequent Access (Standard - IA) is an Amazon S3 storage class for data that is accessed less frequently, but requires rapid access when needed. Standard - IA offers the high durability, throughput, and low latency of Amazon S3 Standard, with a low per GB storage price and per GB retrieval fee. This combination of low cost and high performance make Standard - IA ideal for long-term storage, backups, and as a data store for disaster recovery. The Standard - IA storage class is set at the object level and can exist in the same bucket as Standard, allowing you to use lifecycle policies to automatically transition objects between storage classes without any application changes.
Key Features:
  • Same low latency and high throughput performance of Standard
  • Designed for durability of 99.999999999% of objects
  • Designed for 99.9% availability over a given year
  • Backed with the Amazon S3 Service Level Agreement for availability
  • Supports SSL for data in transit and encryption of data at rest
  • Lifecycle management for automatic migration of objects

Amazon Glacier is a secure, durable, and extremely low-cost storage service for data archiving. You can reliably store any amount of data at costs that are competitive with or cheaper than on-premises solutions. To keep costs low yet suitable for varying retrieval needs, Amazon Glacier provides three options for access to archives, from a few minutes to several hours. Amazon Glacier supports lifecycle policies for automatic migration between storage classes. Please see the Amazon Glacier page for more details.
Key Features:
  • Designed for durability of 99.999999999% of objects
  • Supports SSL for data in transit and encryption of data at rest
  • Vault Lock feature enforces compliance via a lockable policy
  • Extremely low cost design is ideal for long-term archive
  • Lifecycle management for automatic migration of objects

                             Standard            Standard - IA         Amazon Glacier
Designed for Durability      99.999999999%       99.999999999%         99.999999999%
Designed for Availability    99.99%              99.9%                 N/A
Availability SLA             99.9%               99%                   N/A
Minimum Object Size          N/A                 128KB*                N/A
Minimum Storage Duration     N/A                 30 days               90 days
Retrieval Fee                N/A                 per GB retrieved      per GB retrieved**
First Byte Latency           milliseconds        milliseconds          select minutes or hours***
Storage Class                object level        object level          object level
Lifecycle Transitions        yes                 yes                   yes

* Standard - IA has a minimum object size of 128KB. Smaller objects will be charged for 128KB of storage. 
** Amazon Glacier allows you to select from multiple retrieval tiers based upon your needs. Learn more.
Reduced Redundancy Storage (RRS) is an Amazon S3 storage option that enables customers to store noncritical, reproducible data at lower levels of redundancy than Amazon S3’s standard storage. For information about the Reduced Redundancy Storage (RRS) class, please see the S3 Reduced Redundancy detail page.

Amazon S3 Service Level Agreement



This Amazon S3 Service Level Agreement (“SLA”) is a policy governing the use of Amazon Simple Storage Service (“Amazon S3”) under the terms of the Amazon Web Services Customer Agreement (the “AWS Agreement”) between Amazon Web Services, Inc. and its affiliates (“AWS”, “us” or “we”) and users of AWS’ services (“you”). This SLA applies separately to each account using Amazon S3. Unless otherwise provided herein, this SLA is subject to the terms of the AWS Agreement and capitalized terms will have the meaning specified in the AWS Agreement. We reserve the right to change the terms of this SLA in accordance with the AWS Agreement.
AWS will use commercially reasonable efforts to make Amazon S3 available with the applicable Monthly Uptime Percentage (as defined below) during any monthly billing cycle (the “Service Commitment”). In the event Amazon S3 does not meet the Service Commitment, you will be eligible to receive a Service Credit as described below.
  • “Error Rate” means: (i) the total number of internal server errors returned by Amazon S3 as error status “InternalError” or “ServiceUnavailable” divided by (ii) the total number of requests for the applicable request type during that five minute period. We will calculate the Error Rate for each Amazon S3 account as a percentage for each five minute period in the monthly billing cycle. The calculation of the number of internal server errors will not include errors that arise directly or indirectly as a result of any of the Amazon S3 SLA Exclusions (as defined below).
  • “Monthly Uptime Percentage” is calculated by subtracting from 100% the average of the Error Rates from each five minute period in the monthly billing cycle.
  • A “Service Credit” is a dollar credit, calculated as set forth below, that we may credit back to an eligible Amazon S3 account.
Service Credits are calculated as a percentage of the total charges paid by you for Amazon S3 for the billing cycle in which the error occurred in accordance with the schedule below.
For all requests not otherwise specified below:
Monthly Uptime Percentage                             Service Credit Percentage
Equal to or greater than 99.0% but less than 99.9%    10%
Less than 99.0%                                       25%
For requests to Amazon S3 Standard – Infrequent Access (Standard-IA):
Monthly Uptime Percentage                             Service Credit Percentage
Equal to or greater than 98.0% but less than 99.0%    10%
Less than 98.0%                                       25%
We will apply any Service Credits only against future Amazon S3 payments otherwise due from you. At our discretion, we may issue the Service Credit to the credit card you used to pay for the billing cycle in which the error occurred. Service Credits will not entitle you to any refund or other payment from AWS. A Service Credit will be applicable and issued only if the credit amount for the applicable monthly billing cycle is greater than one dollar ($1 USD). Service Credits may not be transferred or applied to any other account. Unless otherwise provided in the AWS Agreement, your sole and exclusive remedy for any unavailability, non-performance, or other failure by us to provide Amazon S3 is the receipt of a Service Credit (if eligible) in accordance with the terms of this SLA.
To receive a Service Credit, you must submit a claim by opening a case in the AWS Support Center. To be eligible, the credit request must be received by us by the end of the second billing cycle after which the incident occurred and must include:
  1. the words “SLA Credit Request” in the subject line;
  2. the dates and times of each incident of non-zero Error Rates that you are claiming; and
  3. your request logs that document the errors and corroborate your claimed outage (any confidential or sensitive information in these logs should be removed or replaced with asterisks).
If the Monthly Uptime Percentage applicable to the month of such request is confirmed by us and is less than the applicable Service Commitment, then we will issue the Service Credit to you within one billing cycle following the month in which your request is confirmed by us. Your failure to provide the request and other information as required above will disqualify you from receiving a Service Credit.
The Service Commitment does not apply to any unavailability, suspension or termination of Amazon S3, or any other Amazon S3 performance issues: (i) that result from a suspension described in Section 6.1 of the AWS Agreement; (ii) caused by factors outside of our reasonable control, including any force majeure event or Internet access or related problems beyond the demarcation point of Amazon S3; (iii) that result from any actions or inactions of you or any third party; (iv) that result from your equipment, software or other technology and/or third party equipment, software or other technology (other than third party equipment within our direct control); or (v) arising from our suspension and termination of your right to use Amazon S3 in accordance with the AWS Agreement (collectively, the “Amazon S3 SLA Exclusions”). If availability is impacted by factors other than those used in our calculation of the Error Rate, then we may issue a Service Credit considering such factors at our discretion.

Amazon S3 stores data as objects within resources called "buckets". You can store as many objects as you want within a bucket, and write, read, and delete objects in your bucket. Objects can be up to 5 terabytes in size.
You can control access to the bucket (who can create, delete, and retrieve objects in the bucket for example), view access logs for the bucket and its objects, and choose the AWS region where a bucket is stored to optimize for latency, minimize costs, or address regulatory requirements.
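The same basic object operations are available programmatically through the AWS SDKs. The following is a minimal sketch using the AWS SDK for Python (Boto3); the bucket and key names are hypothetical, and it assumes your AWS credentials and default Region are already configured.

import boto3

s3 = boto3.client("s3")

# Store an object in an existing bucket (hypothetical bucket and key names)
s3.put_object(Bucket="example-bucket", Key="notes/hello.txt", Body=b"Hello, S3!")

# Read the object back
response = s3.get_object(Bucket="example-bucket", Key="notes/hello.txt")
print(response["Body"].read())

# Delete the object
s3.delete_object(Bucket="example-bucket", Key="notes/hello.txt")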

How Do I Create an S3 Bucket?

Before you can upload data to Amazon S3, you must create a bucket in one of the AWS Regions to store your data in. After you create a bucket, you can upload an unlimited number of data objects to the bucket.
Buckets have configuration properties, including their geographical region, who has access to the objects in the bucket, and other metadata.
To create an S3 bucket
  1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
  2. Choose Create bucket.
    
  3. On the Name and region page, type a name for your bucket and choose the AWS Region where you want the bucket to reside. Complete the fields on this page as follows:
    1. For Bucket name, type a unique DNS-compliant name for your new bucket. Follow these naming guidelines:
      • The name must be unique across all existing bucket names in Amazon S3.
      • The name must not contain uppercase characters.
      • The name must start with a lowercase letter or number.
      • The name must be between 3 and 63 characters long.
      • After you create the bucket you cannot change the name, so choose wisely.
      • Choose a bucket name that reflects the objects in the bucket because the bucket name is visible in the URL that points to the objects that you're going to put in your bucket.
      For information about naming buckets, see Rules for Bucket Naming in the Amazon Simple Storage Service Developer Guide.
    2. For Region, choose the AWS Region where you want the bucket to reside. Choose a Region close to you to minimize latency and costs, or to address regulatory requirements. Objects stored in a Region never leave that Region unless you explicitly transfer them to another Region. For a list of Amazon S3 AWS Regions, see Regions and Endpoints in the Amazon Web Services General Reference.
    3. (Optional) If you have already set up a bucket that has the same settings that you want to use for the new bucket that you want to create, you can set it up quickly by choosing Copy settings from an existing bucket, and then choosing the bucket whose settings you want to copy.
      The settings for the following bucket properties are copied: versioning, tags, and logging.
    4. Do one of the following:
      • If you copied settings from another bucket, choose Create. You're done, so skip the following steps.
      • If not, choose Next.
    
  4. On the Set properties page, you can configure the following properties for the bucket. Or, you can configure these properties later, after you create the bucket.
    1. Versioning – Versioning enables you to keep multiple versions of an object in one bucket. Versioning is disabled for a new bucket by default. For information on enabling versioning, see How Do I Enable or Suspend Versioning for an S3 Bucket?.
    2. Server access logging – Server access logging provides detailed records for the requests that are made to your bucket. By default, Amazon S3 does not collect server access logs. For information about enabling server access logging, see How Do I Enable Server Access Logging for an S3 Bucket?.
    3. Tags – With AWS cost allocation, you can use tags to annotate billing for your use of a bucket. A tag is a key-value pair that represents a label that you assign to a bucket. To add tags, choose Tags, and then choose Add tag. For more information, see Using Cost Allocation Tags for S3 Buckets in the Amazon Simple Storage Service Developer Guide.
    4. Object-level logging – Object-level logging records object-level API activity by using CloudTrail data events. For information about enabling object-level logging, see How Do I Enable Object-Level Logging for an S3 Bucket with AWS CloudTrail Data Events?.
    
  5. Choose Next.
  6. On the Set permissions page, you manage the permissions that are set on the bucket that you are creating. You can grant read access to your bucket to the general public (everyone in the world). Granting public read access is applicable to a small subset of use cases such as when buckets are used for websites. We recommend that you do not change the default setting of Do not grant public read access to this bucket. You can change permissions after you create the bucket. For more information about setting bucket permissions, see How Do I Set ACL Bucket Permissions?.
    Warning
    We highly recommend that you do not grant public read access to the bucket that you are creating. Granting public read access permissions means that anyone in the world can access the objects in the bucket.
    When you're done configuring permissions on the bucket, choose Next.
  7. On the Review page, verify the settings. If you see something you want to change, choose Edit. If your current settings are correct, choose Create bucket.
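If you prefer to script bucket creation rather than use the console wizard, a Boto3 sketch along the following lines should work; the bucket name and Region are placeholders, and this is an illustration rather than the documented console procedure above.

import boto3

s3 = boto3.client("s3")

# Create a bucket in a specific Region (name must be globally unique and DNS-compliant)
s3.create_bucket(
    Bucket="example-bucket-12345",
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)
# Note: for the US East (N. Virginia) Region, omit CreateBucketConfiguration entirely.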


How Do I Delete an S3 Bucket?

You can delete a bucket and all of the objects contained in the bucket. You can also delete an empty bucket. When you delete a bucket with versioning enabled, all versions of all the objects in the bucket are deleted. For more information, see Managing Objects in a Versioning-Enabled Bucket and Deleting/Emptying a Bucket in the Amazon Simple Storage Service Developer Guide.
Important
If you want to continue to use the same bucket name, don't delete the bucket. We recommend that you empty the bucket and keep it. After a bucket is deleted, the name becomes available to reuse, but the name might not be available for you to reuse for various reasons. For example, it might take some time before the name can be reused and some other account could create a bucket with that name before you do.
To delete an S3 bucket
  1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
  2. In the Bucket name list, choose the bucket icon next to the name of the bucket that you want to delete and then choose Delete bucket.
  3. In the Delete bucket dialog box, type the name of the bucket that you want to delete for confirmation and then choose Confirm.
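The equivalent API call, sketched with Boto3 (the bucket name is a placeholder); the call fails unless the bucket is already empty.

import boto3

s3 = boto3.client("s3")

# Delete the bucket; all objects and object versions must be removed first
s3.delete_bucket(Bucket="example-bucket-12345")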

How Do I Empty an S3 Bucket?

You can empty a bucket, which deletes all of the objects in the bucket without deleting the bucket. When you empty a bucket with versioning enabled, all versions of all the objects in the bucket are deleted. For more information, see Managing Objects in a Versioning-Enabled Bucket and Deleting/Emptying a Bucket in the Amazon Simple Storage Service Developer Guide.
To empty an S3 bucket
  1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
  2. In the Bucket name list, choose the bucket icon next to the name of the bucket that you want to empty and then choose Empty bucket.
  3. In the Empty bucket dialog box, type the name of the bucket you want to empty for confirmation and then choose Confirm.
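With Boto3, the resource interface can batch-delete everything in a bucket, as in the sketch below; the bucket name is a placeholder, and the version-aware call matters only if versioning has ever been enabled.

import boto3

bucket = boto3.resource("s3").Bucket("example-bucket-12345")

# Delete all current objects in the bucket
bucket.objects.all().delete()

# If versioning was ever enabled, also delete all object versions and delete markers
bucket.object_versions.delete()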

How Do I View the Properties for an S3 Bucket?

This topic explains how to view the properties for an S3 bucket.
To view the properties for an S3 bucket
  1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
  2. In the Bucket name list, choose the name of the bucket that you want to view the properties for.
  3. Choose Properties.
  4. On the Properties page, you can configure the following properties for the bucket.
    1. Versioning – Versioning enables you to keep multiple versions of an object in one bucket. By default, versioning is disabled for a new bucket. For information about enabling versioning, see How Do I Enable or Suspend Versioning for an S3 Bucket?.
    2. Server access logging – Server access logging provides detailed records for the requests that are made to your bucket. By default, Amazon S3 does not collect server access logs. For information about enabling server access logging, see How Do I Enable Server Access Logging for an S3 Bucket?.
    3. Static website hosting – You can host a static website on Amazon S3. To enable static website hosting, choose Static website hosting and then specify the settings you want to use. For more information, see How Do I Configure an S3 Bucket for Static Website Hosting?.
    4. Object-level logging – Object-level logging records object-level API activity by using CloudTrail data events. For information about enabling object-level logging, see How Do I Enable Object-Level Logging for an S3 Bucket with AWS CloudTrail Data Events?.
    5. Tags – With AWS cost allocation, you can use bucket tags to annotate billing for your use of a bucket. A tag is a key-value pair that represents a label that you assign to a bucket. To add tags, choose Tags, and then choose Add tag. For more information, see Using Cost Allocation Tags for S3 Buckets in the Amazon Simple Storage Service Developer Guide.
    6. Transfer acceleration – Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. For information about enabling transfer acceleration, see How Do I Enable Transfer Acceleration for an S3 Bucket?.
    7. Events – You can enable certain Amazon S3 bucket events to send a notification message to a destination whenever the events occur. To enable events, choose Events and then specify the settings you want to use. For more information, see How Do I Enable and Configure Event Notifications for an S3 Bucket?.
    8. Requester Pays – You can enable Requester Pays so that the requester (instead of the bucket owner) pays for requests and data transfers. For more information, see Requester Pays Buckets in the Amazon Simple Storage Service Developer Guide.

How Do I Enable or Suspend Versioning for an S3 Bucket?

Versioning enables you to keep multiple versions of an object in one bucket. This section describes how to enable object versioning on a bucket. For more information about versioning support in Amazon S3, see Object Versioning and Using Versioning in the Amazon Simple Storage Service Developer Guide.
To enable or disable versioning on an S3 bucket
  1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
  2. In the Bucket name list, choose the name of the bucket that you want to enable versioning for.
  3. Choose Properties.
  4. Choose Versioning.
  5. Choose Enable versioning or Suspend versioning, and then choose Save.
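The same setting can be applied with a single API call; a Boto3 sketch with a placeholder bucket name follows. Use a Status of "Suspended" to suspend versioning instead.

import boto3

s3 = boto3.client("s3")

# Enable versioning on the bucket
s3.put_bucket_versioning(
    Bucket="example-bucket-12345",
    VersioningConfiguration={"Status": "Enabled"},
)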

How Do I Enable Server Access Logging for an S3 Bucket?

Server access logging provides detailed records for the requests made to a bucket. Server access logs are useful for many applications because they give bucket owners insight into the nature of requests made by clients not under their control. By default, Amazon Simple Storage Service (Amazon S3) doesn't collect server access logs. This topic describes how to enable logging for a bucket. For more information, see Server Access Logging in the Amazon Simple Storage Service Developer Guide.
When you enable logging, Amazon S3 delivers access logs to a target bucket that you choose. An access log record contains details about the requests made to a bucket. This can include the request type, the resources specified in the request, and the time and date the request was processed. For more information, see Server Access Log Format in the Amazon Simple Storage Service Developer Guide.
Important
There is no extra charge for enabling server access logging on an Amazon S3 bucket. However, any log files that the system delivers to you will accrue the usual charges for storage. (You can delete the log files at any time.) We do not assess data transfer charges for log file delivery, but we do charge the normal data transfer rate for accessing the log files.
To enable server access logging for an S3 bucket
  1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
  2. In the Bucket name list, choose the name of the bucket that you want to enable server access logging for.
  3. Choose Properties.
  4. Choose Server access logging.
  5. Choose Enable Logging. For Target, choose the name of the bucket that you want to receive the log record objects.
  6. (Optional) For Target prefix, type a key name prefix for log objects, so that all of the log objects begin with the same string.
  7. Choose Save.
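A Boto3 sketch of the same configuration follows; the bucket names and prefix are placeholders, and the target bucket must already allow the S3 log delivery service to write to it (for example, through its ACL or a bucket policy).

import boto3

s3 = boto3.client("s3")

# Deliver access logs for the source bucket to a target bucket under the "logs/" prefix
s3.put_bucket_logging(
    Bucket="example-source-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "example-log-bucket",
            "TargetPrefix": "logs/",
        }
    },
)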

How Do I Enable Object-Level Logging for an S3 Bucket with AWS CloudTrail Data Events?

This section describes how to enable an AWS CloudTrail trail to log data events for objects in an S3 bucket by using the Amazon S3 console. CloudTrail supports logging Amazon S3 object-level API operations such as GetObject, DeleteObject, and PutObject. These events are called data events. By default, CloudTrail trails don't log data events, but you can configure trails to log data events for S3 buckets that you specify, or to log data events for all the Amazon S3 buckets in your AWS account.
Important
Additional charges apply for data events. For more information, see AWS CloudTrail Pricing.
To configure a trail to log data events for an S3 bucket, you can use either the AWS CloudTrail console or the Amazon S3 console. If you are configuring a trail to log data events for all the Amazon S3 buckets in your AWS account, it's easier to use the CloudTrail console. For information about using the CloudTrail console to configure a trail to log S3 data events, see Data Events in the AWS CloudTrail User Guide.
The following procedure shows how to use the Amazon S3 console to enable a CloudTrail trail to log data events for an S3 bucket.
To enable CloudTrail data events logging for objects in an S3 bucket
  1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
  2. In the Bucket name list, choose the name of the bucket that you want.
    
  3. Choose Properties.
    
  4. Choose Object-level logging.
    
  5. Choose an existing CloudTrail trail in the drop-down menu. The trail you select must be in the same AWS Region as your bucket, so the drop-down list contains only trails that are in the same Region as the bucket or trails that were created for all Regions.
    If you need to create a trail, choose the CloudTrail console link to go to the CloudTrail console. For information about how to create trails in the CloudTrail console, see Creating a Trail with the Console in the AWS CloudTrail User Guide.
    
  6. Under Events, select Read to specify that you want CloudTrail to log Amazon S3 read APIs such as GetObject. Select Write to log Amazon S3 write APIs such as PutObject. Select both Read and Write to log both read and write object APIs. For a list of supported data events that CloudTrail logs for Amazon S3 objects, see Amazon S3 Object-Level Actions Tracked by CloudTrail Logging in the Amazon Simple Storage Service Developer Guide.
    
  7. Choose Create to enable object-level logging for the bucket.
    
    To disable object-level logging for the bucket, you must go to the CloudTrail console and remove the bucket name from the trail's Data events.
    Note
    If you use the CloudTrail console or the Amazon S3 console to configure a trail to log data events for an S3 bucket, the Amazon S3 console shows that object-level logging is enabled for the bucket.
For information about enabling object-level logging when you create an S3 bucket, see How Do I Create an S3 Bucket?.
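Outside the console, data event logging is configured on the trail itself. A rough Boto3 sketch is shown below; the trail and bucket names are placeholders, and the selector options shown are only one possible combination.

import boto3

cloudtrail = boto3.client("cloudtrail")

# Log read and write data events for objects in one S3 bucket on an existing trail
cloudtrail.put_event_selectors(
    TrailName="example-trail",
    EventSelectors=[
        {
            "ReadWriteType": "All",            # or "ReadOnly" / "WriteOnly"
            "IncludeManagementEvents": True,
            "DataResources": [
                {"Type": "AWS::S3::Object", "Values": ["arn:aws:s3:::example-bucket/"]}
            ],
        }
    ],
)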


How Do I Configure an S3 Bucket for Static Website Hosting?

You can host a static website on Amazon S3. On a static website, individual web pages include static content and they might also contain client-side scripts. By contrast, a dynamic website relies on server-side processing, including server-side scripts such as PHP, JSP, or ASP.NET. Amazon S3 does not support server-side scripting.
The following is a quick procedure to configure an Amazon S3 bucket for static website hosting in the S3 console. If you’re looking for more in-depth information, as well as walkthroughs on using a custom domain name for your static website or speeding up your website, see Hosting a Static Website on Amazon S3 in the Amazon Simple Storage Service Developer Guide.
To configure an S3 bucket for static website hosting
  1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
  2. In the Bucket name list, choose the name of the bucket that you want to enable static website hosting for.
  3. Choose Properties.
  4. Choose Static website hosting.
    After you enable your bucket for static website hosting, web browsers can access all of your content through the Amazon S3 website endpoint for your bucket.
  5. Choose Use this bucket to host.
    1. For Index Document, type the name of the index document, which is typically named index.html. When you configure a bucket for website hosting, you must specify an index document. Amazon S3 returns this index document when requests are made to the root domain or any of the subfolders. For more information, see Configure a Bucket for Website Hosting in the Amazon Simple Storage Service Developer Guide.
    2. (Optional) For 4XX class errors, you can provide your own custom error document that gives additional guidance to your users. For Error Document, type the name of the file that contains the custom error document. If an error occurs, Amazon S3 returns this HTML error document. For more information, see Custom Error Document Support in the Amazon Simple Storage Service Developer Guide.
    3. (Optional) If you want to specify advanced redirection rules, in the Edit redirection rules text area, use XML to describe the rules. For example, you can conditionally route requests according to specific object key names or prefixes in the request. For more information, see Configure a Bucket for Website Hosting in the Amazon Simple Storage Service Developer Guide.
  6. Choose Save.
  7. Add a bucket policy to the website bucket that grants everyone access to the objects in the bucket. When you configure a bucket as a website, you must make the objects that you want to serve publicly readable. To do so, you write a bucket policy that grants everyone s3:GetObject permission. The following example bucket policy grants everyone access to the objects in the example-bucket bucket.
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "PublicReadGetObject",
          "Effect": "Allow",
          "Principal": "*",
          "Action": ["s3:GetObject"],
          "Resource": ["arn:aws:s3:::example-bucket/*"]
        }
      ]
    }
    For information about adding a bucket policy, see How Do I Add an S3 Bucket Policy?. For more information about website permissions, see Permissions Required for Website in the Amazon Simple Storage Service Developer Guide.
Note
If you choose Disable website hosting, Amazon S3 removes the website configuration from the bucket, so that the bucket is no longer accessible from the website endpoint. However, the bucket is still available at the REST endpoint. For a list of Amazon S3 endpoints, see Amazon S3 Regions and Endpoints in the Amazon Web Services General Reference.
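The website configuration can also be applied programmatically; the following Boto3 sketch uses placeholder bucket and document names.

import boto3

s3 = boto3.client("s3")

# Enable static website hosting with an index document and a custom error document
s3.put_bucket_website(
    Bucket="example-bucket",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)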

How Do I Redirect Requests to an S3 Bucket Hosted Website to Another Host?

You can redirect all requests to your S3 bucket hosted static website to another host.
To redirect all requests to an S3 bucket's website endpoint to another host
  1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
  2. In the Bucket name list, choose the name of the bucket that you want to redirect all requests from.
  3. Choose Properties.
  4. Choose Static website hosting.
  5. Choose Redirect requests.
    1. For Target bucket or domain, type the name of the bucket or the domain name where you want requests to be redirected. To redirect requests to another bucket, type the name of the target bucket. For example, if you are redirecting to a root domain address, you would type www.example.com. For more information, see Configure a Bucket for Website Hosting in the Amazon Simple Storage Service Developer Guide.
    2. For Protocol, type the protocol (http, https) for the redirected requests. If no protocol is specified, the protocol of the original request is used. If you redirect all requests, any request made to the bucket's website endpoint will be redirected to the specified host name.
  6. Choose Save.
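Programmatically, the same behavior is expressed as a RedirectAllRequestsTo website configuration; this Boto3 sketch assumes hypothetical bucket and host names.

import boto3

s3 = boto3.client("s3")

# Redirect every request to the bucket's website endpoint to another host
s3.put_bucket_website(
    Bucket="example-bucket",
    WebsiteConfiguration={
        "RedirectAllRequestsTo": {
            "HostName": "www.example.com",
            "Protocol": "https",  # optional; the original request protocol is used if omitted
        }
    },
)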

Uploading, Downloading, and Managing Objects

Amazon S3 is cloud storage for the Internet. To upload your data (photos, videos, documents etc.), you first create a bucket in one of the AWS Regions. You can then upload an unlimited number of data objects to the bucket.
The data that you store in Amazon S3 consists of objects. Every object resides within a bucket that you create in a specific AWS Region.
Objects stored in a region never leave the region unless you explicitly transfer them to another region. For example, objects stored in the EU (Ireland) region never leave it. The objects stored in an AWS region physically remain in that region. Amazon S3 does not keep copies of objects or move them to any other region. However, you can access the objects from anywhere, as long as you have necessary permissions to do so.
Before you can upload an object into Amazon S3, you must have write permissions to a bucket.
Objects can be any file type: images, backups, data, movies, etc. The maximum size of file you can upload by using the Amazon S3 console is 78GB. You can have an unlimited number of objects in a bucket.
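For uploads and downloads from code, the Boto3 managed transfer methods switch to multipart transfers automatically for large files. A minimal sketch with placeholder file paths, bucket, and key names:

import boto3

s3 = boto3.client("s3")

# Upload a local file (multipart upload is used automatically for large files)
s3.upload_file("photos/beach.jpg", "example-bucket", "photos/beach.jpg")

# Download it back to a local path
s3.download_file("example-bucket", "photos/beach.jpg", "/tmp/beach.jpg")

# Generate a time-limited URL that allows a download without AWS credentials
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-bucket", "Key": "photos/beach.jpg"},
    ExpiresIn=3600,
)
print(url)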
The following topics explain how to use the Amazon S3 console to upload, delete, and manage objects.

Storage Management

This section explains how to configure Amazon S3 storage management tools.

Setting Bucket and Object Access Permissions

The topics in this section explain how to use the Amazon S3 console to grant access permissions to your buckets and objects by using resource-based access policies. An access policy describes who has access to resources. You can associate an access policy with a resource.
Buckets and objects are Amazon Simple Storage Service (Amazon S3) resources. By default, all Amazon S3 resources are private, which means that only the resource owner can access the resource. The resource owner is the AWS account that creates the resource. For more information about resource ownership and access policies, see Overview of Managing Access in the Amazon Simple Storage Service Developer Guide.
Bucket access permissions specify which users are allowed access to the objects in a bucket and which types of access they have. Object access permissions specify which users are allowed access to the object and which types of access they have. For example, one user might have only read permission, while another might have read and write permissions.
Bucket and object permissions are independent of each other. An object does not inherit the permissions from its bucket. For example, if you create a bucket and grant write access to a user, you will not be able to access that user’s objects unless the user explicitly grants you access.
To grant access to your buckets and objects to other AWS accounts and to the general public, you use resource-based access policies called access control lists (ACLs).
A bucket policy is a resource-based AWS Identity and Access Management (IAM) policy that grants other AWS accounts or IAM users access to an S3 bucket. Bucket policies supplement, and in many cases replace, ACL-based access policies. For more information on using IAM with Amazon S3, see Managing Access Permissions to Your Amazon S3 Resources in the Amazon Simple Storage Service Developer Guide.
For more in-depth information about managing access permissions, see Introduction to Managing Access Permissions to Your Amazon S3 Resources in the Amazon Simple Storage Service Developer Guide.
This section also explains how to use the Amazon S3 console to add a cross-origin resource sharing (CORS) configuration to an S3 bucket. CORS allows client web applications that are loaded in one domain to interact with resources in another domain.
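As a concrete illustration of a CORS configuration, something like the following Boto3 sketch could be attached to a bucket; the origin and bucket name are placeholders.

import boto3

s3 = boto3.client("s3")

# Allow a web application served from www.example.com to GET objects in this bucket
s3.put_bucket_cors(
    Bucket="example-bucket",
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedOrigins": ["https://www.example.com"],
                "AllowedMethods": ["GET"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,
            }
        ]
    },
)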

This set of Interview Questions & Answers focuses on “Amazon S3 – Simple Storage Services”.
1. Which of the following is a method for bidding on unused EC2 capacity based on the current spot price?
a) On-Demand Instance
b) Reserved Instances
c) Spot Instance
d) All of the mentioned
Answer: c
Explanation: This feature offers a significantly lower price, but it varies over time or may not be available when there is no excess capacity.
2. Point out the wrong statement:
a) The standard instances are not suitable for standard server applications
b) High memory instances are useful for large data throughput applications such as SQL Server databases and data caching and retrieval
c) FPS is exposed as an API that sorts transactions into packages called Quick Starts that makes it easy to implement
d) None of the mentioned
Answer: a
Explanation: The standard instances are deemed to be suitable for standard server applications.
3. Which of the following instances has an hourly rate with no long-term commitment?
a) On-Demand Instance
b) Reserved Instances
c) Spot Instance
d) All of the mentioned
Answer: a
Explanation: Pricing varies by zone, instance, and pricing model.
4. Which of the following is a batch processing application?
a) IBM sMash
b) IBM WebSphere Application Server
c) Condor
d) Windows Media Server
Answer: c
Explanation: Condor is a powerful, distributed batch-processing system that lets you use otherwise idle CPU cycles in a cluster of workstations.
5. Point out the correct statement:
a) Security can be set through passwords, Kerberos tickets, or certificates
b) Secure access to your EC2 AMIs is controlled by passwords, Kerberos, and X.509 certificates
c) Most of the system image templates that Amazon AWS offers are based on Red Hat Linux
d) All of the mentioned
Answer: d
Explanation: Hundreds of free and paid AMIs can be found on AWS.
6. How many EC2 service zones or regions exist?
a) 1
b) 2
c) 3
d) 4
Answer: d
Explanation: There are four different EC2 service zones or regions.
7. Amazon ______ cloud-based storage system allows you to store data objects ranging in size from 1 byte up to 5GB.
a) S1
b) S2
c) S3
d) S4
Answer: c
Explanation: In S3, storage containers are referred to as buckets.
8. Which of the following can be done with S3 buckets through the SOAP and REST APIs?
a) Upload new objects to a bucket and download them
b) Create, edit, or delete existing buckets
c) Specify where a bucket should be stored
d) All of the mentioned
9. Which of the following operations retrieves the newest version of the object?
a) PUT
b) GET
c) POST
d) COPY
Answer: b
Explanation: Versioning also can be used for preserving data and for archiving purposes.
10. Which of the following statements is wrong about Amazon S3?
a) Amazon S3 is highly reliable
b) Amazon S3 provides large quantities of reliable storage that is highly protected
c) Amazon S3 is highly available
d) None of the mentioned
Answer: c
Explanation: S3 excels in applications where storage is archival in nature.

Amazon S3 is designed as a complete storage platform. Consider the value included with every GB:
Simplicity. Amazon S3 is built for simplicity, with a web-based management console, mobile app, and full REST APIs and SDKs for easy integration with third party technologies.
Durability. Amazon S3 is available in regions around the world, and includes geographic redundancy within each region as well as the option to replicate across regions. In addition, multiple versions of an object may be preserved for point-in-time recovery.
Scalability. Customers around the world depend on Amazon S3 to safeguard trillions of objects every day. Costs grow and shrink on demand, and global deployments can be done in minutes. Industries like financial services, healthcare, media, and entertainment use it to build big data, analytics, transcoding, and archive applications.
Security. Amazon S3 supports data transfer over SSL and automatic encryption of your data once it is uploaded. You can also configure bucket policies to manage object permissions and control access to your data using AWS Identity and Access Management (IAM).
Broad integration. Amazon S3 is designed to integrate directly with other AWS services for security (IAM and KMS), alerting (CloudWatch, CloudTrail, and Event Notifications), computing (Lambda), and database (EMR, Redshift).
Cloud Data Migration options. AWS storage includes multiple specialized methods to help you get data into and out of the cloud.
Enterprise-class Storage Management. S3 Storage Management features allow you to take a data-driven approach to storage optimization, data security, and management efficiency. 

Amazon S3 Storage Management features allow customers to take a data-driven approach to storage optimization, compliance, and management efficiency. These features work together to help improve workload performance, facilitate compliance, streamline business process workflows, and enable more intelligent storage tiering to optimize storage costs and performance.
Amazon S3 is accessed simply through the S3 Console, SDKs, or ISV integration. S3 is supported by the AWS SDKs for Java, PHP, .NET, Python, Node.js, Ruby, and the AWS Mobile SDK. The SDK libraries wrap the underlying REST API, simplifying your programming tasks.

Amazon provides multiple options for cloud data migration, making it simple and cost-effective to move large volumes of data into and out of Amazon S3. Customers can choose from network-optimized, physical disk-based, or third-party connector methods for transfer into or out of S3.

In addition to S3 Standard, there is a lower-cost Standard - Infrequent Access option for infrequently accessed data, and Amazon Glacier for archiving cold data at the lowest possible cost.

Amazon S3 provides durable infrastructure to store important data and is designed for durability of 99.999999999% of objects. Your data is redundantly stored across multiple facilities and multiple devices in each facility.

Amazon S3 provides several mechanisms to control and monitor who can access your data as well as how, when, and where they can access it. VPC endpoints allow you to create a secure connection without a gateway or NAT instances.

AWS S3 Create Step by Step

Create a Bucket

Now that you've signed up for Amazon S3, you're ready to create a bucket using the AWS Management Console. Every object in Amazon S3 is stored in a bucket. Before you can store data in Amazon S3, you must create a bucket.
Note
You are not charged for creating a bucket; you are charged only for storing objects in the bucket and for transferring objects in and out of the bucket. The charges you will incur through following the examples in this guide are minimal (less than $1). For more information about storage charges, see Amazon S3 Pricing.
To create an S3 bucket
  1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
  2. Choose Create bucket.
  3. In the Bucket name field, type a unique DNS-compliant name for your new bucket. (The example uses the bucket name admin-created; you cannot reuse this name because S3 bucket names must be unique.) Create your own bucket name using the following naming guidelines:
    • The name must be unique across all existing bucket names in Amazon S3.
    • After you create the bucket you cannot change the name, so choose wisely.
    • Choose a bucket name that reflects the objects in the bucket because the bucket name is visible in the URL that points to the objects that you're going to put in your bucket.
    For information about naming buckets, see Rules for Bucket Naming in the Amazon Simple Storage Service Developer Guide.
  4. For Region, choose US West (Oregon) as the region where you want the bucket to reside.
  5. Choose Create.
You've created a bucket in Amazon S3.

Add an Object to a Bucket

Now that you've created a bucket, you're ready to add an object to it. An object can be any kind of file: a text file, a photo, a video, and so on.
To upload an object to a bucket
  1. In the Bucket name list, choose the name of the bucket that you want to upload your object to.
  2. Choose Upload.
    1. Or you can choose Get started.
  3. In the Upload dialog box, choose Add files to choose the file to upload.
  4. Choose a file to upload, and then choose Open.
  5. Choose Upload.

View an Object

Now that you've added an object to a bucket, you can view information about your object and download the object to your local computer.
To download an object from a bucket
  1. In the Bucket name list, choose the name of the bucket that you created.
  2. In the Name list, select the check box next to the object that you uploaded, and then choose Download on the object overview panel.

Move an Object

So far you've added an object to a bucket and downloaded the object. Now we create a folder and copy the object into the folder.
To copy an object
  1. In the Bucket name list, choose the name of the bucket that you created.
  2. Choose Create Folder, type favorite-pics for the folder name, and then choose Save.
  3. In the Name list, select the check box next to the object that you want to copy, choose More, and then choose Copy.
  4. In the Name list, choose the name of the folder favorite-pics.
  5. Choose More, and then choose Paste.
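The console copy/paste above corresponds to a single CopyObject call; S3 has no real folders, so the folder is simply a key name prefix. A Boto3 sketch with hypothetical names:

import boto3

s3 = boto3.client("s3")

# Copy the object under the favorite-pics/ prefix (the "folder")
s3.copy_object(
    Bucket="example-bucket",
    Key="favorite-pics/photo.jpg",
    CopySource={"Bucket": "example-bucket", "Key": "photo.jpg"},
)

# To turn the copy into a move, delete the original afterwards:
# s3.delete_object(Bucket="example-bucket", Key="photo.jpg")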

Delete an Object and Bucket

If you no longer need to store the object that you uploaded and made a copy of while going through this guide, you should delete the objects to prevent further charges.
You can delete the objects individually. Or you can empty a bucket, which deletes all the objects in the bucket without deleting the bucket.
You can also delete a bucket and all the objects contained in the bucket. However, if you want to continue to use the same bucket name, don't delete the bucket. We recommend that you empty the bucket and keep it. After a bucket is deleted, the name becomes available to reuse, but the name might not be available for you to reuse for various reasons. For example, it might take some time before the name can be reused and some other account could create a bucket with that name before you do.
To delete an object from a bucket
  1. In the Bucket name list, choose the name of the bucket that you want to delete an object from.
  2. In the Name list, select the check box next to the object that you want to delete, choose More, and then choose Delete.
  3. In the Delete objects dialog box, verify that the name of the object you selected for deletion is listed, and then choose Delete.
You can empty a bucket, which deletes all the objects in the bucket without deleting the bucket.
To empty a bucket
  1. In the Bucket name list, choose the bucket icon next to the name of the bucket that you want to empty and then choose Empty bucket.
  2. In the Empty bucket dialog box, type the name of the bucket for confirmation and then choose Confirm.
You can delete a bucket and all the objects contained in the bucket.
Important
If you want to continue to use the same bucket name, don't delete the bucket. We recommend that you empty the bucket and keep it. After a bucket is deleted, the name becomes available to reuse, but the name might not be available for you to reuse for various reasons.
To delete a bucket
  1. In the Bucket name list, choose the bucket icon next to the name of the bucket that you want to delete and then choose Delete bucket.
  2. In the Delete bucket dialog box, type the name of the bucket for delete confirmation and then choose Confirm.

Step 1: Create an Amazon S3 Bucket

You must first create an Amazon S3 bucket. You can do this directly by using the Amazon S3 console, API, or CLI, but a simpler way to create resources is often to use an AWS CloudFormation template. The following template creates an Amazon S3 bucket for this example and sets up an instance profile with an IAM role that grants unrestricted access to the bucket. You can then use a layer setting to attach the instance profile to the stack's application server instances, which allows the application to access the bucket, as described later. The usefulness of instance profiles isn't limited to Amazon S3; they are valuable for integrating a variety of AWS services.
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Resources" : {
    "AppServerRootRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "AssumeRolePolicyDocument": {
          "Statement": [ {
            "Effect": "Allow",
            "Principal": { "Service": [ "ec2.amazonaws.com" ] },
            "Action": [ "sts:AssumeRole" ]
          } ]
        },
        "Path": "/"
      }
    },
    "AppServerRolePolicies": {
      "Type": "AWS::IAM::Policy",
      "Properties": {
        "PolicyName": "AppServerS3Perms",
        "PolicyDocument": {
          "Statement": [ {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": { "Fn::Join" : ["", [ "arn:aws:s3:::", { "Ref" : "AppBucket" }, "/*" ] ] }
          } ]
        },
        "Roles": [ { "Ref": "AppServerRootRole" } ]
      }
    },
    "AppServerInstanceProfile": {
      "Type": "AWS::IAM::InstanceProfile",
      "Properties": {
        "Path": "/",
        "Roles": [ { "Ref": "AppServerRootRole" } ]
      }
    },
    "AppBucket" : { "Type" : "AWS::S3::Bucket" }
  },
  "Outputs" : {
    "BucketName" : { "Value" : { "Ref" : "AppBucket" } },
    "InstanceProfileName" : { "Value" : { "Ref" : "AppServerInstanceProfile" } }
  }
}
Several things happen when you launch the template:
  • The AWS::S3::Bucket resource creates an Amazon S3 bucket.
  • The AWS::IAM::InstanceProfile resource creates an instance profile that will be assigned to the application server instances.
  • The AWS::IAM::Role resource creates the instance profile's role.
  • The AWS::IAM::Policy resource sets the role's permissions to allow unrestricted access to Amazon S3 buckets.
  • The Outputs section displays the bucket and instance profile names in the AWS CloudFormation console after you have launched the template.
    You will need these values to set up your stack and app.
For more information on how to create AWS CloudFormation templates, see Learn Template Basics.
To create the Amazon S3 bucket
  1. Copy the example template to a text file on your system.
    This example assumes that the file is named appserver.template.
  2. Open the AWS CloudFormation console and click Create Stack.
  3. In the Stack Name box, enter the stack name.
    This example assumes that the name is AppServer.
  4. Click Upload template file, click Browse, select the appserver.template file that you created in Step 1, and click Next Step.
  5. On the Specify Parameters page, select I acknowledge that this template may create IAM resources, then click Next Step on each page of the wizard until you reach the end. Click Create.
  6. After the AppServer stack reaches CREATE_COMPLETE status, select it and click its Outputs tab.
    You might need to click refresh a few times to update the status.
  7. On the Outputs tab, record the BucketName and InstanceProfileName values for later use.
Note
AWS CloudFormation uses the term stack to refer to the collection of resources that are created from a template; it is not the same as an AWS OpsWorks Stacks stack.
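If you would rather launch the template from code than from the console, a Boto3 sketch along the following lines should work; the file and stack names match this example, and CAPABILITY_IAM acknowledges that the template creates IAM resources.

import boto3

cloudformation = boto3.client("cloudformation")

# Read the template file saved earlier as appserver.template
with open("appserver.template") as f:
    template_body = f.read()

# Launch the stack; CAPABILITY_IAM is required because the template creates IAM resources
cloudformation.create_stack(
    StackName="AppServer",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_IAM"],
)

# Wait for CREATE_COMPLETE, then read the BucketName and InstanceProfileName outputs
cloudformation.get_waiter("stack_create_complete").wait(StackName="AppServer")
outputs = cloudformation.describe_stacks(StackName="AppServer")["Stacks"][0]["Outputs"]
print(outputs)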

Step 2: Create a PHP App Server Stack

The stack consists of two layers, PHP App Server and MySQL, each with one instance. The application stores photos on an Amazon S3 bucket, but uses the MySQL instance as a back-end data store to hold metadata for each photo.
To create the stack
  1. Create a new stack (named PhotoSite for this example) and add a PHP App Server layer. You can use the default settings for both. For more information, see Create a New Stack and Creating an OpsWorks Layer.
  2. On the Layers page, for PHP App Server, click Security and then click Edit.
  3. In the Layer Profile section, select the instance profile name that you recorded earlier, after launching the AppServer AWS CloudFormation stack. It will be something like AppServer-AppServerInstanceProfile-1Q3KD0DNMGB90. AWS OpsWorks Stacks assigns this profile to all of the layer's Amazon EC2 instances, which grants applications running on the layer's instances permission to access your Amazon S3 bucket.
  4. Add an instance to the PHP App Server layer and start it. For more information on how to add and start instances, see Adding an Instance to a Layer.
  5. Add a MySQL layer to the stack, add an instance, and start it. You can use default settings for both the layer and instance. In particular, the MySQL instance doesn't need to access the Amazon S3 bucket, so it can use the standard AWS OpsWorks Stacks instance profile, which is selected by default.

Step 3: Create and Deploy a Custom Cookbook

The stack is not quite ready yet:
  • Your application needs some information to access the MySQL database server and the Amazon S3 bucket, such as the database host name and the Amazon S3 bucket name.
  • You need to set up a database in the MySQL database server and create a table to hold the photos' metadata.
You could handle these tasks manually, but a better approach is to implement Chef recipes and have AWS OpsWorks Stacks run them automatically on the appropriate instances. Chef recipes are specialized Ruby applications that AWS OpsWorks Stacks uses to perform tasks on instances such as installing packages or creating configuration files. They are packaged in a cookbook, which can contain multiple recipes and related files such as templates for configuration files. The cookbook is placed in a repository such as GitHub, and must have a standard directory structure. If you don't yet have a custom cookbook repository, see Cookbook Repositories for information on how to set one up.
For this example, the cookbook has been implemented for you and is stored in a public GitHub repository. The cookbook contains two recipes, appsetup.rb and dbsetup.rb, and a template file, db-connect.php.erb.
The appsetup.rb recipe creates a configuration file that contains the information that the application needs to access the database and the Amazon S3 bucket. It is basically a lightly modified version of the appsetup.rb recipe described in Connect the Application to the Database; see that topic for a detailed description. The primary difference is the variables that are passed to the template, which represent the access information.
variables(
  :host     => (deploy[:database][:host] rescue nil),
  :user     => (deploy[:database][:username] rescue nil),
  :password => (deploy[:database][:password] rescue nil),
  :db       => (deploy[:database][:database] rescue nil),
  :table    => (node[:photoapp][:dbtable] rescue nil),
  :s3bucket => (node[:photobucket] rescue nil)
)
The first four attributes define database connection settings, and are automatically defined by AWS OpsWorks Stacks when you create the MySQL instance.
There are two differences between these variables and the ones in the original recipe:
  • Like the original recipe, the table variable represents the name of the database table that is created by dbsetup.rb, and is set to the value of an attribute that is defined in the cookbook's attributes file.
    However, the attribute has a different name: [:photoapp][:dbtable].
  • The s3bucket variable is specific to this example and is set to the value of an attribute that represents the Amazon S3 bucket name, [:photobucket].
    [:photobucket] is defined by using custom JSON, as described later.
For more information on attributes, see Attributes.
The dbsetup.rb recipe sets up a database table to hold each photo's metadata. It basically is a lightly modified version of the dbsetup.rb recipe described in Set Up the Database; see that topic for a detailed description.
node[:deploy].each do |app_name, deploy|
  execute "mysql-create-table" do
    command "/usr/bin/mysql -u#{deploy[:database][:username]} -p#{deploy[:database][:password]} #{deploy[:database][:database]} -e'CREATE TABLE #{node[:photoapp][:dbtable]}( id INT UNSIGNED NOT NULL AUTO_INCREMENT, url VARCHAR(255) NOT NULL, caption VARCHAR(255), PRIMARY KEY (id) )'"
    not_if "/usr/bin/mysql -u#{deploy[:database][:username]} -p#{deploy[:database][:password]} #{deploy[:database][:database]} -e'SHOW TABLES' | grep #{node[:photoapp][:dbtable]}"
    action :run
  end
end
The only difference between this example and the original recipe is the database schema, which has three columns that contain the ID, URL, and caption of each photo that is stored on the Amazon S3 bucket.
The recipes are already implemented, so all you need to do is deploy the photoapp cookbook to each instance's cookbook cache. AWS OpsWorks Stacks will then run the cached recipes when the appropriate lifecycle event occurs, as described later.
To deploy the photoapp cookbook
  1. On the AWS OpsWorks Stacks Stack page, click Stack Settings and then Edit.
  2. In the Configuration Management section:
    • Set Use custom Chef cookbooks to Yes.
    • Set Repository type to Git.
    • Set Repository URL to git://github.com/amazonwebservices/opsworks-example-cookbooks.git.
  3. On the Stack page, click Run Command, select the Update Custom Cookbooks stack command, and click Update Custom Cookbooks to install the new cookbook in the instances' cookbook caches.

Step 4: Assign the Recipes to Lifecycle Events

You can run custom recipes manually, but the best approach is usually to have AWS OpsWorks Stacks run them automatically. Every layer has a set of built-in recipes assigned to each of five lifecycle events: Setup, Configure, Deploy, Undeploy, and Shutdown. Each time an event occurs on an instance, AWS OpsWorks Stacks runs the associated recipes for each of the instance's layers, which handle the required tasks. For example, when an instance finishes booting, AWS OpsWorks Stacks triggers a Setup event to run the Setup recipes, which typically handle tasks such as installing and configuring packages.
You can have AWS OpsWorks Stacks run custom recipes on a layer's instances by assigning each recipe to the appropriate lifecycle event. AWS OpsWorks Stacks runs any custom recipes after the layer's built-in recipes have finished. For this example, assign appsetup.rb to the PHP App Server layer's Deploy event and dbsetup.rb to the MySQL layer's Deploy event. AWS OpsWorks Stacks will then run the recipes on the associated layer's instances during startup, after the built-in Setup recipes have finished, and every time you deploy an app, after the built-in Deploy recipes have finished. For more information, see Automatically Running Recipes.
To assign custom recipes to the layer's Deploy event
  1. On the AWS OpsWorks Stacks Layers page, for the PHP App Server layer, click Recipes and then click Edit.
  2. Under Custom Chef Recipes, add the recipe name to the deploy event and click +. The name must be in the Chef cookbookname::recipename format, where recipename does not include the .rb extension. For this example, enter photoapp::appsetup. Then click Save to update the layer configuration.
  3. On the Layers page, click edit in the MySQL layer's Actions column.
  4. Add photoapp::dbsetup to the layer's Deploy event and save the new configuration.

Step 5: Add Access Information to the Stack Configuration and Deployment Attributes

The appsetup.rb recipe depends on data from the AWS OpsWorks Stacks stack configuration and deployment attributes, which are installed on each instance and contain detailed information about the stack and any deployed apps. The object's deploy attributes have the following structure, which is displayed for convenience as JSON:
{ ... "deploy": { "app1": { "application" : "short_name", ... } "app2": { ... } ... } }
The deploy node contains an attribute for each deployed app, named with the app's short name. Each app attribute contains a set of attributes that define the app's configuration, such as the document root and app type. For a list of the deploy attributes, see deploy Attributes. You can represent stack configuration and deployment attribute values in your recipes by using Chef attribute syntax. For example, [:deploy][:app1][:application] represents the app1 app's short name.
The custom recipes depend on several stack configuration and deployment attributes that represent database and Amazon S3 access information:
  • The database connection attributes, such as [:deploy][:database][:host], are defined by AWS OpsWorks Stacks when it creates the MySQL layer.
  • The table name attribute, [:photoapp][:dbtable], is defined in the custom cookbook's attributes file and is set to foto (a sketch of such a file appears after this list).
  • You must define the bucket name attribute, [:photobucket], by using custom JSON to add the attribute to the stack configuration and deployment attributes.
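For reference, a Chef attributes file that defines such a default is very short. The following is only an illustrative sketch, not the actual file from the example cookbook, and assumes the conventional attributes/default.rb location:

# attributes/default.rb -- illustrative sketch; the example cookbook's actual file may differ
default[:photoapp][:dbtable] = "foto"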
To define the Amazon S3 bucket name attribute
  1. On the AWS OpsWorks Stacks Stack page, click Stack Settings and then Edit.
  2. In the Configuration Management section, add access information to the Custom Chef JSON box. It should look something like the following:
    { "photobucket" : "yourbucketname" }
    Replace yourbucketname with the bucket name that you recorded in Step 1: Create an Amazon S3 Bucket.
AWS OpsWorks Stacks merges the custom JSON into the stack configuration and deployment attributes before it installs them on the stack's instances; appsetup.rb can then obtain the bucket name from the [:photobucket] attribute. If you want to change the bucket, you don't need to touch the recipe; you can just override the attribute to provide a new bucket name.

Step 6: Deploy and Run PhotoApp

For this example, the application has also been implemented for you and is stored in a public GitHub repository. You just need to add the app to the stack, deploy it to the application servers, and run it.
To add the app to the stack and deploy it to the application servers
  1. Open the Apps page and click Add an app.
  2. On the Add App page, do the following:
    • Set Name to PhotoApp.
    • Set App type to PHP.
    • Set Document root to web.
    • Set Repository type to Git.
    • Set Repository URL to git://github.com/awslabs/opsworks-demo-php-photo-share-app.git.
    • Click Add App to accept the defaults for the other settings.
  3. On the Apps page, click deploy in the PhotoApp app's Actions column.
  4. Accept the defaults and click Deploy to deploy the app to the server.
To run PhotoApp, go to the Instances page and click the PHP App Server instance's public IP address.
You should see the PhotoApp user interface. Click Add a Photo to store a photo in the Amazon S3 bucket and its metadata in the back-end data store.

Using AWS CodePipeline with AWS OpsWorks Stacks

AWS CodePipeline lets you create continuous delivery pipelines that track code changes from sources such as AWS CodeCommit, Amazon Simple Storage Service (Amazon S3), or GitHub. You can use AWS CodePipeline to automate the release of your Chef cookbooks and application code to AWS OpsWorks Stacks, on Chef 11.10, Chef 12, and Chef 12.2 stacks. Examples in this section describe how to create and use a simple pipeline from AWS CodePipeline as a deployment tool for code that you run on AWS OpsWorks Stacks layers.
Note
AWS CodePipeline and AWS OpsWorks Stacks integration is not supported for deploying to Chef 11.4 and older stacks.

Step 1: Create an Amazon S3 Bucket

Important
This quick start guide uses a new version of the AWS Management Console that is currently in preview release and is subject to change.
First, you need to create an Amazon S3 bucket where you will store your objects.
  1. Sign in to the preview version of the AWS Management Console.
  2. Under Storage & Content Delivery, choose S3 to open the Amazon S3 console.
  3. From the Amazon S3 console dashboard, choose Create Bucket.
  4. In Create a Bucket, type a bucket name in Bucket Name.
    The bucket name you choose must be globally unique across all existing bucket names in Amazon S3 (that is, across all AWS customers). For more information, see Bucket Restrictions and Limitations.
  5. In Region, choose Oregon.
  6. Choose Create.
    When Amazon S3 successfully creates your bucket, the console displays your empty bucket in the Buckets pane.
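If you prefer to script bucket creation instead of using the console, an AWS SDK call does the same thing. The following Ruby sketch uses the aws-sdk-s3 gem; the bucket name is a placeholder, and the snippet assumes your AWS credentials are already configured:

# Illustrative sketch (aws-sdk-s3 gem); replace yourbucketname with your own globally unique name.
require 'aws-sdk-s3'

s3 = Aws::S3::Client.new(region: 'us-west-2')  # Oregon, matching the region chosen above

s3.create_bucket(
  bucket: 'yourbucketname',
  create_bucket_configuration: { location_constraint: 'us-west-2' }  # required outside us-east-1
)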

Step 2: Upload a File to Your Amazon S3 Bucket

Now that you've created a bucket, you're ready to add an object to it. An object can be any kind of file: a document, a photo, a video, a music file, or other file type.
  1. In the Amazon S3 console, choose the bucket where you want to upload an object, choose Upload, and then choose Add Files.
  2. In the file selection dialog box, find the file that you want to upload, choose it, choose Open, and then choose Start Upload.
    You can watch the progress of the upload in the Transfer pane.
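The same upload can be done with the SDK. A minimal Ruby sketch, again with placeholder bucket and file names:

# Illustrative sketch: upload a local file as an object (names are placeholders).
require 'aws-sdk-s3'

s3 = Aws::S3::Client.new(region: 'us-west-2')
File.open('photo.jpg', 'rb') do |file|
  s3.put_object(bucket: 'yourbucketname', key: 'photo.jpg', body: file)
end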

Step 3: Retrieve a File from Your Amazon S3 Bucket

Now that you've added an object to a bucket, you can open and view it in a browser. You can also download the object to your local computer.
  1. In the Amazon S3 console, choose your S3 bucket, choose the file that you want to open or download, choose Actions, and then choose Open or Download.
  2. If you are downloading an object, specify where you want to save it.
    The procedure for saving the object depends on the browser and operating system that you are using.
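To script the download instead of using the console, a Ruby sketch along these lines works (names are placeholders):

# Illustrative sketch: download an object to a local file (names are placeholders).
require 'aws-sdk-s3'

s3 = Aws::S3::Client.new(region: 'us-west-2')
s3.get_object(
  response_target: 'downloaded-photo.jpg',  # local path where the object is saved
  bucket: 'yourbucketname',
  key: 'photo.jpg'
)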

Step 4: Delete a File From Your Amazon S3 Bucket

If you no longer need to store the file you've uploaded to your Amazon S3 bucket, you can delete it.
  • Within your S3 bucket, select the file that you want to delete, choose Actions, and then choose Delete.
    In the confirmation message, choose OK.
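Programmatically, deleting an object is a single SDK call; a minimal Ruby sketch with placeholder names:

# Illustrative sketch: delete an object from a bucket (names are placeholders).
require 'aws-sdk-s3'

s3 = Aws::S3::Client.new(region: 'us-west-2')
s3.delete_object(bucket: 'yourbucketname', key: 'photo.jpg')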

Amazon S3 Step by Step

The following steps take you from having no S3 account to having Duplicator Pro backing up to S3 as quickly as possible. If you already have an S3 account, you should be able to adapt the procedure to your needs.

1. Create an Amazon Web Services (AWS) Account

Follow this link to create an account.

2. Create a Bucket

A bucket is a storage area that you use to group data together. You can think of it as a virtual drive. For our purposes we'll assume you don't already have a bucket and that the bucket you create will be dedicated to Duplicator Pro backups. This doesn't have to be the case, however; you can use existing buckets and put different types of data in a single bucket.
  1. When at the AWS console service list, click S3 in the Storage & Content Delivery area.
    1. Note: You can return to the service list at any time by clicking the orange box in the upper left-hand corner.
  2. Click Create Bucket.
  3. Set the name to something that you'll remember and that is unique. It's recommended that you name the bucket something similar to a domain name, for instance backups.mydomain.com.
  4. For the region, select the closest location to your web server.

3. Create a User

  1. Go back to the AWS console service list and click Identity and Access Management (IAM) in the Security & Identity area.
  2. Click the Users link.
  3. Click the Create New Users button and add a new user with the name of your choice.

4. Create a Security Policy

You’ll now need to create a security policy which you’ll later assign to the user you just created. The policy describes what the user can do. In this case we’ll be creating a policy that says a user can fully access the bucket we created earlier.
  1. Click the Policies link.
  2. Click the Create Policy button.
    1. Note: If this is the first time you’ve ever been to the Policies page you may need to click Start Now first.
  3. Select Create Your Own Policy.
  4. Copy and paste the policy defined on this page into the Policy Document area and name the policy "Duplicator Backup Policy" (you can name it anything you want).
    1. Note: Be sure to replace ##BUCKETNAME## with the name of your bucket as the Duplicator policy page instructs.
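The exact policy text lives on the Duplicator policy page referenced above, so use that for a real setup. Purely as an illustration of the shape such a bucket-scoped policy takes, it generally looks something like the following (with yourbucketname replaced by your bucket's name):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "IllustrativeBucketListing",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::yourbucketname"]
    },
    {
      "Sid": "IllustrativeObjectAccess",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": ["arn:aws:s3:::yourbucketname/*"]
    }
  ]
}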

5. Assign User Privileges

Now that we have a policy, we’ll assign it to the user we created earlier.
  1. Go back to the user list.
  2. Click the name of the user in the user list.
  3. Click the Permissions tab.
  4. Click the Attach Policy button.
  5. Use the filter to find the policy you created earlier and click the Attach Policy button.

6. Create a User Access Key

  1. Go back to the User list.
  2. Click the name of the user in the user list.
  3. Click the Create Access Key button.
  4. Either download the credentials or copy/paste the Access Key ID and Secret Access Key into a temporary text file. We'll need these two values when setting up the storage endpoint in the next step.
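If you would rather script steps 3 through 6 than click through the console, the aws-sdk-iam gem can do the same work. This is only a rough sketch under stated assumptions: the user name and policy name are placeholders, duplicator-backup-policy.json is a hypothetical local copy of the policy document from the previous step, and your AWS credentials must already be configured:

# Illustrative sketch (aws-sdk-iam gem); names and file paths are placeholders, not Duplicator's exact values.
require 'aws-sdk-iam'

iam = Aws::IAM::Client.new(region: 'us-east-1')  # IAM is global; the region mainly selects an endpoint

# Create the user (step 3).
iam.create_user(user_name: 'duplicator-backup')

# Create the policy (step 4) from a local copy of the policy document.
policy = iam.create_policy(
  policy_name: 'DuplicatorBackupPolicy',
  policy_document: File.read('duplicator-backup-policy.json')
)

# Attach the policy to the user (step 5).
iam.attach_user_policy(user_name: 'duplicator-backup', policy_arn: policy.policy.arn)

# Create an access key (step 6); the secret is shown only once, so store it securely.
key = iam.create_access_key(user_name: 'duplicator-backup')
puts "Access Key ID:     #{key.access_key.access_key_id}"
puts "Secret Access Key: #{key.access_key.secret_access_key}"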

7. Configure a Storage Endpoint in Duplicator Pro

Now that S3 is configured, we need to configure the Duplicator Pro side.
  1. Create a new Storage Endpoint
  2. Choose Amazon S3 for the Type
  3. Copy/paste the Access Key from your text file or the downloaded file from Step 6 above into the Access Key field.
  4. Copy/paste the Secret Key from your text file or the downloaded file from Step 6 above into the Secret Key field.
  5. Fill in the bucket and the region.
  6. Unless you have a strong preference you can leave the Storage Folder and Storage Class fields alone.

8. Test Storage Endpoint

Now that everything is configured, you can verify it by clicking the Test S3 Connection button. If it reports success, you are good to go. If not, go back through the previous steps and make sure you didn't miss anything.


