AWS Basics

AWS has more than a million customers in 190 countries around the world. Amazon EC2 is hosted in multiple locations worldwide. These locations are composed of Regions, Availability Zones, and Local Zones. A Region is a distinct geographical location where AWS services are made available.

Regions
  • A Region is a physical location around the world where AWS data centers are available. Regions are separated geographically, which provides data sovereignty. When you use an AWS service, you choose the Region where you want to host the service. You can also choose multiple Regions depending on where you want to store the data.
  • Each AWS Region consists of multiple, isolated, and physically separate Availability Zones (AZs) within a geographic area.
  • Each Region is completely independent. Each Amazon EC2 Region is designed to be isolated from the other Amazon EC2 Regions.
  • This achieves the greatest possible fault tolerance and stability. When you view your resources, you see only the resources that are tied to the Region that you specified. This is because Regions are isolated from each other, and AWS doesn't automatically replicate resources across Regions. You can list the Regions available to your account programmatically, as in the sketch after this list.
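
As a quick illustration of how Regions surface in code, here is a minimal Python sketch using the boto3 SDK (assumed to be installed and configured with valid credentials) that lists the Regions visible to an account; the home Region passed to the client is only an example value.

    import boto3

    # Any Region can serve as the client's home Region for this call;
    # us-east-1 is just an example, not a recommendation from this post.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Each entry includes the Region name and its EC2 service endpoint.
    for region in ec2.describe_regions()["Regions"]:
        print(region["RegionName"], region["Endpoint"])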
Availability Zones
  • Each Region has multiple, isolated locations known as Availability Zones. Each Availability Zone is isolated, but the Availability Zones in a Region are connected through low-latency links.
  • An Availability Zone (AZ) is a group of one or more discrete data centers with redundant power, networking, and connectivity within an AWS Region.
  • When you launch an instance, you can select an Availability Zone or let us choose one for you. If you distribute your instances across multiple Availability Zones and one instance fails, you can design your application so that an instance in another Availability Zone can handle requests. 
  • Each AZ is located in a different floodplain; power grids are designed in such a way that a natural calamity or disaster does not impact multiple AZs. The AZs have redundant power supplies that come via different power utility companies, plus backup generators to handle an extreme power failure. 
  • The latency between the AZs within a Region is in the low single-digit milliseconds. As a result, you can synchronously replicate data across the AZs. The biggest advantage of this is that you can design an application so that it runs in multiple AZs, and since the data can be synchronously replicated across the AZs, a disaster taking one AZ down has no impact on your application. You can even architect an application so that it automatically fails over between AZs without any service interruption. You can choose which AZs you want to host your applications in, as shown in the sketch after this list.
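
To make the AZ concept concrete, the following Python/boto3 sketch lists the Availability Zones of a Region and launches an instance into a chosen one. The AMI ID and AZ name are placeholders, not values from this post.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # List the Availability Zones of the client's Region.
    for zone in ec2.describe_availability_zones()["AvailabilityZones"]:
        print(zone["ZoneName"], zone["State"])

    # Launch one instance into a specific AZ (placeholder AMI and AZ name).
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # hypothetical AMI ID
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        Placement={"AvailabilityZone": "us-east-1a"},
    )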

AWS Global Infrastructure Map
At the time of writing, AWS spans 76 Availability Zones within 24 geographic Regions around the world, and has announced plans for nine more Availability Zones and three more AWS Regions in Indonesia, Japan, and Spain.


AWS Products And Services



(Image: "Amazon Web Services: Important AWS Services — Part 1" by Naina Chaturvedi, Analytics Vidhya, Medium)

COMPUTE
The AWS compute services include a variety of products and services that provide scalable computing capacity in the cloud. The compute services cover both server-based and serverless configurations. They also include the tools required for automatically scaling resources and quickly deploying your applications on AWS. Let's explore the products in the compute area.

Amazon Elastic Compute Cloud (EC2) provides virtual servers, called instances, in the cloud. A customer can choose from more than 30 varieties of instances. Some of them are CPU intensive, some are memory intensive, some are accelerated-computing (GPU) optimized, some are storage optimized, some are input/output (I/O) optimized, and some are general purpose. Depending on the use case, the customer can choose from a variety of instance types. For example, if you are running a database workload that needs lots of memory, you can choose a memory-intensive instance, and if you are planning to run machine learning, you can choose an accelerated computing instance.
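
As a hedged illustration of comparing instance families before choosing one, the Python/boto3 sketch below looks up the vCPU and memory figures for a few example instance types; the specific type names are examples only, not recommendations from this post.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Example types: general purpose (t3), memory optimized (r5),
    # compute optimized (c5).
    resp = ec2.describe_instance_types(
        InstanceTypes=["t3.micro", "r5.large", "c5.large"]
    )
    for it in resp["InstanceTypes"]:
        vcpus = it["VCpuInfo"]["DefaultVCpus"]
        mem_gib = it["MemoryInfo"]["SizeInMiB"] / 1024
        print(f'{it["InstanceType"]}: {vcpus} vCPUs, {mem_gib:.1f} GiB memory')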

Amazon EC2 Auto Scaling
Amazon EC2 Auto Scaling helps in automatically scaling Amazon EC2 instances up and down according to the policies you define. By combining Amazon EC2 and Auto Scaling, you can create a high-availability architecture. Amazon EC2 Auto Scaling also ensures that you are always running with the desired number of instances. If for some reason an instance goes down, Amazon EC2 Auto Scaling quickly spins up a new instance. You can define Amazon EC2 Auto Scaling policies for various metrics and health checks. For example, you can set the CPU utilization target to, say, 70 percent so that Amazon EC2 Auto Scaling adds more servers when the load exceeds that level. Similarly, if a server is not healthy, the Amazon EC2 Auto Scaling health checks can replace it. There is no additional charge for using Amazon EC2 Auto Scaling. Amazon EC2 Auto Scaling integrates with Elastic Load Balancing.
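
The 70 percent CPU example maps naturally onto a target-tracking scaling policy. Here is a minimal Python/boto3 sketch that attaches such a policy to an existing Auto Scaling group; the group and policy names are placeholders.

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    # Keep the group's average CPU utilization around 70 percent by adding
    # or removing instances automatically.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="my-web-asg",        # hypothetical group name
        PolicyName="keep-cpu-at-70-percent",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 70.0,
        },
    )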

AWS Lambda
AWS Lambda enables you to run code without provisioning or managing any servers or infrastructure. You can run code for virtually any kind of application or back-end service. You can also run code in response to event triggers such as Amazon S3 uploads, Amazon DynamoDB updates, Amazon Kinesis streams, Amazon API Gateway requests, and so on. The pricing for using AWS Lambda is simple: you pay only for the compute time while the code is executing; there is no charge when the code is not running. AWS Lambda scales automatically. Whenever you upload your code, AWS Lambda takes care of scaling it automatically, and when the code is executed, high availability is handled automatically as well.
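
To show what running code in response to an Amazon S3 upload can look like, here is a minimal sketch of a Lambda handler in Python that logs the bucket and key of each uploaded object. It assumes the function has been wired to an S3 event notification, which is configured outside this snippet.

    # Entry point invoked by an Amazon S3 "object created" event.
    def lambda_handler(event, context):
        # An S3 event delivers one or more records describing the uploads.
        records = event.get("Records", [])
        for record in records:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            print(f"New object uploaded: s3://{bucket}/{key}")
        return {"status": "processed", "count": len(records)}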

Amazon EC2 Container Service
Amazon EC2 Container Service (ECS) allows you to run Docker containers on Amazon EC2 instances. Amazon ECS is a scalable, high-performance container management service. With Amazon ECS you don't have to install, scale, and operate your own cluster management infrastructure. You can launch and manage Docker-enabled applications using application programming interface (API) calls. You can use the built-in scheduler, write your own scheduler, or use a third-party scheduler to meet business- or application-specific requirements. Amazon ECS integrates with other services such as ELB and Amazon EBS. There are no separate charges for Amazon ECS; you pay only for the AWS resources used, such as Amazon EC2 instances, Amazon Elastic Block Store (EBS) volumes, and so on.
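
As a hedged illustration of the API-driven workflow described above, the Python/boto3 sketch below registers a simple task definition and runs it on an existing ECS cluster that already has container instances; the cluster name, task family, and container image are placeholders.

    import boto3

    ecs = boto3.client("ecs", region_name="us-east-1")

    # Describe one container: name, image, and resource limits (CPU units
    # and memory in MiB).
    ecs.register_task_definition(
        family="hello-web",                  # hypothetical task family
        containerDefinitions=[
            {
                "name": "web",
                "image": "nginx:latest",
                "cpu": 256,
                "memory": 512,
                "essential": True,
            }
        ],
    )

    # Run a single task from the latest revision of that family.
    ecs.run_task(
        cluster="my-ecs-cluster",            # hypothetical cluster name
        taskDefinition="hello-web",
        count=1,
    )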

SECURITY AND COMPLIANCE
Security of the cloud is the highest priority for AWS. There are safeguards at every layer of the AWS infrastructure to keep data safe and help protect customer privacy. In addition, AWS supports a large number of compliance programs across its infrastructure. In this section, you will learn about the products and services related to security and compliance.
 
AWS Identity and Access Management
AWS Identity and Access Management (IAM) is used to create users, groups, and roles. It is also used to manage and control access to AWS services and resources. AWS IAM can be federated with other systems, thereby allowing existing identities (users, groups, and roles) in your enterprise to access AWS resources.
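
A minimal Python/boto3 sketch of that workflow: create a group, attach an AWS managed policy, create a user, and add the user to the group. The group and user names are hypothetical; ReadOnlyAccess is an AWS managed policy used here only as an example.

    import boto3

    iam = boto3.client("iam")

    # Create a group and give it read-only access via a managed policy.
    iam.create_group(GroupName="analysts")                  # hypothetical
    iam.attach_group_policy(
        GroupName="analysts",
        PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
    )

    # Create a user and place it in the group.
    iam.create_user(UserName="alice")                       # hypothetical
    iam.add_user_to_group(GroupName="analysts", UserName="alice")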
   
What is Amazon S3?
Amazon Simple Storage Service (Amazon S3) is storage for the internet. You can use Amazon S3 to store and retrieve any amount of data at any time, from anywhere on the web. You can accomplish these tasks using the AWS Management Console, which is a simple and intuitive web interface. The total volume of data and number of objects you can store are unlimited. Individual Amazon S3 objects can range in size from 1 byte to 5 terabytes. The largest object that can be uploaded in a single PUT is 5 gigabytes. For objects larger than 100 megabytes, customers should consider using the multipart upload capability.
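
Because the paragraph above recommends multipart upload for objects larger than 100 megabytes, here is a minimal Python/boto3 sketch that lets the SDK switch to multipart upload automatically above a size threshold; the bucket name and file path are placeholders.

    import boto3
    from boto3.s3.transfer import TransferConfig

    s3 = boto3.client("s3")

    # Use multipart upload for anything larger than 100 MB, in 25 MB parts.
    config = TransferConfig(
        multipart_threshold=100 * 1024 * 1024,
        multipart_chunksize=25 * 1024 * 1024,
    )

    s3.upload_file(
        Filename="backup.tar.gz",          # hypothetical local file
        Bucket="my-example-bucket",        # hypothetical bucket name
        Key="backups/backup.tar.gz",
        Config=config,
    )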

How is data organized in S3?
Amazon S3 stores data as objects within buckets. An object is a file and any optional metadata that describes the file.
  • To store a file in Amazon S3, you upload it to a bucket. When you upload a file as an object, you can set permissions on the object and any metadata.
  • Buckets are containers for objects. You can have one or more buckets. You can control access for each bucket, deciding who can create, delete, and list objects in it.
  • You can also choose the geographical Region where Amazon S3 will store the bucket and its contents and view access logs for the bucket and its objects.
  • Amazon S3 is a simple key-based object store. When you store data, you assign a unique object key that can later be used to retrieve the data. Keys can be any string, and can be constructed to mimic hierarchical attributes, as shown in the sketch after this list.
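
A minimal Python/boto3 sketch of the bucket/object/key model: create a bucket in a chosen Region, store an object under a hierarchical-looking key, and read it back. The bucket name, key, and Region are placeholders.

    import boto3

    s3 = boto3.client("s3", region_name="us-west-2")

    # Bucket names are globally unique; this one is a placeholder.
    s3.create_bucket(
        Bucket="my-example-bucket",
        CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
    )

    # Keys are plain strings; slashes only mimic a folder hierarchy.
    s3.put_object(
        Bucket="my-example-bucket",
        Key="reports/2021/q1-summary.txt",
        Body=b"hello from S3",
    )

    # Retrieve the object by the same key.
    obj = s3.get_object(
        Bucket="my-example-bucket", Key="reports/2021/q1-summary.txt"
    )
    print(obj["Body"].read().decode("utf-8"))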
How much does Amazon S3 cost?
With Amazon S3, you pay only for what you use. There is no minimum fee, and AWS charges less where its costs are less. There is no data transfer charge for data transferred within the Amazon S3 AWS China (Beijing) Region or AWS China (Ningxia) Region via a COPY request. There is no data transfer charge for data transferred between Amazon EC2 and Amazon S3 within the AWS China (Beijing) Region or within the AWS China (Ningxia) Region. Data transferred between Amazon EC2 and Amazon S3 across two AWS Regions (for example, between the Amazon EC2 AWS China (Ningxia) Region and the Amazon S3 AWS China (Beijing) Region) is charged at the internet transfer rate specified in the pricing section of the billing console.
What is Amazon S3 Glacier?
Amazon Simple Storage Service Glacier, or Amazon S3 Glacier (S3 Glacier), is a storage service optimized for infrequently used data, or "cold data."
  • S3 Glacier is an extremely low-cost storage service that provides durable storage with security features for data archiving and backup.
  • With S3 Glacier, customers can store their data cost effectively for months, years, or even decades (a sketch of archiving S3 data to the Glacier storage class follows this list).
  • S3 Glacier enables customers to offload the administrative burdens of operating and scaling storage to AWS, so they don't have to worry about capacity planning, hardware provisioning, data replication, hardware failure detection and recovery, or time-consuming hardware migrations.
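
To ground the archiving idea, here is a hedged Python/boto3 sketch that moves cold data into the Glacier storage class through the S3 API, both directly at upload time and with a lifecycle rule; the bucket name, keys, and prefix are placeholders.

    import boto3

    s3 = boto3.client("s3")

    # Option 1: write an object straight into the Glacier storage class.
    s3.put_object(
        Bucket="my-example-bucket",          # hypothetical bucket
        Key="archives/2018-audit.zip",
        Body=b"...archived bytes...",
        StorageClass="GLACIER",
    )

    # Option 2: transition objects under a prefix to Glacier after 90 days.
    s3.put_bucket_lifecycle_configuration(
        Bucket="my-example-bucket",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-old-logs",
                    "Filter": {"Prefix": "logs/"},
                    "Status": "Enabled",
                    "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                }
            ]
        },
    )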
What is Amazon EC2?
Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the Amazon Web Services (AWS) cloud. Using Amazon EC2 eliminates your need to invest in hardware up front, so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic.
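
A minimal Python/boto3 sketch of scaling capacity up and back down on demand: launch two instances and terminate them when they are no longer needed. The AMI ID is a placeholder.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Scale up: launch two general-purpose instances (placeholder AMI).
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",     # hypothetical AMI ID
        InstanceType="t3.micro",
        MinCount=2,
        MaxCount=2,
    )
    instance_ids = [i["InstanceId"] for i in resp["Instances"]]
    print("Launched:", instance_ids)

    # Scale down: terminate the instances when the demand spike passes.
    ec2.terminate_instances(InstanceIds=instance_ids)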
