AWS Sample Architecture
AWS Architecture
A diagram is worth a thousand words. Architecture diagrams are a great way to communicate your design, deployment, and topology.
The AWS architecture resources are designed to provide the guidance and application-architecture best practices you need to build highly scalable and reliable applications in the AWS cloud. These resources will help you understand the AWS platform, its services and features, and will provide architectural guidance for designing and implementing systems that run on AWS infrastructure.
The following diagram shows an example architecture that uses the AWS resources mentioned in the previous post.
As an example, we’ll walk through the deployment of a simple web application. If you’re doing something else, you can adapt this example architecture to your specific situation. In this diagram, Amazon EC2 instances in a security group run the application and web server. The security group acts as an exterior firewall for the EC2 instances. An Auto Scaling group maintains a fleet of EC2 instances that are automatically added or removed to handle the presented load. The Auto Scaling group spans two Availability Zones to protect against the failure of either zone. To ensure that traffic is distributed evenly among the instances, an Elastic Load Balancer is associated with the Auto Scaling group; when the group launches or terminates instances in response to load changes, the load balancer adjusts automatically.
For a step-by-step walk-through of how to build out this architecture, see Getting Started. This walk-through will teach you how to accomplish the following:
- Sign up for AWS.
- Launch, connect, and deploy Drupal to an Amazon EC2 instance.
- Create a Custom AMI.
- Set up an Elastic Load Balancer to distribute traffic across your Amazon EC2 instances.
- Scale your fleet of instances automatically using Auto Scaling.
- Monitor your AWS resources using Amazon CloudWatch.
- Clean up your AWS resources.
Interview Questions AWS Architect
If you're looking for AWS Architect interview questions, you are in the right place. There are many opportunities at reputed companies around the world, and research suggests that demand for AWS Architects continues to grow at about 50% relative to other IT jobs. So you still have the opportunity to move ahead in your career as an AWS Certified Solutions Architect. Mindmajix offers advanced AWS Architect interview questions to help you crack your interview and acquire your dream career.
Q: How are terminating and stopping an instance different processes?
When an instance is stopped, it performs a regular shutdown. Because all of its Amazon EBS volumes remain present, the instance can be started again at any time, and you are not charged for instance usage while it remains in the stopped state.
Upon termination, the instance also performs a regular shutdown, but its Amazon EBS volumes are then deleted, unless you prevent this by setting the volume’s “Delete on Termination” attribute to false. Because the instance itself is deleted, it cannot be started again in the future.
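The difference can be summarized in a small illustrative model. This is a sketch of the rules described above, not an AWS API; the function name and return shape are invented for illustration.

```python
def instance_after_shutdown(action: str, delete_on_termination: bool = True) -> dict:
    """Illustrative model (not an AWS API) of the rules described above:
    stopping preserves EBS volumes and suspends compute billing, while
    terminating deletes the instance (and, by default, its volumes)."""
    if action == "stop":
        return {"can_restart": True, "ebs_volumes_kept": True, "compute_billed": False}
    if action == "terminate":
        return {"can_restart": False,
                "ebs_volumes_kept": not delete_on_termination,
                "compute_billed": False}
    raise ValueError(f"unknown action: {action}")

print(instance_after_shutdown("stop"))
print(instance_after_shutdown("terminate", delete_on_termination=False))
```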
Q: At what value should the instance’s tenancy attribute be set for running it on single-tenant hardware?
It should be set to “dedicated” to run the instance smoothly on single-tenant hardware. Other values are not valid for this operation.
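A quick sketch of the accepted values. Note that EC2 also defines a “host” tenancy for Dedicated Hosts, which is likewise single-tenant; the helper below is illustrative, not an AWS API.

```python
# Illustrative helper (not an AWS API). EC2's tenancy attribute takes
# "default" (shared hardware), "dedicated" (single-tenant hardware), or
# "host" (a specific Dedicated Host, also single-tenant).
VALID_TENANCIES = {"default", "dedicated", "host"}

def single_tenant(tenancy: str) -> bool:
    if tenancy not in VALID_TENANCIES:
        raise ValueError(f"invalid tenancy value: {tenancy}")
    return tenancy in {"dedicated", "host"}

print(single_tenant("dedicated"))  # True
print(single_tenant("default"))   # False
```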
Q: When are costs incurred with an EIP?
EIP stands for Elastic IP address. If you have one Elastic IP associated with a running instance, you are not charged for it. However, costs are incurred when the address is associated with a stopped instance, or when it is allocated but not associated with any instance at all.
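The billing rule above can be captured as a simple predicate. This is a simplified rule of thumb, not a billing API; the function name and parameters are invented for illustration.

```python
def eip_incurs_charge(allocated: bool, associated: bool, instance_running: bool) -> bool:
    """Rule of thumb from the answer above (simplified, not a billing API):
    an allocated Elastic IP is free only while it is associated with a
    running instance; otherwise it accrues a charge."""
    if not allocated:
        return False                       # nothing allocated, nothing billed
    return not (associated and instance_running)

print(eip_incurs_charge(True, True, True))    # False - running instance, free
print(eip_incurs_charge(True, True, False))   # True  - attached to a stopped instance
print(eip_incurs_charge(True, False, False))  # True  - allocated but unassociated
```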
Q: What is the difference between On-demand instance and a Spot Instance?
Both Spot and On-Demand are pricing models, and neither requires a time commitment from the user. A Spot Instance works like bidding, and the price you bid is known as the Spot price; Spot Instances can be used without upfront payment. On-Demand Instances, by contrast, are purchased at the published rate, which is higher than the typical Spot price.
Q: Name the instance types for which Multi-AZ deployments are available
Multi-AZ deployments are available for all Amazon RDS instance types, irrespective of their type and use.
Q: When Instances are launched in the cluster placement group, what are the network performance parameters that can be expected?
It depends largely on the instance type, as well as on its network performance specification. When instances are launched in a cluster placement group, you can expect the following parameters:
a. 20 Gbps for full-duplex or multi-flow traffic
b. Up to 10 Gbps for single-flow traffic
c. Traffic to destinations outside the group is limited to 5 Gbps.
Q: Which instance can be used for deploying a 4-node Hadoop cluster in Amazon Web Services?
It is possible to use i2.large or c4.8xlarge instances for this, although c4.8xlarge calls for a better-configured client machine. Alternatively, you can simply launch Amazon EMR, which configures the servers for you automatically: put your data into S3, and EMR picks it up from there, processes it, and loads the results back into S3.
Q: What do you know about an AMI?
An AMI (Amazon Machine Image) is essentially a template for a virtual machine. When starting an instance, you can select from pre-baked AMIs that commonly include an operating system and software. However, not all AMIs are available free of cost. It is also possible to create a customized AMI, and the most common reason to do so is to save space on Amazon Web Services: if a group of software packages is not required, the AMI can simply be customized to leave them out.
Q: What parameters should you consider while selecting an Availability Zone?
Several parameters should be kept in mind, including performance, pricing, latency, and response time.
Q: What do you know about private and public addresses?
The private IP address is directly correlated with the instance and stays with it throughout its lifetime; it is released back to EC2 only when the instance is terminated. The public IP address, by contrast, is released whenever the instance is stopped or terminated. You can replace the public address with an Elastic IP when you want an address that stays with the instance as needed.
Q: Is it possible to run the multiple websites on EC2 server with one Elastic IP address?
Not with Elastic IPs alone: one Elastic IP maps to one instance, so if each website requires its own public IP address you need more than one Elastic IP. A single instance can, however, serve several websites behind one address through name-based virtual hosting at the web-server level.
Q: Name the practices available when it comes to securing Amazon EC2.
This can be done through several practices. Review the protocols in your security groups regularly, and ensure the principle of least privilege is applied there. Use AWS Identity and Access Management (IAM) to control and secure access. Restrict access to trusted hosts and networks, and open only the permissions that are required and no others. It is also good practice to disable password-based logins for the instances.
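The least-privilege review can be partly automated. Below is a minimal sketch of such an audit check, assuming a simple in-memory rule representation (the dict shape and port list are illustrative, not an AWS data format):

```python
# Illustrative audit helper (not an AWS API): flag security-group rules
# that violate least privilege by exposing sensitive ports to the world.
SENSITIVE_PORTS = {22, 3389}  # SSH and RDP

def overly_permissive(rules):
    """rules: list of dicts like {"port": 22, "cidr": "0.0.0.0/0"}.
    Returns the rules that open a sensitive port to any source address."""
    return [r for r in rules if r["cidr"] == "0.0.0.0/0" and r["port"] in SENSITIVE_PORTS]

rules = [
    {"port": 22, "cidr": "0.0.0.0/0"},    # SSH open to the world: flagged
    {"port": 443, "cidr": "0.0.0.0/0"},   # public HTTPS: acceptable
    {"port": 22, "cidr": "10.0.0.0/16"},  # SSH from a trusted network: ok
]
print(overly_permissive(rules))
```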
Q: What are the states available in Processor State Control?
It contains two states:
P-state: its levels run from P0 to P15, where P0 represents the highest frequency and P15 the lowest.
C-state: its levels run from C0 to C6, where C6 is the deepest idle (power-saving) state for the processor.
These states can be customized on a few EC2 instance types, which lets users tune the processor as needed.
Q: Which approach restricts third-party software’s access in the Storage Service to the S3 bucket named “Company Backup”?
A custom IAM user policy that limits S3 API access to that bucket.
Q: It is possible to use S3 with EC2 instances. How?
Yes, it’s possible for instances whose root devices are backed by instance storage. Amazon uses some of the most reliable, scalable, fast, and inexpensive networks to host all of its websites, and S3 gives developers access to that same infrastructure. There are tools available in AMIs that users can rely on when executing systems in EC2, and files can simply be moved between EC2 and S3.
Q: Is it possible to speed up data transfer in Snowball? How?
Yes, it’s possible. There are several methods. The first is simply copying from different hosts to the same Snowball in parallel. Another method is creating batches of smaller files, which helps cut down encryption overhead. Data transfer can also be enhanced by running multiple copy operations at the same time, provided the workstation can bear the load.
Q: What method would you use for moving data over a very long distance?
Amazon S3 Transfer Acceleration is a good option. Other options such as Snowball exist, but Snowball does not suit data transfer over very long distances, such as between continents. S3 Transfer Acceleration is the better choice because it routes data over optimized network channels and delivers very fast transfer speeds.
Q: What will happen if you launch the instances in Amazon VPC?
This is a common approach for launching EC2 instances. Each instance launched in an Amazon VPC is given a default private IP address from its subnet’s range. This approach is also used when you need to connect cloud resources with your data centers.
Q: Is it possible to establish a connection between Amazon cloud and a corporate data center? How?
Yes, it’s possible. First, a Virtual Private Network (VPN) connection is established between the Virtual Private Cloud and the organization’s network. After this, the connection can simply be created and data can be accessed reliably.
Q: Why is it not possible to change or modify the private IP address of an EC2 instance when it is running?
This is because the primary private IP address remains with the instance throughout its lifecycle, so it cannot be changed or modified. Secondary private addresses, however, can be changed.
Q: Why are subnets required to be created?
They are needed to utilize a network with a large number of hosts in a manageable way, since managing all of those hosts together is a daunting task. By dividing the network into smaller subnets, management becomes simpler and the chances of errors or data loss are greatly reduced.
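The division itself is plain CIDR arithmetic, which Python’s standard library can demonstrate. The 10.0.0.0/16 range and the /24 subnet size below are arbitrary example values:

```python
import ipaddress

# Split a VPC-sized block into smaller subnets using the standard library.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))

print(len(subnets))              # 256 /24 subnets fit in a /16
print(subnets[0])                # 10.0.0.0/24
print(subnets[0].num_addresses)  # 256 addresses in each subnet
```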
Q: Is it possible to attach multiple subnets to a route table?
Yes, it’s possible. Route tables are used to route network packets. If a subnet had several route tables, the destination of those packets could become ambiguous, which is why a subnet is associated with only one route table. A route table, however, can hold many records and be associated with many subnets, so attaching multiple subnets to one route table is possible.
Q: What happens if the AWS Direct Connect fails to perform its function?
It is recommended to back up Direct Connect, since in case of a failure you can lose connectivity entirely. Enabling BFD (Bi-directional Forwarding Detection) helps detect such failures quickly. If no backup is in place, VPC traffic is dropped and you need to start again from the initial point.
Q: What will happen if the content is absent in CloudFront and a request is made?
CloudFront fetches the content from the origin server, delivers it to the requester, and stores a copy in the cache of the edge location. As a content delivery system, it tries to cut down latency, which is why it behaves this way. If the same request is made a second time, the data is served directly from the edge cache.
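The miss-then-hit behaviour can be sketched with a toy in-memory cache. The names and the stand-in origin below are illustrative only and not CloudFront APIs:

```python
# Minimal sketch of the edge-cache behaviour described above: the first
# request misses and is fetched from the origin; repeats are served from
# the edge cache. Names are illustrative, not CloudFront APIs.
ORIGIN = {"/index.html": "<h1>hello</h1>"}  # stand-in for the origin server
edge_cache = {}

def get(path):
    if path in edge_cache:
        return edge_cache[path], "cache hit"
    content = ORIGIN[path]          # fetch from origin on a miss
    edge_cache[path] = content      # store at the edge location
    return content, "cache miss"

print(get("/index.html"))  # first request: cache miss
print(get("/index.html"))  # second request: cache hit
```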
Q: Is it possible to use CloudFront for transferring objects directly from your own data center?
Yes, it is possible. CloudFront supports custom origins, so this task can be performed. However, you pay for it depending on the data transfer rates.
Q: When should Provisioned IOPS be considered over standard RDS storage in AWS?
When you have batch-oriented workloads, since Provisioned IOPS delivers faster I/O rates, although it is a bit more expensive than the other options. Hosts doing batch processing don’t need manual intervention from users, which is another reason Provisioned IOPS is preferred for them.
Q: Compare RDS, Redshift, and DynamoDB?
RDS is basically a database management service for relational databases; it handles upgrading and patching automatically, but it works with structured data only. Redshift, by contrast, is a data warehouse service used for data analysis. DynamoDB is the choice when there is a need to deal with unstructured data. RDS responds quickly for transactional workloads compared with Redshift and DynamoDB, and all three are robust services for their respective tasks.
Q: Is it possible to run multiple DB for Amazon RDS free of cost?
Yes, it’s possible under the free tier. However, there is a strict upper limit of 750 hours of usage per month, after which everything is billed at standard RDS prices. If you exceed the limit, you are charged only for the extra hours beyond 750.
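The charging rule reduces to simple arithmetic, sketched below (the function name is illustrative; 750 is the free-tier monthly allowance mentioned above):

```python
def billable_hours(used_hours: float, free_tier_hours: float = 750) -> float:
    """Hours charged at the regular RDS rate after the monthly free-tier
    allowance (750 hours across eligible instances) is exhausted."""
    return max(0.0, used_hours - free_tier_hours)

print(billable_hours(700))  # 0.0  - fully within the free tier
print(billable_hours(800))  # 50.0 - only the excess is billed
```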
Q: Name the services which can be used for collecting and processing e-commerce data?
Amazon DynamoDB and Amazon Redshift are the best options. Data from e-commerce websites is generally unstructured; DynamoDB is well suited to collecting and storing it, and Redshift can then be used to process and analyze the collected data.
Q: What is the significance of Connection Draining?
There are certain stages when traffic needs to be re-routed, for example when instances are being updated or checked for bugs or unwanted files that raise security concerns. Connection Draining helps by re-routing new traffic away from instances that are queued to be updated, while allowing their in-flight requests to complete.