AWS Interview Questions


Cloud technology has been adopted by many companies since its introduction. Amazon Web Services (AWS) is the cloud computing platform offered by Amazon; it is widely used and holds the largest market share globally. With this widespread adoption has come growing demand for professionals who can fill the talent gap and build, test, and deploy applications on AWS. If you are thinking of pursuing a career in AWS cloud computing, you should first understand its basics and core concepts.

Here, we have mentioned some frequently asked questions in interviews. You can go through them and prepare yourself for this technology.

1. Explain AWS (Amazon Web Services).

AWS offers cloud computing solutions and APIs to organizations around the world. Beyond basic cloud hosting, AWS provides services such as compute power, database services, content delivery, and more. Organizations consume these services on a pay-per-use basis, paying only for what they use.

With the help of AWS tools and services, any organization can create a distributed computing environment. AWS launched its first web services in 2002 and its core cloud computing services in 2006. There are many cloud computing platforms on the market, but AWS is widely regarded as one of the most flexible and cost-effective. Currently, AWS offers more than 200 services and products. Many of these services are not directly accessible to end users; instead, AWS exposes developer APIs for them. AWS web services are widely consumed over HTTP for business purposes.

2. What is Amazon Elastic Compute Cloud (EC2) and its Features?

EC2 is part of the AWS service portfolio and allows users to rent virtual computers and run their programs on them. It helps users deploy large-scale applications by booting an AMI (Amazon Machine Image) to create a virtual machine instance. EC2 supports creating, launching, and stopping server instances for your organization. With EC2, you pay per second (or per hour, depending on the instance) for every active server. Below are some of its features:

  • You can leverage the persistent storage and elastic IP address.
  • EC2 offers a service named Amazon CloudWatch for monitoring the utilization of the resources like CPU, network, database replicas, etc.
  • It offers an auto-scaling feature allowing you to scale as per the changing traffic.

3. What are the Pricing Models of the Amazon EC2 instance?

It is considered to be an important AWS interview question. EC2 instance comes with four types of pricing models that are mentioned below:

  • On-demand instance – On-demand pricing is also known as the pay-as-you-go model. It allows the user to pay only for the resources/services actually used, billed by the second or hour depending on the instance. The on-demand pricing model is beneficial for short-term or unpredictable workloads, as it does not require any upfront payment.
  • Reserved instance – This is the best pricing model to consider if you can predict your upcoming requirements. Firms estimate their future EC2 needs and pay upfront to get a discount of up to 75%, which is significant for any business. Reserved instances save computing capacity for you, to be used whenever you want.
  • Spot instance – If an organization needs extra computing capacity immediately, it can use spot instances, which offer discounts of up to 90% on unused computing capacity.
  • Dedicated hosts – A customer can reserve a physical EC2 server with this pricing model.
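
The cost trade-off between on-demand and reserved pricing can be sketched with simple arithmetic. The hourly rate and the 75% discount below are illustrative placeholders, not real EC2 prices, which vary by region, instance type, and term:

```python
def monthly_cost(hourly_rate: float, hours: int = 730) -> float:
    """Approximate monthly cost of one instance (~730 hours per month)."""
    return hourly_rate * hours

# Hypothetical rates, for illustration only.
on_demand_rate = 0.10                         # $/hour, pay-as-you-go
reserved_rate = on_demand_rate * (1 - 0.75)   # "up to 75% discount" case

on_demand = monthly_cost(on_demand_rate)   # roughly $73/month
reserved = monthly_cost(reserved_rate)     # roughly $18/month
savings = on_demand - reserved
print(round(on_demand, 2), round(reserved, 2), round(savings, 2))
```

The same arithmetic scales to a whole fleet: multiply by instance count to estimate the upfront-commitment break-even point.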

4. What is Amazon S3?

S3 stands for Simple Storage Service, which offers scalable object storage to IT companies. It is one of the earliest services offered by AWS. S3 comes with an easy-to-use web services interface, allowing users to store and retrieve data from remote locations. S3 uses buckets for storing files/data; because bucket names live in a universal namespace, each bucket name must be unique and generates a unique DNS address. On successfully uploading a file to an S3 bucket, you will receive an HTTP 200 code. S3 allows users to download stored data and secures that data using authentication mechanisms to avoid breaches.
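
The universal-namespace and DNS-address points can be illustrated with a small helper. This is a simplified sketch: the regex covers only the core naming rules (3-63 characters, lowercase letters, digits, dots, and hyphens), and the URL format assumes the common virtual-hosted style:

```python
import re

def is_valid_bucket_name(name: str) -> bool:
    """Check core S3 bucket-naming rules: 3-63 chars, lowercase letters,
    digits, dots and hyphens, starting/ending with a letter or digit."""
    return re.fullmatch(r"[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]", name) is not None

def bucket_url(name: str, region: str = "us-east-1") -> str:
    """Virtual-hosted-style DNS address derived from the bucket name."""
    return f"https://{name}.s3.{region}.amazonaws.com"

print(is_valid_bucket_name("my-company-logs"))  # valid
print(is_valid_bucket_name("My_Bucket"))        # invalid: uppercase/underscore
print(bucket_url("my-company-logs"))
```

Because the bucket name becomes part of a DNS hostname, uniqueness across all AWS accounts is what makes the generated address unambiguous.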

5. Explain different types of cloud service models.

There are three different types of cloud services models that we have mentioned below:

  • IaaS – Infrastructure as a Service (IaaS) provides users with access to virtual desktops and servers over the Internet. With this model, a service provider hosts the server, storage, hardware, etc. on behalf of the users. IaaS platforms are highly scalable and can adapt to changing workloads. IaaS providers handle tasks such as system maintenance, backup, and resilience on behalf of their users.
  • Platform as a Service (PaaS) allows service providers to deliver software and hardware tools to their customers. This service model is used for application development, and anyone can obtain the desired tools from the service provider via the internet. By using PaaS, users do not have to maintain software/hardware in-house for development and testing.
  • Software as a Service (SaaS) provides on-demand software distribution: applications are hosted by the provider and delivered to users over the internet, typically on a subscription basis.

6. What is AWS's auto-scaling feature?

The auto-scaling feature in AWS EC2 helps in scaling the computing capacity automatically as per the changing business needs. It allows you to maintain a steady performance of business processes. It will take only a few minutes to scale multiple resources in AWS. Besides EC2, you can also scale other AWS resources and tools whenever required.

7. Explain the benefits of auto-scaling features.

Below we have mentioned the benefits of the EC2 auto-scaling feature:

  • AWS EC2 offers an easy way to configure auto-scaling. Checking the utilization level of different resources is possible within the same interface without having to switch to another console.
  • It helps in automating the scaling processes. It also keeps track of how a resource will respond to changes. In addition to adding resources, it also reduces compute capacity if this is not required.
  • If you have an unpredictable workload, the auto-scaling feature helps in optimizing the application performance.
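
The scaling decision behind these benefits can be sketched as target-tracking arithmetic: grow or shrink the fleet in proportion to how far a metric sits from its target. This is a conceptual toy, not the actual Auto Scaling algorithm, and the 50% CPU target and instance counts are hypothetical:

```python
import math

def desired_capacity(current_capacity: int, metric_value: float,
                     target_value: float) -> int:
    """Scale the fleet proportionally so the metric returns to its target."""
    return max(1, math.ceil(current_capacity * metric_value / target_value))

# Fleet of 4 instances averaging 90% CPU with a 50% target -> scale out.
print(desired_capacity(4, 90.0, 50.0))  # 8
# Same fleet at 20% CPU -> scale in.
print(desired_capacity(4, 20.0, 50.0))  # 2
```

Rounding up (`ceil`) errs on the side of extra capacity, which mirrors the general bias of target tracking toward availability over cost.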

8. What are S3 storage classes?

S3 storage classes ensure data durability and help withstand the concurrent loss of data across facilities. Every object stored in S3 has an associated storage class. Storage classes also support object lifecycle management, allowing automatic migration between classes and thus saving cost.

9. Mention various types of S3 storage classes.

Four different types of S3 storage classes are mentioned as follows:

  • S3 Standard – The data is duplicated and stored across multiple devices in various facilities, and it can sustain the concurrent loss of data in up to two facilities. It offers low latency and high throughput, resulting in high durability and availability.
  • S3 Standard-IA – 'S3 Standard Infrequent Access' is used in situations where you do not access your data frequently but must have fast access to it whenever you need it. Like S3 Standard, it can sustain the concurrent loss of data in up to two facilities.
  • S3 One Zone-IA – 'S3 One Zone Infrequent Access' is similar to S3 Standard-IA, except that it stores data in a single Availability Zone. As a result, it offers lower availability (99.5%), while S3 Standard and Standard-IA offer 99.99%.
  • S3 Glacier – This is one of the cheapest storage classes and is used primarily for archiving data.

10. Explain Policy in AWS and its various Types.

A policy is an object in AWS that is associated with a respective resource. It helps in defining whether the user request will be granted or not. There are six different types of policies in AWS that are mentioned below:

  • Identity-based policies – These are attached to a single user's identity, a group of users, or a particular role, and store permissions in JSON format. They are further divided into managed and inline policies.
  • Resource-based policies – These are attached to AWS resources, for example an S3 bucket.
  • Permissions boundaries – These define the maximum permissions that identity-based policies can grant to an entity.
  • SCPs – Service Control Policies are stored in JSON format and define the maximum permissions for accounts in an organization.
  • ACLs – Access Control Lists define which principals in other AWS accounts can access a resource. ACLs are not stored in JSON format.
  • Session policies – These limit the permissions granted by a user's identity-based policies for a particular session.
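
The capping effect of a permissions boundary can be illustrated as a set intersection: an action is effective only if both the identity-based policy and the boundary allow it. The action names below are illustrative, and real IAM evaluation also involves explicit denies, conditions, and resource policies:

```python
def effective_permissions(identity_policy: set, boundary: set) -> set:
    """A permissions boundary caps identity-based policies: the effective
    permissions are the intersection of the two sets of allowed actions."""
    return identity_policy & boundary

identity_policy = {"s3:GetObject", "s3:PutObject", "ec2:StartInstances"}
boundary = {"s3:GetObject", "s3:PutObject"}  # EC2 actions outside the boundary

# ec2:StartInstances is granted by the identity policy but filtered out
# because the boundary does not include it.
print(sorted(effective_permissions(identity_policy, boundary)))
```

The same intersection logic applies to session policies, which further narrow what a temporary session can do.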

11. Explain AWS VPC.

Amazon VPC (Virtual Private Cloud) helps users launch the AWS resources into a virtual network that is defined by the user only. Since the virtual network is defined by the user, they can also control the various aspects of the virtual network, such as the creation of subnet, IP address, etc.

Any organization can install the virtual network and leverage all the benefits offered by AWS. Not only this, they will also be able to create the routing tables (containing rules for defining the direction of the incoming traffic) for their virtual network. You can use the internet gateway of AWS VPC for establishing communication between the virtual network and the internet.

You can also access the Amazon VPC via various interfaces like AWS management console, AWS CLI (Command Line Interface), AWS SDKs, and Query API.

12. Mention various types of elastic load balancers in AWS.

Elastic load balancing in AWS has three different types of load balancers. The load balancers help in routing the incoming traffic in AWS. Here, we have mentioned three types of load balancers in AWS.

  • Application load balancer – this load balancer specifies the routing decisions that are made at the application layer. It helps in providing path-based routing at the HTTP/HTTPS (layer 7). It routes the incoming requests to various container instances. It can route a request to more than one port under the container instances.
  • Network load balancer – this load balancer specifies the decisions that are made at the transport layer (SSL/TCP). It works on a flow hash routing algorithm for determining the target on the port from the group of targets. After selecting the target, it establishes a TCP connection with that target based on the listener configuration.
  • Classic load balancer – this legacy load balancer operates at either the application layer or the transport layer. It allows the user to map a load balancer port to only one container instance port (fixed mapping).
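
The flow-hash idea behind the network load balancer can be illustrated with a toy routing function: hashing the connection 5-tuple guarantees every packet of a flow reaches the same backend. The target IPs are hypothetical and real NLBs use their own internal hashing, so this is only a conceptual sketch:

```python
import hashlib

def pick_target(flow: tuple, targets: list) -> str:
    """Hash the (protocol, src, sport, dst, dport) 5-tuple to pick a target,
    so all packets of one flow land on the same backend."""
    digest = hashlib.md5(repr(flow).encode()).hexdigest()
    return targets[int(digest, 16) % len(targets)]

targets = ["10.0.1.10", "10.0.1.11", "10.0.1.12"]
flow = ("tcp", "203.0.113.5", 54321, "10.0.0.1", 443)

# The same flow always hashes to the same backend target.
print(pick_target(flow, targets) == pick_target(flow, targets))  # True
print(pick_target(flow, targets) in targets)                     # True
```

An application load balancer, by contrast, would inspect the HTTP request itself (path, host header) before choosing a target group.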

13. Explain NAT gateways in AWS.

NAT stands for Network Address Translation. A NAT device in AWS allows an EC2 instance in a private subnet to connect to the internet and to other AWS services, while preventing the internet from initiating connections to that instance.

Connecting an instance in a private subnet to the internet by other means (such as giving it a public IP and an internet gateway route) would effectively make it public. NAT, however, keeps the subnet private while still allowing outbound connections from the EC2 instance to the internet. Users can create either NAT gateways or NAT instances to make this connection.

As opposed to NAT instances, which are single EC2 instances, NAT gateways can be used across multiple availability zones. The amount of traffic allowed by a NAT instance depends on the size of the instance.

14. Explain the Amazon Aurora AWS RDS database.

The Aurora database was developed specifically for AWS RDS, so you cannot run it on local hardware outside the AWS infrastructure. This relational database is widely preferred because it provides enhanced availability and speed.

15. Explain Amazon Redshift.

Redshift is a data warehouse service offered by Amazon and deployed in the cloud. Compared to other data warehouses, it is considered fast and highly scalable; on average, Redshift offers up to ten times the performance and speed of other cloud data warehouses. It relies on technologies such as machine learning and columnar storage to ensure high stability and performance. Redshift allows you to scale from terabytes up to petabytes.

Redshift uses OLAP (online analytical processing) as its analytics processing model and organizes each cluster into a leader node and compute nodes for storing data. It achieves high speed through advanced compression and parallel processing, and it allows you to add new nodes to the warehouse. As a result, Redshift lets developers answer queries faster.

16. Explain AMI (Amazon Machine Image).

AMI stands for Amazon Machine Image and is used for creating a virtual machine within the EC2 environment. AMIs are the basis for deploying the services delivered via EC2. An AMI contains a read-only filesystem image that includes an operating system. It also carries launch permissions that specify which AWS accounts may launch instances from it. When an instance is launched, the volumes attached to it are determined by the block device mapping in the AMI.

There are three types of AMIs.

A Public AMI can be used by any user/client; a Paid AMI can be used after purchasing it from its owner; and a Shared AMI is made available by its developer to specific accounts, offering more flexibility.

17. Explain vertical scaling in AWS.

Vertical scaling means changing the size of an RDS/EC2 instance to match demand: you pick a larger instance size to scale up and a smaller instance size to scale down.

18. Explain horizontal scaling in AWS.

In horizontal scaling, the number of instances is changed as per the requirements rather than their size: you add or remove nodes/instances in the system. Horizontal auto-scaling is typically driven by metrics such as the number of connections between the instances and the integrated ELB (Elastic Load Balancer).
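
The contrast between the two scaling styles can be sketched in a few lines. The instance size names are illustrative and this is a conceptual toy, not an AWS API call:

```python
INSTANCE_SIZES = ["t3.small", "t3.medium", "t3.large", "t3.xlarge"]

def scale_vertically(size: str, up: bool = True) -> str:
    """Vertical scaling: swap the instance for a larger or smaller size."""
    i = INSTANCE_SIZES.index(size)
    i = min(i + 1, len(INSTANCE_SIZES) - 1) if up else max(i - 1, 0)
    return INSTANCE_SIZES[i]

def scale_horizontally(instance_count: int, delta: int) -> int:
    """Horizontal scaling: change how many instances are running."""
    return max(1, instance_count + delta)

print(scale_vertically("t3.medium"))   # one size up: 't3.large'
print(scale_horizontally(3, 2))        # two more instances: 5
```

Vertical scaling usually requires restarting the instance, while horizontal scaling adds capacity without touching the existing nodes, which is why auto-scaling groups scale horizontally.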

19. Explain AWS CloudTrail.

AWS CloudTrail helps the users to audit their AWS account while ensuring the compliance and governance of the AWS account. Once you create an AWS account, CloudTrail starts working and keeps records of every activity on your account. You can monitor all the activities via the CloudTrail console. All the AWS services activities will also get recorded. It ensures that you will get enhanced visibility of your account.

20. Explain AWS Lambda.

AWS Lambda is a serverless computing platform: it runs your code without requiring you to provision or manage servers. Whenever an event triggers, your code runs on AWS Lambda, which provisions the resources required for that execution. Lambda supports various languages such as Python, Java, Node.js, and others. With Lambda, you pay only for the compute time your code actually consumes. It can also run code in response to HTTP requests, and it automatically manages the required resources such as memory, CPU, disk space, and others.

21. Explain AWS Shield.

If you want to protect applications running on AWS from DDoS attacks, you can use AWS Shield. It automatically detects DDoS attacks and reduces downtime and latency for the application. There is no need to involve the IT team, as everything can be automated using AWS Shield, and its standard protection is enabled for all AWS customers. It also provides real-time visibility into and monitoring of attacks on your AWS application.

22. Explain AWS CloudWatch.

You can use AWS CloudWatch to monitor the real-time usage of AWS services and resources. It uses various metrics to track service usage; by default, you can see several AWS service-related metrics on the CloudWatch dashboard, and you can also customize which metrics you want to see. You can access AWS CloudWatch via its console, the AWS CLI, or the AWS SDKs. It also provides health details for AWS services.

23. Mention various types of virtualization in AWS.

AWS supports three different types of virtualization.

  • HVM (Hardware Virtual Machine) provides full virtualization of the hardware, presenting each virtual machine as an independent unit. An HVM instance boots by executing the master boot record of the root block device included in the AMI.
  • PV (Paravirtualization) is a lighter form of virtualization compared to HVM. The guest OS must be modified to run under PV; these modifications allow the hypervisor to export scalable, modified hardware to the virtual machines.
  • PV on HVM combines the two, allowing the guest OS to access storage and network I/O through the host's paravirtualized drivers for enhanced performance.

24. Explain the cross-region replication service offered by AWS.

You can use cross-region replication to copy data from one bucket to another, where the two buckets are in different regions. It copies data between the buckets asynchronously and is managed from the same AWS Management Console. The bucket you copy data from is called the source bucket, and the other is the destination bucket. You need to enable versioning on both buckets. Note that replication is one-way: objects replicated into the destination bucket are not replicated onward again.

25. Explain CloudFront CDN.

CloudFront CDN is a group of distributed servers used for delivering web content. Delivery is based on the geographic location of the user, the origin of the web content, and the serving edge server. You must define the origins of the files being distributed via the CDN; an origin can be an S3 bucket, an AWS instance, etc. CloudFront supports two distribution types: web distributions, which deliver websites and other content, and RTMP distributions, which deliver streaming media.

26. Explain AWS Web Application Firewall (WAF).

AWS WAF is a firewall service that helps protect web-based applications from being exploited. It protects applications against bots that can degrade performance and consume resources unnecessarily. It also helps control the incoming traffic to web applications: users can create their own traffic rules to block particular traffic patterns and improve application performance. You can use the AWS WAF API to set rules for incoming traffic.

27. Explain Simple Notification Service by AWS.

AWS offers the Simple Notification Service (SNS), which helps send messages from one application to another. It is a cost-effective way to publish messages from one application and deliver them to others. It can also send notifications to mobile platforms such as Android, Windows Phone, and Apple (iOS) devices. SNS supports grouping different types of endpoints: several endpoints can subscribe to one topic, so that, for example, Apple and Android recipients are grouped together and a message is delivered to all subscribers.

28. Explain Amazon EMR.

Amazon offers EMR (Elastic MapReduce), a service that helps with data processing. The service runs on groups of EC2 instances called clusters. Each EC2 instance within a cluster is called a node and has a specific role attached to it. Every cluster has a master node that assigns roles to the other nodes in the cluster; the master node also monitors the nodes' performance and overall health.

29. Explain the S3 transfer acceleration service by Amazon.

With the help of this service, you can upload files to S3 more quickly. Instead of uploading a file directly to the S3 bucket, it uploads it to the nearest edge location using a distinct URL, and the edge location then transfers it to the target S3 bucket. It uses the CloudFront CDN's edge network to accelerate uploads and optimize the transfer process. It also ensures a secure transfer between the clients and the S3 bucket.

30. Mention core services of Amazon Kinesis.

Amazon offers a data streaming platform known as Amazon Kinesis and it offers three core services as mentioned below.

  • Kinesis Streams – it stores streaming data in shards, the units of storage and throughput produced while data is streaming. Users can read from these shards to consume the data; once consumed, the data can be transferred to AWS storage such as DynamoDB, etc.
  • Kinesis Firehose – it helps deliver streaming data to various AWS destinations such as S3 buckets, Redshift, etc.
  • Kinesis Analytics – it helps users analyze streaming data and derive insights, allowing you to run SQL queries on the data flowing through Kinesis Streams or Firehose.
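
How records land on shards can be sketched as follows: the partition key is hashed into a large number, and each shard owns a slice of that hash range. This toy uses equal slices and MD5 (which Kinesis also uses for partition keys), but it is a simplification of the real shard hash-range mechanics:

```python
import hashlib

def shard_for_key(partition_key: str, shard_count: int) -> int:
    """Map a partition key to a shard by hashing it into the 128-bit
    hash space and dividing that space into equal slices."""
    key_hash = int(hashlib.md5(partition_key.encode()).hexdigest(), 16)
    slice_size = 2 ** 128 // shard_count
    return min(key_hash // slice_size, shard_count - 1)

# The same partition key always lands on the same shard,
# which preserves record ordering per key.
print(shard_for_key("device-42", 4) == shard_for_key("device-42", 4))  # True
print(0 <= shard_for_key("device-42", 4) < 4)                          # True
```

This is why choosing a high-cardinality partition key matters: too few distinct keys leave some shards idle while others become hot.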

31. Mention benefits of AWS RDS.

Below are the benefits of AWS RDS.

  • It allows you to control and modify the database's compute and storage resources (CPU, storage, etc.)
  • It helps in ensuring the automatic backup and making the latest config updates to the database.
  • It helps in creating and maintaining the backup instance that can be used in case of failure and prevents loss of the data.
  • It helps in creating RDS read replicas for distributing the read traffic from the source database.

32. Explain the difference between AWS CloudFormation and AWS Elastic Beanstalk.

AWS CloudFormation helps in provisioning the resources available in the cloud. It describes all the infrastructural resources. While Elastic Beanstalk helps in providing a suitable environment for deploying and operating the application.

CloudFormation helps in fulfilling the infrastructural needs of the running applications while Elastic Beanstalk helps in managing the application lifecycle that is deployed in the cloud. CloudFormation takes into account different types of applications, such as enterprise applications, legacy applications, etc., whereas Elastic Beanstalk does not consider the type of application when managing their lifecycle.

33. How does AWS Config work with AWS CloudTrail?

CloudTrail helps record user API activity in relation to an AWS account, allowing you to monitor details of each API call such as the response elements and the caller's identity. If you integrate AWS Config with CloudTrail, you can also check the configuration details of the associated AWS resources. Together, AWS Config and CloudTrail help identify whether something is wrong with how resources are being used.

AWS Config focuses primarily on the changes made to resources, while CloudTrail focuses on the users making those changes. The two services can be used together to strengthen governance and compliance.

34. How do you avoid losing connectivity if AWS Direct Connect goes down?

To avoid such situations, you should provision a backup AWS Direct Connect connection; if the primary connection fails, connectivity will shift to the backup automatically. You can also enable Bidirectional Forwarding Detection (BFD), which helps detect failures quickly and trigger failover. Alternatively, you can configure an IPsec VPN connection as the backup, which will automatically carry the traffic; in that case, all traffic will be routed over the internet.

35. Explain volume in AWS.

A volume is block-level storage in AWS that can be attached to an EC2 instance. It works like a hard disk used for reading and writing data. When you use a volume, you pay for the storage you have provisioned, since volumes are billed by their allocated size.

36. Explain snapshots in AWS.

A snapshot is a view of a volume at a particular point in time. A snapshot is created whenever you copy the data stored in a volume to another location at a single point in time.

37. How do you handle a failure that occurred during AWS Lambda processing?

If you use AWS Lambda to process an event in synchronous mode, any failure raises an exception that is surfaced to the calling application. If the event is processed in asynchronous mode, however, the function is retried on failure, so it may be invoked up to three times in total by default.
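
The asynchronous retry behavior can be sketched as a toy loop. The three-attempt total mirrors Lambda's default for asynchronous invocations; the flaky handler and the dead-letter comment are illustrative, not real Lambda internals:

```python
def invoke_with_retries(handler, event, max_attempts: int = 3):
    """Invoke a handler, retrying on failure up to max_attempts total;
    returns (result, attempts_made)."""
    attempts = 0
    for _ in range(max_attempts):
        attempts += 1
        try:
            return handler(event), attempts
        except Exception:
            continue
    # After all attempts fail, Lambda would route the event to a
    # dead-letter queue or on-failure destination, if configured.
    return None, attempts

calls = []
def flaky_handler(event):
    calls.append(event)
    raise RuntimeError("downstream unavailable")

result, attempts = invoke_with_retries(flaky_handler, {"id": 1})
print(result, attempts)  # None 3
```

In synchronous mode there is no retry loop: the exception would propagate straight back to the caller on the first attempt.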

38. Explain Amazon WorkSpaces.

Amazon WorkSpaces ensures that you will get virtual or cloud-based desktops for working purposes. These desktops are known as workspaces. If you are using the Amazon WorkSpaces then you do not have to worry about deploying the physical hardware and software. Amazon WorkSpace allows you to install Microsoft Windows or Linux virtual desktops that can be accessed via many devices or web browsers.

It allows you to choose any type of hardware or software configuration. Also, Amazon offers you WAM (WorkSpaces Application Manager) that allows you to deploy and manage various applications on virtual desktops.

39. Explain AWS IAM.

IAM stands for Identity and Access Management, which provides users a secure way to access AWS resources and services. It allows you to create groups of users and assign them particular sets of permissions. You can access the IAM features from the AWS Management Console of your AWS account.

40. Mention the difference between security groups and network access control list.

Security groups control access at the instance level, while network access control lists control access at the subnet level. With a network access control list, you can add both "allow" and "deny" rules, while with security groups you can only add "allow" rules.
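
The difference in rule semantics can be sketched with two toy evaluators. The rule numbers and ports are hypothetical, and real evaluation also considers protocol, CIDR ranges, and direction:

```python
def nacl_allows(rules, port: int) -> bool:
    """Network ACL sketch: rules are evaluated in rule-number order and can
    explicitly allow or deny; the first match wins (default: deny)."""
    for _, rule_port, action in sorted(rules):
        if rule_port == port:
            return action == "allow"
    return False

def security_group_allows(allowed_ports: set, port: int) -> bool:
    """Security group sketch: only 'allow' rules exist; anything not
    explicitly allowed is implicitly denied."""
    return port in allowed_ports

nacl = [(100, 443, "allow"), (200, 22, "deny")]   # (rule number, port, action)
print(nacl_allows(nacl, 443))                      # True: explicit allow
print(nacl_allows(nacl, 22))                       # False: explicit deny
print(security_group_allows({80, 443}, 22))        # False: no deny rule needed
```

Security groups are also stateful (return traffic is automatically allowed), whereas NACLs are stateless, another difference worth mentioning in an interview.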

41. Explain how an application load balancer is good for routing the incoming traffic.

An application load balancer can redirect a user's request for image rendering to the image rendering servers, while requests for general computing are redirected to the computing servers. This path-based routing balances the load across the various servers.

42. How to manage the read contention on RDS MySQL.

You can manage it by installing ElastiCache in the various Availability Zones where the EC2 instances run, which creates a cached version of the website in each zone. You can then add an RDS MySQL read replica in each Availability Zone to improve the website's performance further. This takes load off the primary RDS MySQL instance, solving the read contention issue, and the cached version of the website in each Availability Zone lets users access it quickly.

43. How can you ensure better performance when connecting your company's data center to the Amazon cloud environment?

First, create a VPC (virtual private cloud) and connect the data center to it through a VPN connection; you can then launch AWS resources inside the VPC. The VPN creates a secure connection between the company's data center and the AWS network. Also, make sure to create multiple backups of your company data before moving it to the cloud.

44. What approach will you use for uploading a 120 MB file to Amazon S3?

If you want to upload a file larger than 100 MB, you can leverage the AWS S3 multipart upload utility. It splits the 120 MB file into many parts, each uploaded individually, and merges them back into the original file once all parts are uploaded. Uploading parts in parallel reduces the overall upload time. You can use the AWS CLI's high-level S3 commands to upload and download files; these commands automatically switch to multipart transfers once the file size crosses a threshold.
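
The splitting step can be sketched with simple arithmetic. The 8 MB part size below is one common default; real multipart uploads let you choose any part size between 5 MB and 5 GB:

```python
import math

def plan_multipart_upload(file_size_mb: int, part_size_mb: int = 8):
    """Split a file into fixed-size parts plus a (possibly smaller) final
    part; returns (part_count, list of part sizes in MB)."""
    parts = math.ceil(file_size_mb / part_size_mb)
    sizes = [part_size_mb] * (parts - 1)
    sizes.append(file_size_mb - part_size_mb * (parts - 1))
    return parts, sizes

parts, sizes = plan_multipart_upload(120)
print(parts)       # 15 parts of 8 MB for a 120 MB file
print(sum(sizes))  # 120 -- the parts reassemble into the original file
```

Each part can be uploaded concurrently and retried independently, which is what makes multipart upload both faster and more resilient than a single 120 MB request.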

45. How will you add the email functionality on your application running on the AWS?

Amazon has a variety of services that can handle almost any scenario within AWS. If you want to add email functionality to your AWS-based application, you can use Amazon SES (Simple Email Service). It lets you set up various types of email services such as email forwarding, mass emailing, and transactional emailing. It is a cost-effective and secure solution for integrating email functionality into applications running on AWS, and it allows you to send emails globally.

46. What are the Security Best Practices for Amazon EC2?

For implementing the secured Amazon EC2 best practices, you can follow the below-mentioned steps.

  • Use AWS Identity and Access Management (IAM) to control access to your AWS resources
  • Restrict access so that only trusted hosts or networks can reach the various ports on your instance
  • Review your security group rules regularly
  • Open up permissions only where you actually require them
  • Disable password-based logins in favor of key pairs

47. Explain the difference between Stopping and Terminating an EC2 instance?

When we talk about stopping and terminating, there is an important difference between the two. Stopping an EC2 instance performs a normal shutdown; the instance enters the stopped state and can be started again later. Terminating an EC2 instance also shuts it down, but the instance is then deleted permanently, and any attached EBS volumes configured to delete on termination are removed as well.

48. How can you recover/login to an EC2 instance for which you have lost the key?

You can follow the below-mentioned steps for recovering the EC2 instance if you have lost the key:

  • First, verify that the EC2Config service is running, then stop the instance
  • Detach the root volume from the instance
  • Attach that root volume to a temporary instance
  • Make the required changes to the configuration file
  • Detach the volume, reattach it to the original instance, and restart the instance

49. Explain some security products and features available in VPC.

Below are some security products and features:

  • Security groups – these act as a firewall for EC2 instances, controlling inbound and outbound traffic at the instance level.
  • Network access control lists – these act as a firewall for subnets, controlling inbound and outbound traffic at the subnet level.
  • Flow logs – these capture the inbound and outbound traffic of the network interfaces in your VPC.

50. How can you add an existing instance to a new Auto Scaling group?

You can follow the below steps for adding an existing instance to a new Auto Scaling group:

  • First, open your EC2 console
  • Select the required instance under Instances
  • Go to Actions -> Instance Settings -> Attach to Auto Scaling Group
  • Select a new Auto Scaling group
  • Attach this group to the instance
  • Make changes to the instance if required
  • The instance is now added to the new Auto Scaling group

Conclusion

AWS offers a wide variety of services, and with them a wide range of job opportunities. If you are unfamiliar with Amazon Web Services, start with the basics to familiarize yourself with it. This article gives you a basic understanding of what services AWS offers and where you can use them; to build practical knowledge, we have also included some scenario-based questions.
