AWS Interview Questions

AWS stands for Amazon Web Services; it is a collection of remote computing services, also known as a cloud computing platform. This realm of cloud computing is also commonly called IaaS, or Infrastructure as a Service.
S3 stands for Simple Storage Service. You can use the S3 interface to store and retrieve any amount of data, at any time and from anywhere on the web. For S3, the payment model is “pay as you go”.
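As an illustration, here is a minimal sketch of storing and retrieving an object with the boto3 SDK for Python; the bucket and key names are placeholders, and valid AWS credentials are assumed:

```python
import boto3

# Create an S3 client; credentials are resolved from the environment,
# ~/.aws/credentials, or an attached IAM role.
s3 = boto3.client("s3")

# Store any amount of data under a bucket/key pair ("pay as you go").
s3.put_object(Bucket="my-example-bucket", Key="notes/hello.txt",
              Body=b"Hello from S3")

# Retrieve the same object from anywhere on the web.
response = s3.get_object(Bucket="my-example-bucket", Key="notes/hello.txt")
print(response["Body"].read())  # b'Hello from S3'
```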
AMI stands for Amazon Machine Image. It’s a template that provides the information (an operating system, an application server, and applications) required to launch an instance, which is a copy of the AMI running as a virtual server in the cloud. You can launch instances from as many different AMIs as you need.
From a single AMI, you can launch multiple types of instances. An instance type defines the hardware of the host computer used for your instance; each instance type provides different compute and memory capabilities. Once you launch an instance, it looks like a traditional host, and you can interact with it as you would with any computer.
An AMI includes the following:
• A template for the root volume of the instance
• Launch permissions that control which AWS accounts can use the AMI to launch instances
• A block device mapping that determines the volumes to attach to the instance when it is launched
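A minimal sketch of launching an instance from an AMI with boto3; the region, AMI ID, and instance type below are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one instance from an AMI. The AMI supplies the root volume
# template and block device mapping; the instance type supplies the
# hardware (compute and memory) of the host.
result = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
print(result["Instances"][0]["InstanceId"])
```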
Amazon S3 is a REST service; you can send requests using the REST API directly, or via the AWS SDK wrapper libraries that wrap the underlying Amazon S3 REST API.
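Because every S3 operation maps to an HTTP request, the SDK can also hand you a plain, pre-signed REST URL. A sketch with boto3 (bucket and key are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Generate a signed HTTPS GET URL for the underlying REST API;
# anyone holding the URL can fetch the object for the next hour.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-example-bucket", "Key": "notes/hello.txt"},
    ExpiresIn=3600,
)
print(url)  # a plain REST GET against the S3 HTTP endpoint
```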
By default, you can create up to 100 buckets in each of your AWS accounts; this limit can be raised by requesting a service quota increase.
Yes, you can vertically scale an Amazon instance. To do so:
• Spin up a new, larger instance than the one you are currently running
• Pause that instance and detach its root EBS volume from the server, discarding it
• Then stop your live instance and detach its root volume
• Note the unique device ID and attach that root volume to your new server
• Start it again
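For EBS-backed instances there is also a simpler, API-driven route: stop the instance, change its instance type, and start it again. A boto3 sketch, where the instance ID and target type are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # placeholder

# Vertical scaling of an EBS-backed instance: stop, resize, start.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

ec2.modify_instance_attribute(InstanceId=instance_id,
                              InstanceType={"Value": "m5.xlarge"})

ec2.start_instances(InstanceIds=[instance_id])
```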
T2 instances are designed to provide a moderate baseline performance plus the capability to burst to higher performance when the workload requires it.
With private and public subnets in a VPC, database servers should ideally be launched into private subnets.
To follow Amazon EC2 security best practices:
• Use AWS Identity and Access Management (IAM) to control access to your AWS resources
• Restrict access by allowing only trusted hosts or networks to reach ports on your instance
• Review the rules in your security groups regularly
• Open up only the permissions that you require
• Disable password-based logins for instances launched from your AMI
A buffer makes the system more robust against traffic or load by synchronizing different components. Components usually receive and process requests at uneven rates; with a buffer between them, the components are balanced and work at the same pace, providing faster service.
The possible connection errors one might encounter while connecting to instances are:
• Connection timed out
• User key not recognized by the server
• Host key not found, permission denied
• Unprotected private key file
• Server refused our key, or no supported authentication method available
• Error using MindTerm on the Safari browser
• Error using the Mac OS X RDP client
Auto Scaling is an AWS feature that allows you to configure and automatically provision and spin up new instances without manual intervention. You do this by setting thresholds on metrics to monitor; when a threshold is crossed, a new instance of your choosing is spun up, configured, and rolled into the load balancer pool.
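A sketch of the threshold-and-metric idea using a target-tracking policy on an existing Auto Scaling group with boto3; the group and policy names are placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Keep average CPU across the group near 50%; new instances are
# launched automatically when the metric rises above the target and
# removed again when load drops.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-web-asg",  # placeholder group name
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```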
The most obvious way is to roll your own scripts using the AWS API tools; such scripts could be written in Bash, Perl, or another language of your choice. The next option is to use a configuration management and provisioning tool like Puppet, or an alternative such as Chef (formerly Opscode Chef). You might also look at a tool like Scalr. Lastly, you can go with a managed solution such as RightScale.
What does the command ec2-create-group CreateSecurityGroup do?
A. Groups the user-created security groups into a new group for easy access
B. Creates a new security group for use with your account
C. Creates a new group inside the security group
D. Creates a new rule inside the security group
Answer: B. ec2-create-group creates a new security group for your account.
Starting, stopping, and terminating are the three states of an EC2 instance. Let’s discuss them in detail:
• Stopping and starting an instance: When an instance is stopped, it performs a normal shutdown and then transitions to a stopped state. All of its Amazon EBS volumes remain attached, and you can start the instance again at a later time. You are not charged for additional instance hours while the instance is in a stopped state.
• Terminating an instance: When an instance is terminated, it performs a normal shutdown, and then the attached Amazon EBS volumes are deleted unless a volume’s deleteOnTermination attribute is set to false. The instance itself is also deleted, and you can’t start it again at a later time.
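These state transitions map directly onto three API calls; a boto3 sketch with a placeholder instance ID:

```python
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # placeholder

ec2.stop_instances(InstanceIds=[instance_id])       # EBS volumes stay attached
ec2.start_instances(InstanceIds=[instance_id])      # resume the stopped instance
ec2.terminate_instances(InstanceIds=[instance_id])  # permanent; cannot restart
```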
Network performance depends on the instance type and its network performance specification. If launched in a placement group, you can expect up to:
• 10 Gbps in a single flow
• 20 Gbps multi-flow, i.e. full duplex
• Network traffic outside the placement group is limited to 5 Gbps (full duplex)
• Amazon RDS is a database management service for relational databases: it manages patching, upgrading, and backing up of your databases without your intervention. RDS is a DB management service for structured data only.
• DynamoDB, on the other hand, is a NoSQL database service; NoSQL deals with unstructured data.
• Redshift is an entirely different service: it is a data warehouse product used for data analysis.
When you delete a DB instance, you have the option of creating a final DB snapshot; if you do, you can restore your database from that snapshot. RDS retains this user-created DB snapshot, along with all other manually created DB snapshots, after the instance is deleted. Automated backups are deleted; only manually created DB snapshots are retained.
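A sketch of deleting an RDS instance while requesting that final snapshot, using boto3 (both identifiers are placeholders):

```python
import boto3

rds = boto3.client("rds")

# Delete the DB instance but keep a final, manually-retained snapshot
# from which the database can later be restored.
rds.delete_db_instance(
    DBInstanceIdentifier="my-database",             # placeholder
    SkipFinalSnapshot=False,
    FinalDBSnapshotIdentifier="my-database-final",  # placeholder
)
```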
Scalability is the ability of a system to increase its hardware resources to handle an increase in demand, either by raising the hardware specifications or by adding processing nodes. Elasticity is the ability of a system to handle an increased workload by adding hardware resources when demand increases (as with scaling), but also to roll back the scaled resources when they are no longer needed. This is particularly helpful in cloud environments, where a pay-per-use model is followed.
CloudTrail files are delivered according to S3 bucket policies. If the bucket is not configured or is misconfigured, CloudTrail might not be able to deliver the log files.
When an event like this occurs, the “automatic rollback on error” feature kicks in, which causes all the AWS resources that were created successfully up to the point where the error occurred to be deleted. This is helpful since it does not leave behind any partial state: stacks are either created fully or not at all. It is useful in events where you accidentally exceed your limit of Elastic IP addresses, or do not have access to an EC2 AMI that you are trying to run.
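The rollback behaviour can also be stated explicitly when a stack is created; a boto3 sketch where the stack name and template URL are placeholders:

```python
import boto3

cloudformation = boto3.client("cloudformation")

# OnFailure='ROLLBACK' (the default) deletes every resource that was
# created before the error, so the stack exists fully or not at all.
cloudformation.create_stack(
    StackName="my-stack",                             # placeholder
    TemplateURL="https://example.com/template.yaml",  # placeholder
    OnFailure="ROLLBACK",
)
```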
Traditional perimeter security of the kind we're already familiar with, using firewalls and so forth, is not supported in the Amazon EC2 world. Instead, AWS supports security groups. One can create a security group for a jump box with SSH access only (just port 22 open). From there, a webserver group and a database group are created. The webserver group allows 80 and 443 from the world, but port 22 *only* from the jump box group. Further, the database group allows port 3306 from the webserver group and port 22 from the jump box group. Add any machines to the webserver group and they can all hit the database; no one from the world can, and no one can directly SSH to any of your boxes. Want to lock this configuration down further? Only allow SSH access from specific IP addresses on your network, or just from your subnet.
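A sketch of that jump-box layout in boto3; the VPC ID and the office CIDR are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"  # placeholder

def make_group(name, desc):
    return ec2.create_security_group(
        GroupName=name, Description=desc, VpcId=vpc_id)["GroupId"]

jump_sg = make_group("jump-box", "SSH entry point")
web_sg = make_group("webserver", "public web tier")
db_sg = make_group("database", "private data tier")

def allow(group_id, port, source):
    """Open one TCP port, either to a CIDR or to another security group."""
    perm = {"IpProtocol": "tcp", "FromPort": port, "ToPort": port}
    if source.startswith("sg-"):
        perm["UserIdGroupPairs"] = [{"GroupId": source}]
    else:
        perm["IpRanges"] = [{"CidrIp": source}]
    ec2.authorize_security_group_ingress(GroupId=group_id, IpPermissions=[perm])

allow(jump_sg, 22, "203.0.113.0/24")  # SSH only from your office subnet
allow(web_sg, 80, "0.0.0.0/0")        # web tier open to the world
allow(web_sg, 443, "0.0.0.0/0")
allow(web_sg, 22, jump_sg)            # SSH to web tier only via jump box
allow(db_sg, 3306, web_sg)            # MySQL only from the web tier
allow(db_sg, 22, jump_sg)             # SSH to DB tier only via jump box
```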
When you launch an instance, the Root Device Volume contains the image used to boot the instance. You can launch an instance from one of two types of AMIs:
1. Instance store-backed AMI
2. Amazon EBS-backed AMI
Amazon Web Services provides several ways to access Amazon EC2: a web-based interface, the AWS Command Line Interface (CLI), and the AWS Tools for Windows PowerShell. You first need to sign up for an AWS account, after which you can access Amazon EC2. Amazon EC2 also provides a Query API; these are HTTP or HTTPS requests that use the HTTP verbs GET or POST and a Query parameter named Action.
Amazon SQS (Simple Queue Service) is a message-queuing service used for communication between distributed components that are connected with each other. It acts as a communicator between the various components of an application on Amazon, holding messages until they are processed. This lets the components stay loosely coupled and provides an architecture that is more resilient to failure.
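A sketch of two components communicating through an SQS queue with boto3; the queue name and message body are placeholders:

```python
import boto3

sqs = boto3.client("sqs")

# Producer and consumer share only the queue, not each other's
# addresses, which is what keeps the components loosely coupled.
queue_url = sqs.create_queue(QueueName="orders")["QueueUrl"]  # placeholder

sqs.send_message(QueueUrl=queue_url, MessageBody="order-42 placed")

messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1,
                               WaitTimeSeconds=10)
for msg in messages.get("Messages", []):
    print(msg["Body"])
    # Delete after successful processing so it is not redelivered.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```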
Configuration management has been around for a long time in web operations and systems administration, yet its cultural popularity has been limited. Most systems administrators configure machines the way software was developed before version control: by manually making changes on servers. Each server can then be, and usually is, slightly different. Troubleshooting, though, is straightforward, as you log in to the box and operate on it directly. Configuration management brings a large automation tool into the picture, managing servers like the strings of a puppet. This enforces standardization, best practices, and reproducibility, as all configurations are versioned and managed. It also introduces a new way of working, which is the biggest hurdle to its adoption. Enter the cloud, where configuration management becomes even more critical, because virtual servers such as Amazon's EC2 instances are much less reliable than physical ones. You absolutely need a mechanism to rebuild them as-is at any moment. This pushes best practices like automation, reproducibility, and disaster recovery to center stage.
You are charged for an Elastic IP address when it is allocated and associated with a stopped instance.
Yes, Amazon S3 can be used for instances with root devices backed by local instance storage. By using Amazon S3, developers have access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of websites. To execute systems in the Amazon EC2 environment, developers use the provided tools to load their Amazon Machine Images (AMIs) into Amazon S3 and to move them between Amazon S3 and Amazon EC2. Another use case is websites hosted on EC2 loading their static content from S3.
Amazon S3 Transfer Acceleration.
You would not use Snowball because, for now, the Snowball service does not support cross-region data transfer, and since we are transferring across countries, Snowball cannot be used. Transfer Acceleration is the right choice here, as it speeds up your data transfer using optimized network paths and Amazon’s content delivery network, reaching up to 300% of normal data transfer speed.
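Transfer Acceleration is switched on per bucket; a boto3 sketch where the bucket and file names are placeholders:

```python
import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# Enable Transfer Acceleration on the bucket.
s3.put_bucket_accelerate_configuration(
    Bucket="my-example-bucket",  # placeholder
    AccelerateConfiguration={"Status": "Enabled"},
)

# Uploads through this client use the accelerated (edge) endpoint.
s3_fast = boto3.client(
    "s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_fast.upload_file("big-file.bin", "my-example-bucket", "big-file.bin")
```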
The data transfer can be increased in the following ways:
• Perform multiple copy operations at one time: if the workstation is powerful enough, you can initiate multiple cp commands, each from a different terminal, onto the same Snowball device.
• Copy from multiple workstations to the same Snowball.
• Transfer large files, or batch small files together; this reduces the encryption overhead.
• Eliminate unnecessary hops: set things up so that the source machine(s) and the Snowball are the only machines active on the switch being used. This can hugely improve performance.

Yes, you can do this by establishing a VPN (Virtual Private Network) connection between your company’s network and your VPC (Virtual Private Cloud); this allows you to interact with your EC2 instances as if they were within your existing network.
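A sketch of the pieces involved in wiring a corporate network to a VPC over a site-to-site VPN with boto3; the public IP, BGP ASN, and VPC ID are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Your office router, identified by its public IP and BGP ASN.
cgw = ec2.create_customer_gateway(
    Type="ipsec.1", PublicIp="198.51.100.10", BgpAsn=65000)  # placeholders

# The VPC side of the tunnel.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")
ec2.attach_vpn_gateway(
    VpcId="vpc-0123456789abcdef0",  # placeholder
    VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"])

# The IPsec connection joining the two gateways.
ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"])
```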
The primary private IP address stays attached to the instance throughout its lifetime and cannot be changed; secondary private addresses, however, can be assigned, unassigned, or moved between interfaces or instances at any point.
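A sketch of moving a secondary private IP between network interfaces with boto3; the ENI IDs and the address are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Assign a secondary private IP to one interface...
ec2.assign_private_ip_addresses(
    NetworkInterfaceId="eni-0123456789abcdef0",  # placeholder
    PrivateIpAddresses=["10.0.0.82"])            # placeholder

# ...then move it to another; AllowReassignment lets the address
# be taken over even though it is currently assigned.
ec2.assign_private_ip_addresses(
    NetworkInterfaceId="eni-0fedcba9876543210",  # placeholder
    PrivateIpAddresses=["10.0.0.82"],
    AllowReassignment=True)
```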
To efficiently utilize networks that have a large number of hosts.
If a network has a large number of hosts, managing all of them can be a tedious job. We therefore divide the network into subnets (sub-networks) so that managing these hosts becomes simpler.
CloudFront is a content delivery system that caches data at the edge location nearest to the user in order to reduce latency. If the requested data is not present at an edge location, CloudFront fetches it from the origin server the first time, stores it in that edge location’s cache, and serves subsequent requests from the cache.
Yes. When using the GetItem, BatchGetItem, Query or Scan APIs, you can define a Projection Expression to determine which attributes should be retrieved from the table. Those attributes can include scalars, sets, or elements of a JSON document.
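A sketch of a projection expression in a GetItem call with boto3; the table, key, and attribute names are placeholders:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Fetch only the 'email' scalar and one element of a list attribute,
# instead of the whole item. ExpressionAttributeNames sidesteps any
# clash with DynamoDB's reserved words.
item = dynamodb.get_item(
    TableName="Users",              # placeholder
    Key={"UserId": {"S": "u-42"}},  # placeholder key
    ProjectionExpression="#e, #o[0]",
    ExpressionAttributeNames={"#e": "email", "#o": "orders"},
)
print(item.get("Item"))
```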