AWS Dictionary


Amazon Augmented AI

  • Amazon Augmented AI (Amazon A2I) provides built-in human review workflows for common machine learning use cases, such as content moderation and text extraction from documents.

  • With Amazon A2I, a person can also create their own workflows for machine learning models built on Amazon SageMaker or any other tools.


Amazon Aurora

  • Amazon Aurora is a fully managed relational database engine that's compatible with MySQL and PostgreSQL.

  • Amazon Aurora is an enterprise-class relational database.

  • Like other relational databases, Aurora uses structured query language (SQL) to store and query data.


Amazon CloudFront

  • Amazon CloudFront is a content delivery network (CDN) service built for high performance, security, and developer convenience. It helps to securely deliver content with low latency and high transfer speeds.

  • Amazon CloudFront uses edge locations to store cached copies of customer content closer to customers for faster delivery.


Amazon CloudWatch

  • Amazon CloudWatch monitors applications and responds to system-wide performance changes.
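
As a rough sketch of responding to performance changes, the boto3 snippet below creates a CloudWatch alarm that fires when an EC2 instance's average CPU utilization exceeds 80% for two consecutive 5-minute periods (the alarm name and instance ID are hypothetical):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm enters the ALARM state after two consecutive 5-minute periods
# in which average CPU utilization exceeds 80%.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu",  # hypothetical alarm name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # hypothetical ID
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
)
```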

AWS DeepRacer

  • It is an autonomous 1/18-scale race car that tests reinforcement learning models.

  • AWS DeepRacer gives you an interesting and fun way to get started with reinforcement learning (RL). RL is an advanced machine learning (ML) technique that takes a very different approach to training models than other machine learning methods.

  • Its superpower is that it learns very complex behaviors without requiring any labeled training data, and it can make short-term decisions while optimizing for a longer-term goal.


Amazon DocumentDB

  • Amazon DocumentDB is a document database service that supports MongoDB workloads.

Amazon DynamoDB

  • Amazon DynamoDB is a key-value database service. A key-value database might include data pairs such as “Name: John Doe,” “Address: 123 Any Street,” and “City: Anytown”.

  • In a key-value database, you can add or remove attributes from items in the table at any time. Additionally, not every item in the table has to have the same attributes.
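
A minimal boto3 sketch of this flexibility, assuming a hypothetical table named People with partition key Name; note that the two items carry different attributes:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("People")  # hypothetical table, partition key "Name"

# Items in the same table don't need identical attributes.
table.put_item(Item={"Name": "John Doe", "Address": "123 Any Street", "City": "Anytown"})
table.put_item(Item={"Name": "Jane Doe", "Email": "jane@example.com"})
```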


Amazon EC2 Auto Scaling

  • Amazon EC2 Auto Scaling enables you to automatically add or remove Amazon EC2 instances in response to changing application demand.

Amazon EC2 pricing

EC2 Instance Savings Plans

  • Reduces compute costs by committing to a consistent hourly spend for a 1-year or 3-year term. This results in savings of up to 72% over On-Demand Instance costs.

  • Any EC2 usage up to the commitment is charged at the discounted Savings Plan rate (for example, $10 an hour). Any EC2 usage beyond the commitment is charged at regular On-Demand Instance rates.

Reserved Instances

  • Reserved Instances are a billing discount that is applied to the use of On-Demand Instances in an AWS account.

  • A business can purchase Standard Reserved and Convertible Reserved Instances for a 1-year or 3-year term.

  • Unlike EC2 Instance Savings Plans, Reserved Instances do not require an hourly spend commitment over the duration of the contract term.

Dedicated Hosts

  • They are physical servers with EC2 instance capacity that is fully dedicated to a single customer. A business can use existing per-socket, per-core, or per-VM software licenses to help maintain license compliance.

  • A business can purchase On-Demand Dedicated Hosts or Reserved Dedicated Hosts.

  • Of all the Amazon EC2 options, Dedicated Hosts are the most expensive.

Spot Instances

  • Spot Instances are ideal for workloads with flexible start and end times or that can withstand interruptions.

  • Spot Instances leverage unused EC2 computing capacity and offer cost savings of up to 90% off On-Demand Instance prices.


Amazon Elastic Kubernetes Service (Amazon EKS)

  • Amazon EKS is a fully managed service that runs Kubernetes on AWS. Kubernetes is open-source software that deploys and manages containerized applications at scale.

  • Containers provide a standard way to package an application's code and dependencies into a single object. Containers are frequently used for processes and workflows in which there are essential requirements for security, reliability, and scalability.


Amazon ElastiCache

  • A service that provides the capability to create, manage, and scale a distributed in-memory or cache environment in the cloud.

  • A service that adds caching layers on top of databases to help improve the read times of common requests.


Amazon Lex

  • Amazon Lex is a service that builds conversational interfaces using voice and text.

Amazon Neptune

  • It is a graph database service.

  • Amazon Neptune provides the capability to build and run applications that work with highly connected datasets, such as recommendation engines, fraud detection, and knowledge graphs.


Amazon Quantum Ledger Database (Amazon QLDB)

  • Amazon Quantum Ledger Database (Amazon QLDB) is a ledger database service. A person can use Amazon QLDB to review a complete history of all the changes that have been made to application data.

Amazon RDS

  • Amazon RDS is a managed relational database service.

  • Relational databases use structured query language (SQL) to store and query data.


Amazon Redshift

  • Amazon Redshift is a data warehousing service used for big data analytics.

  • It offers the ability to collect data from many sources and provides insight into relationships and trends across a data set.


Amazon Route 53

  • Amazon Route 53 is a DNS web service.

  • It gives developers and businesses a reliable way to route end users to internet applications that are hosted in AWS.

  • Additionally, businesses can transfer DNS records for existing domain names that are currently managed by other domain registrars, or register new domain names directly within Amazon Route 53.


Amazon S3

  • It is a service that provides object-level storage. Amazon S3 stores data as objects within buckets.

  • Amazon S3 provides managed object storage with no servers to launch, patch or configure.
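
A short boto3 sketch of object-level storage, assuming a hypothetical bucket name; each object lives in a bucket under a key:

```python
import boto3

s3 = boto3.client("s3")

# Store an object in a bucket under a key, then read it back.
s3.put_object(Bucket="my-example-bucket", Key="notes/hello.txt", Body=b"Hello, S3!")
obj = s3.get_object(Bucket="my-example-bucket", Key="notes/hello.txt")
print(obj["Body"].read())  # b'Hello, S3!'
```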

Storage Classes

The Amazon S3 Standard-IA

  • This storage class is ideal for data that is infrequently accessed but requires high availability when needed.

  • Both Amazon S3 Standard and Amazon S3 Standard-IA store data in a minimum of three Availability Zones.

  • Amazon S3 Standard-IA provides the same level of availability as Amazon S3 Standard but at a lower storage price.

Amazon S3 Intelligent-Tiering

  • In the S3 Intelligent-Tiering storage class, Amazon S3 monitors objects' access patterns. If an object has not been accessed for 30 consecutive days, Amazon S3 automatically moves it to the infrequent access tier, S3 Standard-IA.

  • If an object is accessed in the infrequent access tier, Amazon S3 automatically moves it to the frequent access tier, S3 Standard.

Amazon S3 One Zone-IA

  • It is ideal for infrequently accessed data that does not require high availability.

Amazon S3 Glacier Flexible Retrieval

  • It is a low-cost storage class that is ideal for data archiving.

  • A person can retrieve objects stored in the Amazon S3 Glacier Flexible Retrieval storage class within a few minutes to a few hours.

Amazon S3 Glacier Deep Archive

  • Amazon S3 Glacier Deep Archive is designed for long-term storage of archival data.

  • It is a very cost-effective archival storage.
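
A storage class can be chosen per object at upload time. A small boto3 sketch, again with a hypothetical bucket (other valid StorageClass values include ONEZONE_IA, INTELLIGENT_TIERING, GLACIER, and DEEP_ARCHIVE):

```python
import boto3

s3 = boto3.client("s3")

# Infrequently accessed data that still needs Standard-level availability.
s3.put_object(
    Bucket="my-example-bucket",       # hypothetical bucket
    Key="archive/2023-report.pdf",
    Body=b"...report contents...",
    StorageClass="STANDARD_IA",
)
```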


Amazon SageMaker

  • Amazon SageMaker is a service that provides the capability to quickly build, train, and deploy machine learning models.

Amazon SQS

  • It is a message queuing service.

  • Using Amazon SQS, an application developer can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available.

  • In Amazon SQS, an application sends messages into a queue. A user or service retrieves a message from the queue, processes it, and then deletes it from the queue.
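
A compact boto3 sketch of that send, receive, and delete cycle, using a hypothetical queue name:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="orders")["QueueUrl"]  # hypothetical queue

# Producer sends a message into the queue.
sqs.send_message(QueueUrl=queue_url, MessageBody="order-1234")

# Consumer retrieves a message, processes it, then deletes it.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=5)
for msg in resp.get("Messages", []):
    print("processing", msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```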


Amazon Textract

  • Amazon Textract is a machine learning service that automatically extracts text and data from scanned documents.
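
A minimal boto3 sketch of synchronous text detection, assuming a small local scan file (the filename is hypothetical):

```python
import boto3

textract = boto3.client("textract")

# Send the document bytes and print each detected line of text.
with open("scanned_invoice.png", "rb") as f:  # hypothetical file
    resp = textract.detect_document_text(Document={"Bytes": f.read()})

for block in resp["Blocks"]:
    if block["BlockType"] == "LINE":
        print(block["Text"])
```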

Auto Scaling group

  • An Auto Scaling group contains a collection of Amazon EC2 instances that are treated as a logical grouping for the purposes of automatic scaling and management.
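
A boto3 sketch of creating such a group, assuming a pre-existing launch template named web-template (all names and zones here are hypothetical):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Keep between 1 and 4 instances running, starting with 2.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",  # hypothetical name
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
    MinSize=1,
    MaxSize=4,
    DesiredCapacity=2,
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)
```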

Availability Zone (AZ)

  • A fully isolated portion of the AWS global infrastructure.

  • An Availability Zone is a single data center or a group of data centers within a Region.

  • Availability Zones are located tens of miles apart from each other: close enough for low latency between them, yet far enough apart to reduce the chance that a single event affects more than one Availability Zone.


AWS Artifact

  • Provides on-demand access to AWS security and compliance reports and select online agreements.

AWS Auto Scaling

  • A service that monitors applications and automatically adds or removes capacity from resource groups in response to changing demand.

AWS Budgets

  • You can view the breakdown of your costs by service, tag, or other dimensions. AWS Budgets allows you to be proactive about your cloud spending.

  • AWS Budgets lets businesses set custom alerts that notify individuals when service usage exceeds (or is forecasted to exceed) the amount that has been budgeted.
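
A hedged boto3 sketch of such an alert: a $100 monthly cost budget that emails a subscriber when forecasted spend passes 80% of the limit (the account ID and email address are placeholders):

```python
import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="111111111111",  # placeholder account ID
    Budget={
        "BudgetName": "monthly-cap",
        "BudgetLimit": {"Amount": "100", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[{
        "Notification": {
            "NotificationType": "FORECASTED",   # alert on forecasted overspend
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 80.0,                  # percent of the budget limit
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "owner@example.com"}],
    }],
)
```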


AWS Cloud Adoption Framework (CAF)

  • The AWS Cloud Adoption Framework (CAF) is a set of guidelines, best practices, and resources provided by Amazon Web Services (AWS) to help organizations effectively plan and implement their cloud adoption journey.

  • It serves as a comprehensive framework to assist businesses in aligning their strategies, processes, and technology with the capabilities of the AWS Cloud.

The AWS Cloud Adoption Framework (CAF) helps organizations navigate their cloud journey effectively by outlining six key perspectives to consider:

1. Business Perspective:

  • Focuses on: Aligning cloud adoption with business goals and objectives.

  • Key capabilities: Strategy & value alignment, business case validation, financial & return on investment (ROI) modeling.

  • Benefits: Drives clear direction for cloud adoption, ensures alignment with overall business strategy, and measures tangible impact.

  • Think about why you're using the cloud and make sure it helps your business grow.

2. People Perspective:

  • Focuses on: Developing the skills and mindset needed for successful cloud adoption.

  • Key capabilities: Culture & change management, talent development & training, workforce engagement & communication.

  • Benefits: Empowers employees to leverage the cloud effectively, minimizes resistance to change, and fosters a supportive environment for innovation.

  • Make sure your team knows how to use the cloud and is ready for any changes it brings.

3. Governance Perspective:

  • Focuses on: Establishing policies and procedures for secure and compliant cloud usage.

  • Key capabilities: Risk management & compliance, legal & regulatory considerations, identity & access management, auditing & logging.

  • Benefits: Mitigates risk, ensures compliance with relevant regulations, and promotes responsible cloud governance.

  • Have rules and plans in place to manage and control your cloud resources.

4. Platform Perspective:

  • Focuses on: Designing and architecting a secure and scalable cloud environment.

  • Key capabilities: Architecture & design principles, service selection & optimization, cost optimization, automation & tooling.

  • Benefits: Enables efficient and cost-effective cloud infrastructure, fosters agility and scalability, and optimizes resource utilization.

  • Choose the right cloud tools and set them up correctly to meet your needs.

5. Security Perspective:

  • Focuses on: Protecting data and workloads in the cloud.

  • Key capabilities: Threat detection & mitigation, data security & encryption, incident response & recovery, security & compliance monitoring.

  • Benefits: Ensures data confidentiality, integrity, and availability, minimizes security risks, and builds trust in the cloud environment.

  • Keep your data safe and make sure only the right people can access it.

6. Operations Perspective:

  • Focuses on: Managing and automating cloud operations for efficiency and reliability.

  • Key capabilities: Continuous monitoring & management, incident response & remediation, resource provisioning & optimization, automation & DevOps practices.

  • Benefits: Streamlines cloud operations, improves performance and reliability, and reduces manual effort.

  • Keep an eye on how your cloud is running and make improvements when needed.


AWS CloudTrail

  • AWS CloudTrail allows you to capture and consolidate user activity and API usage across AWS Regions and accounts on a single, centrally controlled platform.

  • CloudTrail records API calls, capturing details about which user performed which actions.

  • Events are typically updated in CloudTrail within 15 minutes after an API call was made.

  • A person can filter events by specifying the time and date that an API call occurred, the user who requested the action, the type of resource that was involved in the API call, and more.
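
A small boto3 sketch of that kind of filtering: look up the last day's events for a single user (the username is hypothetical):

```python
from datetime import datetime, timedelta

import boto3

cloudtrail = boto3.client("cloudtrail")

# Filter events by user and by time window.
resp = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "Username", "AttributeValue": "alice"}],  # hypothetical user
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
    MaxResults=10,
)
for event in resp["Events"]:
    print(event["EventTime"], event["EventName"])
```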


AWS CloudFormation

  • Provision resources by using programming languages or a text file.
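
A minimal boto3 sketch of provisioning from a text file: the template below (a plain JSON document) declares a single S3 bucket, and create_stack asks CloudFormation to provision it (the stack and resource names are hypothetical):

```python
import json

import boto3

# A minimal CloudFormation template as a text document.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "NotesBucket": {"Type": "AWS::S3::Bucket"},  # hypothetical logical name
    },
}

cfn = boto3.client("cloudformation")
cfn.create_stack(StackName="demo-stack", TemplateBody=json.dumps(template))
```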

AWS Command Line Interface

  • The AWS Command Line Interface (AWS CLI) provides the capability to control multiple AWS services directly from the command line within one tool.

  • For example, a person can use commands to start an Amazon EC2 instance, connect an Amazon EC2 instance to a specific Auto Scaling group, and more.

  • The AWS CLI is available for users on Windows, macOS, and Linux.


AWS Cost Anomaly Detection

  • AWS Cost Anomaly Detection uses machine learning to continuously analyze your AWS usage and spending patterns.

  • It alerts you when there are unexpected cost variances and provides insights into potential root causes.

  • AWS Cost Anomaly Detection specifically focuses on detecting anomalies and irregularities in your spending behavior.


AWS Cost Explorer

  • View and analyze your costs and usage in near real time, free of charge, with minimal management overhead.

  • You can view data for up to the last 13 months and forecast how much you're likely to spend over the next 12 months.

  • Using Cost Explorer's reporting tools, you can evaluate the cost and usage of your AWS services (for example, EC2, S3, and RDS) across a variety of views based on filters.
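
A boto3 sketch of the equivalent API call: monthly unblended cost grouped by service (the time period is an arbitrary example):

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},  # arbitrary example month
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)
for group in resp["ResultsByTime"][0]["Groups"]:
    print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])
```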


AWS Direct Connect

  • AWS Direct Connect is a service that establishes a dedicated private connection between an on-premises data center and a VPC.

  • The private connection that AWS Direct Connect provides helps reduce network costs and increase the amount of bandwidth that can travel through a network.


AWS Elastic Beanstalk

  • AWS Elastic Beanstalk is a service for deploying and scaling web applications and services.

  • Upload your code and Elastic Beanstalk automatically handles the deployment from capacity provisioning, load balancing, and auto scaling to application health monitoring.


AWS Global Infrastructure

Region

  • A separate geographic area that contains multiple Availability Zones, which are isolated from one another.

Availability Zone (AZ)

  • A fully isolated portion of the AWS global infrastructure.

  • An Availability Zone is a single data center or a group of data centers within a Region.

  • Availability Zones are located tens of miles apart from each other: close enough for low latency between them, yet far enough apart to reduce the chance that a single event affects more than one Availability Zone.

Origin

  • The server from which Amazon CloudFront gets files.

Edge location

  • A site that Amazon CloudFront uses to cache copies of content for faster delivery to users at any location.


AWS Identity and Access Management (IAM)

  • AWS Identity and Access Management (IAM) is a service that manages access to AWS services and resources.

IAM Role

  • An IAM role is an identity that you can assume to gain temporary access to permissions. Common role types are listed below, followed by a minimal example of assuming a role.

  1. AWS Instance Role:

    • This role is associated with an EC2 instance (virtual server) in AWS.

    • It allows the instance to interact with other AWS services securely without needing to expose long-term credentials.

    • Commonly used for granting permissions to EC2 instances to access resources like S3 buckets or databases.

  2. AWS Web Identity Role:

    • Used when you want to give users temporary access to AWS resources after they authenticate using a web identity provider like Amazon Cognito, Facebook, or Google.

    • Enables applications to authenticate users with these identity providers and obtain temporary credentials to access AWS resources.

  3. AWS Task Role:

    • Associated with an AWS Fargate task within the ECS (Elastic Container Service) platform.

    • Allows the containers in the task to have specific permissions for interacting with other AWS services.

    • Helps manage permissions for containers running in a serverless compute engine like AWS Fargate.

  4. AWS Service Role:

    • Used to grant permissions to AWS services, allowing them to interact with other AWS resources on your behalf.

    • Commonly used for services like AWS Lambda, which may need to access S3, DynamoDB, etc., during execution.

    • Service roles are not associated with a specific AWS identity (like a user or group) and are assumed by AWS services.
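
As referenced above, a minimal boto3 sketch of assuming a role for temporary access (the role ARN is hypothetical; the returned credentials expire automatically):

```python
import boto3

sts = boto3.client("sts")

# Exchange the caller's identity for temporary credentials of the role.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111111111111:role/ReadOnlyAuditor",  # hypothetical role
    RoleSessionName="audit-session",
)["Credentials"]

# Use the temporary credentials with any service client.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(s3.list_buckets()["Buckets"])
```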


AWS Key Management Service (AWS KMS)

  • It is a service that creates, manages, and uses cryptographic keys.
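
A brief boto3 sketch of using a key, assuming a hypothetical key alias; note that KMS never exposes the key material itself:

```python
import boto3

kms = boto3.client("kms")

# Encrypt with a KMS key referenced by alias; decrypt infers the key
# from metadata embedded in the ciphertext.
ciphertext = kms.encrypt(KeyId="alias/my-app-key", Plaintext=b"secret")["CiphertextBlob"]  # hypothetical alias
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
assert plaintext == b"secret"
```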

AWS Lambda

  • AWS Lambda is a service that runs code without needing to provision or manage servers.

  • While using AWS Lambda, businesses pay only for the compute time that they consume. They are charged only when their application code is running.

  • With AWS Lambda, they can run code for virtually any type of application or backend service, all with zero administration.
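
A minimal sketch of what such code can look like: a Python handler that Lambda invokes with an event, billed only for the time the handler runs (the file and function names follow the common handler.lambda_handler convention):

```python
# handler.py — Lambda calls lambda_handler(event, context) on each invocation.
import json

def lambda_handler(event, context):
    # The event payload comes from whatever triggered the function
    # (API Gateway, an S3 notification, a direct invoke, ...).
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```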


AWS Marketplace

  • A digital catalog that includes thousands of listings from independent software vendors.

  • Businesses can use AWS Marketplace to find, test, and buy software that runs on AWS.


AWS Organizations

  • In AWS Organizations, businesses centrally control permissions for their accounts by using service control policies (SCPs).

  • Additionally, businesses can use the consolidated billing feature in AWS Organizations to combine usage and receive a single bill for multiple AWS accounts.


AWS Outposts

  • It is a service that runs infrastructure in a hybrid cloud approach.

  • AWS Outposts is a family of fully managed solutions delivering AWS infrastructure and services to virtually any on-premises or edge location for a truly consistent hybrid experience.


AWS Partner Network (APN)

  • AWS Partner Network (APN) Partners help you grow and scale your organization on AWS.

  • AWS Partners are uniquely positioned to help you accelerate your journey to the cloud and take full advantage of all that AWS has to offer. Partners can help you identify solutions, build, deploy, or migrate to AWS.

  • In simple terms, the AWS Partner Network (APN) is a program created by Amazon Web Services (AWS) that brings together different types of companies to help customers succeed in the cloud. These companies, known as partners, can be experts in providing advice, building software, managing cloud services, and more. APN acts as a bridge, connecting AWS customers with a variety of skilled and certified partners, making it easier for businesses to achieve their goals in the cloud.


AWS Pricing Calculator

  • AWS Pricing Calculator creates an estimate for the cost of a business' use cases on AWS.

  • In the AWS Pricing Calculator, a person can enter details for their cloud computing requirements and then receive a detailed estimate that can be exported and shared.


AWS Quick Starts

  • Automate the deployment of workloads into an AWS environment.

  • AWS Quick Starts are deployment solutions designed to help users quickly and easily deploy a variety of AWS solutions in a production-ready environment. These Quick Starts are built by AWS architects and partners to follow AWS best practices for security, high availability, and scalability.

  • The primary goal is to accelerate the deployment of complex architectures on AWS by providing pre-configured templates and step-by-step guides.


AWS Snow Family

AWS Snowmobile

  • AWS Snowmobile is a service that is used for transferring up to 100 PB of data to AWS.

  • Each Snowmobile is a 45-foot-long shipping container that is pulled by a semi-trailer truck.


AWS Support

  • A resource that can answer questions about best practices and assist with troubleshooting issues.

AWS Trusted Advisor

An online tool that inspects an AWS environment and provides real-time guidance in accordance with AWS best practices.

  • The performance category helps improve the performance of services by providing recommendations for how to take advantage of provisioned throughput.

  • The Security category includes checks that review permissions and identify which AWS security features to enable.

  • The Cost Optimization category includes checks for unused or idle resources that could be eliminated and provide cost savings.

  • The Fault Tolerance category includes checks to help improve an application's availability and redundancy.


AWS Wavelength

  • AWS Wavelength embeds AWS compute and storage services within 5G networks, providing mobile edge computing infrastructure for developing, deploying, and scaling ultra-low-latency applications.

AWS Well-Architected Framework

  • Using the Framework helps you learn architectural best practices for designing and operating secure, reliable, efficient, cost-effective, and sustainable workloads in the AWS Cloud. It provides a way for you to consistently measure your architectures against best practices and identify areas for improvement. The process for reviewing an architecture is a constructive conversation about architectural decisions, and is not an audit mechanism.

The framework is based on six pillars:

  • Operational Excellence

  • Security

  • Reliability

  • Performance Efficiency

  • Cost Optimization

  • Sustainability

Operational excellence

Operational Excellence is more about the processes, procedures, and day-to-day operations of your workloads, emphasizing automation, monitoring, and incident response.

Focuses on running and monitoring systems to deliver business value, continually improving processes and procedures.

Design principles

The following are the design principles for operational excellence in the cloud:

  • Perform operations as code: In the cloud, you can apply the same engineering discipline that you use for application code to your entire environment. You can define your entire workload (applications, infrastructure, etc.) as code and update it with code. You can script your operations procedures and automate their process by launching them in response to events. By performing operations as code, you limit human error and create consistent responses to events.

  • Make frequent, small, reversible changes: Design workloads that are scalable and loosely coupled to permit components to be updated regularly. Automated deployment techniques together with smaller, incremental changes reduce the blast radius and allow for faster reversal when failures occur. This increases confidence to deliver beneficial changes to your workload while maintaining quality and adapting quickly to changes in market conditions.

  • Refine operations procedures frequently: As you evolve your workloads, evolve your operations appropriately. As you use operations procedures, look for opportunities to improve them. Hold regular reviews and validate that all procedures are effective and that teams are familiar with them. Where gaps are identified, update procedures accordingly. Communicate procedural updates to all stakeholders and teams. Gamify your operations to share best practices and educate teams.

  • Anticipate failure: Perform “pre-mortem” exercises to identify potential sources of failure so that they can be removed or mitigated. Test your failure scenarios and validate your understanding of their impact. Test your response procedures to ensure they are effective and that teams are familiar with their process. Set up regular game days to test workload and team responses to simulated events.

  • Learn from all operational failures: Drive improvement through lessons learned from all operational events and failures. Share what is learned across teams and through the entire organization.

  • Use managed services: Reduce operational burden by using AWS managed services where possible. Build operational procedures around interactions with those services.

  • Implement observability for actionable insights: Gain a comprehensive understanding of workload behavior, performance, reliability, cost, and health. Establish key performance indicators (KPIs) and leverage observability telemetry to make informed decisions and take prompt action when business outcomes are at risk. Proactively improve performance, reliability, and cost based on actionable observability data.

Security

Design principles

In the cloud, there are a number of principles that can help you strengthen your workload security:

  • Implement a strong identity foundation: Implement the principle of least privilege and enforce separation of duties with appropriate authorization for each interaction with your AWS resources. Centralize identity management, and aim to eliminate reliance on long-term static credentials.

  • Maintain traceability: Monitor, alert, and audit actions and changes to your environment in real time. Integrate log and metric collection with systems to automatically investigate and take action.

  • Apply security at all layers: Apply a defense in depth approach with multiple security controls. Apply to all layers (for example, edge of network, VPC, load balancing, every instance and compute service, operating system, application, and code).

  • Automate security best practices: Automated software-based security mechanisms improve your ability to securely scale more rapidly and cost-effectively. Create secure architectures, including the implementation of controls that are defined and managed as code in version-controlled templates.

  • Protect data in transit and at rest: Classify your data into sensitivity levels and use mechanisms, such as encryption, tokenization, and access control where appropriate.

  • Keep people away from data: Use mechanisms and tools to reduce or eliminate the need for direct access or manual processing of data. This reduces the risk of mishandling or modification and human error when handling sensitive data.

  • Prepare for security events: Prepare for an incident by having incident management and investigation policy and processes that align to your organizational requirements. Run incident response simulations and use tools with automation to increase your speed for detection, investigation, and recovery.

Reliability

The Reliability pillar encompasses the ability of a workload to perform its intended function correctly and consistently when it’s expected to.

Design principles

There are five design principles for reliability in the cloud:

  • Automatically recover from failure: By monitoring a workload for key performance indicators (KPIs), you can start automation when a threshold is breached. These KPIs should be a measure of business value, not of the technical aspects of the operation of the service. This provides for automatic notification and tracking of failures, and for automated recovery processes that work around or repair the failure. With more sophisticated automation, it’s possible to anticipate and remediate failures before they occur.

  • Test recovery procedures: In an on-premises environment, testing is often conducted to prove that the workload works in a particular scenario. Testing is not typically used to validate recovery strategies. In the cloud, you can test how your workload fails, and you can validate your recovery procedures. You can use automation to simulate different failures or to recreate scenarios that led to failures before. This approach exposes failure pathways that you can test and fix before a real failure scenario occurs, thus reducing risk.

  • Scale horizontally to increase aggregate workload availability: Replace one large resource with multiple small resources to reduce the impact of a single failure on the overall workload. Distribute requests across multiple, smaller resources to verify that they don’t share a common point of failure.

  • Stop guessing capacity: A common cause of failure in on-premises workloads is resource saturation, when the demands placed on a workload exceed the capacity of that workload (this is often the objective of denial of service attacks). In the cloud, you can monitor demand and workload utilization, and automate the addition or removal of resources to maintain the optimal level to satisfy demand without over- or under-provisioning. There are still limits, but some quotas can be controlled and others can be managed (see Manage Service Quotas and Constraints).

  • Manage change in automation: Changes to your infrastructure should be made using automation. The changes that must be managed include changes to the automation, which then can be tracked and reviewed.

Performance efficiency

Performance Efficiency is focused on optimizing the performance of your workloads by selecting the right resources, scaling appropriately, and continually reviewing and optimizing for efficiency.

Design principles

The following design principles can help you achieve and maintain efficient workloads in the cloud.

  • Democratize advanced technologies: Make advanced technology implementation easier for your team by delegating complex tasks to your cloud vendor. Rather than asking your IT team to learn about hosting and running a new technology, consider consuming the technology as a service. For example, NoSQL databases, media transcoding, and machine learning are all technologies that require specialized expertise. In the cloud, these technologies become services that your team can consume, allowing your team to focus on product development rather than resource provisioning and management.

  • Go global in minutes: Deploying your workload in multiple AWS Regions around the world allows you to provide lower latency and a better experience for your customers at minimal cost.

  • Use serverless architectures: Serverless architectures remove the need for you to run and maintain physical servers for traditional compute activities. For example, serverless storage services can act as static websites (removing the need for web servers) and event services can host code. This removes the operational burden of managing physical servers, and can lower transactional costs because managed services operate at cloud scale.

  • Experiment more often: With virtual and automatable resources, you can quickly carry out comparative testing using different types of instances, storage, or configurations.

  • Consider mechanical sympathy: Use the technology approach that aligns best with your goals. For example, consider data access patterns when you select database or storage for your workload.

Cost Optimization Pillar

Design principles

Consider the following design principles for cost optimization:

  • Implement cloud financial management: To achieve financial success and accelerate business value realization in the cloud, you must invest in Cloud Financial Management. Your organization must dedicate the necessary time and resources for building capability in this new domain of technology and usage management. Similar to your Security or Operations capability, you need to build capability through knowledge building, programs, resources, and processes to help you become a cost efficient organization.

  • Adopt a consumption model: Pay only for the computing resources you consume, and increase or decrease usage depending on business requirements. For example, development and test environments are typically only used for eight hours a day during the work week. You can stop these resources when they’re not in use for a potential cost savings of 75% (40 hours versus 168 hours).

  • Measure overall efficiency: Measure the business output of the workload and the costs associated with delivery. Use this data to understand the gains you make from increasing output, increasing functionality, and reducing cost.

  • Stop spending money on undifferentiated heavy lifting: AWS does the heavy lifting of data center operations like racking, stacking, and powering servers. It also removes the operational burden of managing operating systems and applications with managed services. This allows you to focus on your customers and business projects rather than on IT infrastructure.

  • Analyze and attribute expenditure: The cloud makes it easier to accurately identify the cost and usage of workloads, which then allows transparent attribution of IT costs to revenue streams and individual workload owners. This helps measure return on investment (ROI) and gives workload owners an opportunity to optimize their resources and reduce costs.

Sustainability Pillar

The Sustainability Pillar in AWS (Amazon Web Services) refers to a set of best practices and guidelines designed to help organizations integrate sustainability into their cloud architecture. This pillar focuses on minimizing environmental impact, optimizing resource efficiency, and promoting sustainable practices in cloud computing.

Key aspects of the Sustainability Pillar include:

  • Environmentally Friendly Infrastructure:

    • Utilizing data centers and infrastructure that are designed for energy efficiency.

    • Incorporating renewable energy sources to power data centers.

  • Carbon Footprint Reduction:

    • Implementing strategies to reduce overall carbon emissions associated with cloud services.

    • Encouraging the use of carbon offset programs to neutralize environmental impact.

  • Resource Efficiency:

    • Optimizing resource consumption to minimize waste and improve efficiency.

    • Implementing auto-scaling and dynamic resource allocation to match workloads with resources effectively.

  • Data Center Innovation:

    • Investing in innovative technologies that enhance the efficiency of data center operations.

    • Exploring new cooling methods, energy management systems, and hardware design for sustainability.

  • Sustainable Data Management:

    • Encouraging best practices in data management to reduce unnecessary storage and improve data lifecycle management.

    • Leveraging data compression, deduplication, and archiving strategies to optimize storage resources.

  • Transparent Reporting:

    • Providing transparency in reporting on the environmental impact of cloud services.

    • Sharing metrics and information on energy usage, carbon emissions, and sustainability initiatives.

  • Education and Training:

    • Offering educational resources and training to help organizations understand and implement sustainable practices in the cloud.

    • Promoting awareness about the environmental impact of cloud computing.


Edge Locations

  • Edge locations provide Points of Presence (PoPs) around the globe to help provide access to your content with the lowest latency possible.

Elastic Load Balancing

  • Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as the Amazon EC2 instances that you are running.

Gateways

Internet gateway

  • It is a connection between a VPC and the internet.

  • It allows public traffic from the internet to access a VPC.

Virtual private gateway

  • It establishes a virtual private network (VPN) connection between a VPC and a private network, such as an on-premises data center or internal corporate network.

  • A virtual private gateway allows traffic into the VPC only if it is coming from an approved network.
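
A boto3 sketch tying these pieces together: create a VPC, then create and attach an internet gateway for public traffic (the CIDR block is an arbitrary example; a virtual private gateway is created similarly with create_vpn_gateway):

```python
import boto3

ec2 = boto3.client("ec2")

# A VPC with an example private address range.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# An internet gateway allows public internet traffic into the VPC
# once attached (route table entries are still needed).
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)
```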


Hybrid Cloud

  • Private Cloud + Public Cloud

  • Example: AWS Outposts (private) + AWS (public)


Migration Evaluation Features

  1. Analysis and Insights: Overview of collection strategy and timeline (existing data or agentless collector), assumptions, scope, server counts, and more.

    • This part explains how data is collected (like from existing sources or without installing anything new), what assumptions are being made, what's included in the analysis, and how many servers are involved.

  2. Financial Summary: Multiple workload-specific 'what-if' purchasing scenarios.

    • Here, you can see different ways of buying resources, so you can understand how much it would cost under different conditions.

  3. Business Value: AWS Cloud Value Framework (inclusive of Staff Productivity, Operational Resilience, and Business Agility), Sustainability Assessment (estimated carbon emission reduction versus on-premises workloads).

    • This part looks at how using AWS can help the business, like making staff more productive, operations more reliable, and the business more flexible. It also considers how AWS can help reduce carbon emissions compared to using your own servers.

  4. Deployment Summary: Detailed overview of Windows and Microsoft SQL Server licensing options and optimization.

    • Here, you'll get a detailed look at how to license Windows and Microsoft SQL Server on AWS and how to do it in a way that saves money.

  5. Storage Assessment (if applicable): A holistic view of the on-premises storage landscape and data-driven mappings to target AWS storage services, including projected costs.

    • If relevant, this section looks at how your data storage is set up now and suggests how to move it to AWS storage services. It also estimates how much this would cost.

  6. Next Steps: Recommendations for the customer on next steps for a successful migration.

    • This section gives advice on what to do next to move your resources to AWS successfully.

Migration Strategies

Amazon Web Services (AWS) provides various migration strategies to help businesses and organizations move their applications and workloads to the cloud efficiently. The choice of a migration strategy depends on factors such as the complexity of the application, downtime tolerance, and the level of control required during the migration process. Here are some common migration strategies in AWS:

  1. Rehost (Lift and Shift):

    • This strategy involves moving applications to AWS without making any changes to the code. It's a quick and straightforward approach that minimizes the need for code modifications.

    • Rehosting is suitable for applications with minimal dependencies on the underlying infrastructure, making them easy to migrate.

  2. Replatform (Lift, Tinker, and Shift):

    • In this strategy, applications are migrated to AWS with some optimizations to take advantage of cloud capabilities. It may involve slight modifications to the application architecture or configurations.

    • Replatforming can result in cost savings and improved performance compared to rehosting while maintaining a relatively fast migration process.

  3. Rearchitect (Refactor or Reimagine):

    • Rearchitecting involves making significant changes to the application architecture to fully leverage cloud-native services. This approach can lead to improved scalability, reliability, and performance.

    • It often involves breaking down monolithic applications into microservices, adopting serverless computing, or using managed services like AWS Lambda, Amazon RDS, etc.

  4. Rebuild (Rewrite or Redesign):

    • This strategy involves rebuilding the application from scratch, often using cloud-native technologies and services. While it provides the most benefits in terms of optimization, it is also the most resource-intensive and time-consuming approach.

    • Applications are redesigned to take full advantage of cloud services, providing maximum flexibility, scalability, and cost efficiency.

  5. Repurchase (Replace):

    • Repurchasing involves replacing existing on-premises software with a cloud-based alternative. This could include migrating from an on-premises enterprise resource planning (ERP) system to a Software as a Service (SaaS) solution.

    • It's a strategy often chosen when an organization wants to take advantage of cloud-based software without the need for custom development.

  6. Retire:

    • This strategy involves decommissioning or turning off applications or systems that are no longer needed. It's part of a broader cloud strategy to streamline and optimize the IT environment.

  7. Retain:

    • Some applications may remain on-premises due to regulatory compliance, data residency requirements, or other constraints. This strategy involves keeping certain workloads in the current environment while migrating others to the cloud.

Networking

Amazon VPC

  • Provision an isolated section of the AWS Cloud to launch resources in a virtual network that a person defines.

Subnet

  • A subnet is a section of a virtual private cloud (VPC) in which you can group resources based on security or operational needs.

Network access control list (NACL)

  • It is a virtual firewall that controls inbound and outbound traffic at the subnet level.

Security group

  • It is a virtual firewall that controls inbound and outbound traffic for an Amazon EC2 instance.

  • By default, a security group denies all inbound traffic and allows all outbound traffic.

  • Businesses can add custom rules to configure which traffic should be allowed or denied.
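
A boto3 sketch of adding such a rule: a security group that starts with the default deny-all-inbound behavior and then allows HTTPS from anywhere (the VPC ID and group name are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

# New security groups deny all inbound traffic by default.
sg_id = ec2.create_security_group(
    GroupName="web-sg",                 # hypothetical name
    Description="Allow inbound HTTPS",
    VpcId="vpc-0123456789abcdef0",      # hypothetical VPC
)["GroupId"]

# Add a custom rule allowing inbound HTTPS (TCP 443) from anywhere.
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
```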


Security

Amazon GuardDuty

  • It provides intelligent threat detection for your AWS infrastructure and resources.

  • Identifies threats by continually monitoring the network activity and account behavior within an AWS environment.

Amazon Inspector

  • It is a service that checks applications for security vulnerabilities and deviations from security best practices.

Storage

Instance stores

  • They are ideal for temporary data that does not need to be kept long term.

  • When an Amazon EC2 instance is stopped or terminated, all the data that has been written to the attached instance store is deleted.

Amazon EBS volumes

  • They are ideal for data that needs to be retained.

  • When an Amazon EC2 instance is stopped or terminated, all of the data on the attached EBS volume is still available.


Technical Account Manager

  • A resource that provides guidance, architectural reviews, and ongoing communication with companies as they plan, deploy, and optimize their applications.