Companies turn to the AWS Cloud when they want to save time and money and relieve their technical staff of tasks that do not generate value. However, the real promise of Cloud solutions should not obscure the inherent risks of poorly managed resources: rising costs, security breaches, or degraded operational performance.
Here are ten of the most frequent mistakes we have encountered, together with some tips on how best to avoid them. This should help you benefit from the full power of AWS Cloud solutions.
1. Granting lax permissions
Not everyone needs administrator privileges! Granting broad permissions to people who do not specifically need them makes your AWS environment more vulnerable to human error and misconfiguration.
Be careful when setting up your permissions, especially as your teams grow, and apply the principle of “least privilege”: developers must have exactly the authorization level they need to accomplish their tasks. No more, no less!
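As a minimal sketch of least privilege in practice, here is how a narrowly scoped IAM policy might be created and attached with boto3. The policy name, bucket ARN, and user name are placeholders, not references to any real account:

```python
import json

import boto3

iam = boto3.client("iam")

# Hypothetical example: a developer only needs to read objects from one
# S3 bucket, so the policy grants exactly that and nothing else.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-app-bucket",
                "arn:aws:s3:::example-app-bucket/*",
            ],
        }
    ],
}

response = iam.create_policy(
    PolicyName="DeveloperReadOnlyExampleBucket",  # placeholder name
    PolicyDocument=json.dumps(policy_document),
)

# Attach the scoped policy to a user instead of a broad admin policy.
iam.attach_user_policy(
    UserName="example-developer",  # placeholder user
    PolicyArn=response["Policy"]["Arn"],
)
```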
2. Performing all tasks with the root account
Daily tasks should never be performed from your root account. Instead, create an administrator group for users who need to access the console and respect the principle of least privilege for others. Use roles to assign access to services based on the real needs of each developer.
Access keys must also be protected, shared with caution, and rotated regularly. Always enable multi-factor authentication on your root account.
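Here is a hedged boto3 sketch of that setup: create a dedicated administrators group, attach the AWS-managed AdministratorAccess policy, and add only the users who genuinely need it. The group and user names are placeholders, and MFA on the root account itself is enabled through the console:

```python
import boto3

iam = boto3.client("iam")

# A dedicated group for the few users who truly need full console
# administration, so nobody works from the root account day to day.
iam.create_group(GroupName="Administrators")  # placeholder name

# AWS-managed AdministratorAccess policy; everyone else should receive
# narrower, task-specific policies instead.
iam.attach_group_policy(
    GroupName="Administrators",
    PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
)

iam.add_user_to_group(GroupName="Administrators", UserName="alice")  # placeholder user
```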
3. Skipping AWS CloudTrail
CloudTrail records all actions performed through the console, the SDKs, and other AWS services. Its ability to provide a complete history of all AWS API requests makes it an essential tool for your audit and compliance strategies. It also lets you track changes to your resources and identify which users made them.
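As an illustrative sketch, here is how you might query CloudTrail with boto3 to see who terminated EC2 instances over the last day; the event name is just one example of a trackable action:

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")

# Ask CloudTrail which TerminateInstances calls happened in the
# last 24 hours, and who made them.
events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "TerminateInstances"}
    ],
    StartTime=datetime.now(timezone.utc) - timedelta(days=1),
    EndTime=datetime.now(timezone.utc),
)

for event in events["Events"]:
    print(event.get("Username", "unknown"), event["EventTime"], event["EventName"])
```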
4. Leaving connections open
Many administrators leave network ports open, allowing connections from the outside and compromising the environment’s security.
Once again, we recommend that you keep your access rules as selective as possible, using AWS security groups to ensure that only instances and load balancers in one group can communicate with the resources of another group.
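For example, here is a hedged boto3 sketch that lets an application tier reach a database security group on port 5432 instead of opening the port to the whole Internet; both group IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Instead of opening port 5432 to the world (0.0.0.0/0), allow traffic
# to the database group only from the application tier's security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder: database security group
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 5432,
            "ToPort": 5432,
            "UserIdGroupPairs": [
                {"GroupId": "sg-0fedcba9876543210"}  # placeholder: app tier group
            ],
        }
    ],
)
```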
5. Underestimating CloudWatch alarms
CloudWatch is the essential tool for DevOps teams to monitor resource usage, application performance, and emerging operational issues or constraints.
Natively integrated with more than seventy AWS services and capable of one-second granularity, this tool lets you define an alarm that fires as soon as a specific metric exceeds a specified threshold. That makes CloudWatch essential for monitoring and adjusting resource usage.
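As a minimal sketch, this boto3 call defines an alarm that fires when an instance's average CPU stays above 80% for two consecutive five-minute periods; the instance ID and SNS topic ARN are placeholders:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when average CPU on one instance exceeds 80% for two
# consecutive 5-minute evaluation periods.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-example",  # placeholder alarm name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:eu-west-1:123456789012:ops-alerts"],  # placeholder topic
)
```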
6. Overestimating AWS’s responsibility and assistance
Remember: AWS is responsible for the security of the Cloud, and you are responsible for your security in the Cloud. Security, maintenance, and troubleshooting of your applications depend entirely on your engineering team.
While some organizations (especially small, specialized development teams) prefer to outsource their maintenance, every manager overseeing a Cloud infrastructure should be familiar with the AWS shared responsibility model.
7. Not using Auto Scaling
Auto Scaling allows you to build your scaling plan from a single interface for various resources distributed across several services. You can maintain availability and schedule the scaling of Amazon EC2 instances and Spot Fleets, Amazon ECS tasks, Amazon DynamoDB tables and indexes, and Amazon Aurora replicas. Auto Scaling also provides useful recommendations for finding the right balance between cost and performance.
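As an illustration, here is a boto3 sketch using the Application Auto Scaling API to let a DynamoDB table's read capacity track 70% utilization; the table name and capacity limits are placeholders:

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Allow the table's read capacity to scale between 5 and 100 units.
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/example-table",  # placeholder table
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=100,
)

# Target-tracking policy: keep read capacity utilization around 70%.
autoscaling.put_scaling_policy(
    PolicyName="example-read-scaling",
    ServiceNamespace="dynamodb",
    ResourceId="table/example-table",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)
```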
8. Ignoring Trusted Advisor
Trusted Advisor monitors your configuration and suggests best practices for provisioning your resources across four criteria: cost optimization, security, fault tolerance, and performance. The tool acts as a dashboard that shows you areas for improvement and provides weekly reports.
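As a sketch, the checks can also be read programmatically through the Support API, which assumes a Business or Enterprise support plan and is served from us-east-1:

```python
import boto3

# The Support API (and with it Trusted Advisor checks) requires a
# Business or Enterprise support plan and lives in us-east-1.
support = boto3.client("support", region_name="us-east-1")

checks = support.describe_trusted_advisor_checks(language="en")

# Print every check that is not in the "ok" state, grouped by category.
for check in checks["checks"]:
    result = support.describe_trusted_advisor_check_result(checkId=check["id"])
    status = result["result"]["status"]
    if status != "ok":
        print(f"{check['category']}: {check['name']} -> {status}")
```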
9. Ignoring Spot Instances
Spot Instances offer spare compute capacity in the AWS Cloud at very attractive prices. The principle is simple: you define the maximum price you are willing to pay for this short-lived compute capacity, and the instance is interrupted as soon as the Spot price exceeds that amount or AWS needs the capacity back.
Spot Instances should obviously not be used for critical processes, but they are ideal for workloads that are not time-sensitive, such as some large-scale data analytics projects.
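As a minimal sketch, here is how a Spot Instance with a price ceiling might be launched with boto3; the AMI ID, instance type, and price are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Launch a one-time Spot Instance with a ceiling of $0.05/hour. If the
# Spot price rises above the maximum, or EC2 reclaims the capacity,
# the instance is interrupted.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="m5.large",  # placeholder instance type
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "MaxPrice": "0.05",  # placeholder price ceiling, in USD/hour
            "SpotInstanceType": "one-time",
        },
    },
)
```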
10. Ignoring training and certification
Your teams share a common environment, so it makes sense for them to share the same level of understanding when it comes to resource management. Cloud computing is also a culture that requires your teams to be aligned as they develop your stack. In this context, training and certification are a valuable asset to ensure that everyone speaks the same language.
Conclusion
Mastering the resources and solutions the Cloud offers results from a combination of technical skills and experience. That experience is built on concrete use cases in real business contexts. Being able to identify the main sources of error and potential risk areas also helps speed up the process and achieve a better balance between performance, cost, and security. Our experts are here to help and advise you in the implementation of your Cloud Analytics strategy!