Cloud Cost Optimization: What to Do After Your Migration
The first cloud bill after migration is almost never the right cloud bill. Here is a systematic approach to identifying and capturing the cost optimization opportunities that every newly migrated environment contains.
Congratulations — your workloads are in the cloud. Now comes the surprise that catches many organizations off guard: the first monthly cloud bill is larger than expected. Sometimes significantly larger. The instance sizes you selected during migration were conservative estimates. The storage you provisioned was sized for peak, not average. The data transfer costs you did not account for are real and recurring. And the managed services you chose for convenience carry list prices that nobody scrutinized during the migration project.
This is normal. The first cloud environment after migration is an approximation — a working environment built under time and resource pressure, with operational correctness prioritized over cost efficiency. The optimization work comes after, once you have real utilization data to work from. The good news is that real utilization data is far more valuable than estimates, and the savings available in most newly migrated environments are substantial. Our clients typically achieve 30 to 50 percent cost reduction through systematic optimization in the first 90 days after migration.
The FinOps Foundation: Tagging and Visibility
Before you can optimize cloud costs, you need to understand where money is going. In cloud environments, this requires a tagging strategy that associates every resource with the application, team, environment, and cost center it belongs to. Without tags, your cloud bill is an undifferentiated heap of charges that nobody can attribute to specific business activities or hold specific teams accountable for.
Implement mandatory tagging at the account or subscription level using AWS Config Rules or Azure Policy to detect and alert on untagged resources. Define a standard tag taxonomy that covers at minimum: application name, environment (production/staging/development), team owner, and cost center code. Apply this taxonomy retrospectively to all existing resources and enforce it prospectively on all new resource creation through infrastructure-as-code templates and guardrails.
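As a minimal sketch of what that enforcement checks, the compliance logic reduces to a set comparison per resource. The tag names and resource records below are hypothetical, and a real deployment would run this through AWS Config Rules or Azure Policy rather than a script:

```python
# Illustrative tag-compliance check (hypothetical tag taxonomy and resources;
# a real implementation would be an AWS Config rule or Azure Policy assignment).
REQUIRED_TAGS = {"application", "environment", "team", "cost-center"}

def find_noncompliant(resources):
    """Return IDs of resources missing any required tag key."""
    return [
        r["id"]
        for r in resources
        if not REQUIRED_TAGS.issubset(r.get("tags", {}).keys())
    ]

resources = [
    {"id": "i-0001", "tags": {"application": "billing", "environment": "prod",
                              "team": "payments", "cost-center": "CC-1042"}},
    {"id": "i-0002", "tags": {"application": "billing"}},  # missing three tags
    {"id": "vol-0003", "tags": {}},                        # fully untagged
]
print(find_noncompliant(resources))  # ['i-0002', 'vol-0003']
```

The same check works retrospectively (scan all existing resources) and prospectively (fail an infrastructure-as-code pipeline when a template omits a required tag).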
With tagging in place, configure AWS Cost Explorer or Azure Cost Management to produce cost breakdowns by tag dimension. Build a dashboard that shows cost by application, by team, and by environment, updated daily. When teams can see the cost of their applications in real-time, behavior changes in ways that no policy mandate can achieve alone. Engineers start thinking about cost as a feature of their system design rather than an afterthought.
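The aggregation behind such a dashboard is straightforward once tags exist. This sketch uses hypothetical in-memory cost records; in practice they would come from an AWS Cost Explorer or Azure Cost Management export, grouped by a tag dimension:

```python
from collections import defaultdict

def cost_by_tag(records, tag_key):
    """Sum cost records by the value of one tag dimension.

    Records without the tag land in an '(untagged)' bucket, which itself
    becomes a useful metric: it should trend toward zero.
    """
    totals = defaultdict(float)
    for rec in records:
        totals[rec["tags"].get(tag_key, "(untagged)")] += rec["cost"]
    return dict(totals)

# Hypothetical daily cost records, as a tagged export might provide them.
records = [
    {"cost": 120.0, "tags": {"team": "payments", "environment": "prod"}},
    {"cost": 45.5,  "tags": {"team": "payments", "environment": "staging"}},
    {"cost": 80.0,  "tags": {"team": "search",   "environment": "prod"}},
    {"cost": 12.0,  "tags": {}},
]
print(cost_by_tag(records, "team"))
# {'payments': 165.5, 'search': 80.0, '(untagged)': 12.0}
```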
Right-Sizing Compute Resources
Right-sizing — matching instance types and sizes to actual workload requirements — is the single highest-impact cost optimization activity in most newly migrated environments. The instance types selected during migration are necessarily based on on-premises performance data and safety margins; actual cloud utilization data typically reveals that substantial downsizing is possible without any performance impact.
AWS Compute Optimizer analyzes CPU, memory, network, and disk utilization over a 14-day lookback window and generates right-sizing recommendations for EC2 instances, EBS volumes, Lambda functions, and ECS tasks. Azure Advisor provides equivalent recommendations for Azure Virtual Machines and other compute services. These tools are free and their recommendations are based on actual utilization data, not estimates — they should be the first tool you deploy in your post-migration optimization program.
Be systematic about implementing right-sizing recommendations. For non-production environments, implement all recommendations immediately — there is no reason to over-provision development and staging environments. For production workloads, review each recommendation and validate it against your performance requirements before implementing. Schedule right-sizing changes during maintenance windows and monitor performance metrics closely for 48 hours after each change to confirm the new sizing is adequate.
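The triage rule above can be sketched as a simple split on the environment tag. The record shape here is hypothetical, not the actual Compute Optimizer response format, but the workflow it encodes matches the policy described: non-production recommendations go straight to an apply queue, production recommendations go to review:

```python
def triage_rightsizing(recommendations):
    """Split right-sizing recommendations into auto-apply and manual-review queues.

    Policy: non-production changes apply immediately; production changes are
    validated against performance requirements first.
    """
    auto_apply, needs_review = [], []
    for rec in recommendations:
        if rec["environment"] == "production":
            needs_review.append(rec)
        else:
            auto_apply.append(rec)
    return auto_apply, needs_review

# Hypothetical recommendation records (not the real Compute Optimizer schema).
recs = [
    {"instance": "i-0a1", "environment": "production",
     "current": "m5.2xlarge", "recommended": "m5.xlarge"},
    {"instance": "i-0b2", "environment": "staging",
     "current": "m5.xlarge", "recommended": "m5.large"},
]
auto, review = triage_rightsizing(recs)
print(len(auto), len(review))  # 1 1
```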
Pay particular attention to memory right-sizing. On-premises servers were commonly sized with significant memory headroom because memory was cheaper than the risk of a memory-constrained performance event. In the cloud, you pay for memory continuously whether you use it or not. AWS and Azure both offer granular memory-to-CPU ratio options through their instance families; matching the instance family to your application's actual memory-to-CPU profile can deliver substantial savings compared to using a general-purpose instance type as the default.
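To make the family-matching idea concrete: AWS's general-purpose, compute-optimized, and memory-optimized families differ mainly in their memory-per-vCPU ratio, so the selection reduces to comparing your observed ratio against the family ratios. The ratios below are approximate illustrative values; verify against current instance specifications before acting on them:

```python
# Approximate GiB-of-memory-per-vCPU ratios for common AWS families
# (illustrative values; confirm against current instance specifications).
FAMILY_RATIOS = {"c5": 2, "m5": 4, "r5": 8}

def suggest_family(peak_mem_gib, vcpus_needed):
    """Pick the cheapest-ratio family whose memory-per-vCPU covers the workload."""
    needed = peak_mem_gib / vcpus_needed
    for family, ratio in sorted(FAMILY_RATIOS.items(), key=lambda kv: kv[1]):
        if ratio >= needed:
            return family
    # Nothing fits: fall back to the highest-memory family.
    return max(FAMILY_RATIOS, key=FAMILY_RATIOS.get)

print(suggest_family(peak_mem_gib=6, vcpus_needed=4))   # c5
print(suggest_family(peak_mem_gib=28, vcpus_needed=4))  # r5
```

A workload peaking at 1.5 GiB per vCPU has no business on a memory-optimized instance, and moving it to a compute-optimized family captures the difference.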
Reserved Instances and Savings Plans
On-demand pricing is the correct pricing model during migration and in the first 30-60 days of cloud operation. Once you have a clear picture of your baseline compute requirements — the resources that will be running continuously for the foreseeable future — you should convert that baseline to reserved capacity or savings plans to capture the significant discounts available for committed usage.
AWS offers Reserved Instances (RIs) with discounts of 30-60 percent compared to on-demand pricing for 1- or 3-year commitments. AWS Savings Plans offer similar discounts with more flexibility — Compute Savings Plans apply across instance families, sizes, and regions, which makes them a better fit for environments where instance types may change over time. For most enterprises, a combination of 1-year Compute Savings Plans for the flexible baseline and 1-year RIs for specific high-utilization instances provides the best balance of savings and flexibility.
Azure Reservations and Azure Savings Plans provide analogous benefits on the Azure side. One important consideration: reserved capacity discounts are largest for 3-year commitments, but committing compute capacity three years in advance requires confidence about your future requirements that most organizations do not have immediately after migration. Start with 1-year commitments on well-understood baseline workloads and expand to 3-year terms as your cloud environment stabilizes and your confidence in your future requirements grows.
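The arithmetic behind the commitment decision is worth writing down explicitly. This sketch uses a hypothetical baseline spend and an assumed 35 percent discount, which sits inside the 30-60 percent range quoted above:

```python
def commitment_savings(on_demand_hourly, discount, hours_per_month=730):
    """Monthly savings from moving steady-state spend to committed pricing.

    on_demand_hourly: current on-demand spend ($/hour) for always-on baseline.
    discount: fractional discount of the commitment vs. on-demand.
    """
    committed_hourly = on_demand_hourly * (1 - discount)
    return (on_demand_hourly - committed_hourly) * hours_per_month

# Hypothetical: $4.20/hour of always-on compute at a 35% savings-plan discount.
monthly = commitment_savings(on_demand_hourly=4.20, discount=0.35)
print(round(monthly, 2))
```

The key input is the baseline itself: only spend you are confident will run for the full term should go into `on_demand_hourly`, since unused commitment is paid for regardless.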
Storage Optimization
Storage costs are commonly underestimated in cloud cost projections and over-provisioned in newly migrated environments. The default approach during migration — provision enough storage to comfortably hold all existing data plus growth headroom — is correct for migration purposes but leaves significant optimization opportunity on the table post-migration.
For block storage (EBS on AWS, Managed Disks on Azure), audit for volumes that are attached but heavily underutilized and, more importantly, for volumes not attached to any running instance at all. Detached volumes continue to incur storage charges indefinitely; orphaned volumes are a common source of wasted cloud spend and are easy to identify and eliminate. For attached volumes, review the storage tier: gp2 EBS volumes with low I/O utilization should be considered for conversion to gp3, which provides lower base cost and allows you to independently configure IOPS and throughput rather than paying for the bundled allocation in gp2.
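A quick audit of both opportunities can be sketched as below. The per-GB prices are ballpark us-east-1 figures used for illustration only; check current pricing, and note that a real audit would pull volume data via the EC2 API rather than a hardcoded list:

```python
# Illustrative monthly $/GB prices (ballpark; verify against current pricing).
GP2_PER_GB = 0.10
GP3_PER_GB = 0.08

def volume_waste(volumes):
    """Return (monthly cost of unattached volumes, monthly gp2->gp3 savings)."""
    orphaned_cost = sum(v["size_gb"] * GP2_PER_GB
                        for v in volumes if not v["attached"])
    gp3_savings = sum(v["size_gb"] * (GP2_PER_GB - GP3_PER_GB)
                      for v in volumes if v["attached"] and v["type"] == "gp2")
    return round(orphaned_cost, 2), round(gp3_savings, 2)

# Hypothetical volume inventory.
volumes = [
    {"id": "vol-01", "size_gb": 500, "type": "gp2", "attached": False},
    {"id": "vol-02", "size_gb": 200, "type": "gp2", "attached": True},
    {"id": "vol-03", "size_gb": 100, "type": "gp3", "attached": True},
]
print(volume_waste(volumes))  # (50.0, 4.0)
```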
For object storage (S3 on AWS, Azure Blob Storage), implement lifecycle policies that automatically transition data to lower-cost storage tiers as it ages. Data that is accessed frequently in the first 30 days of its life but rarely afterward is a perfect candidate for S3 Intelligent-Tiering, which automatically moves data between access tiers based on access patterns with no retrieval fees for infrequently accessed data. For archival data with predictable access patterns (accessed no more than once per quarter), S3 Glacier Instant Retrieval can reduce storage costs by up to 68 percent compared to S3 Standard-Infrequent Access.
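A lifecycle policy of the kind described looks roughly like the following. The bucket prefix, rule name, and transition days are hypothetical; the structure matches the S3 lifecycle configuration API, and would be applied with `put_bucket_lifecycle_configuration` after validating the age thresholds against your own access data:

```python
# Sketch of an S3 lifecycle configuration (hypothetical prefix and thresholds).
lifecycle = {
    "Rules": [
        {
            "ID": "age-out-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [
                # After 30 days, let Intelligent-Tiering manage access tiers.
                {"Days": 30, "StorageClass": "INTELLIGENT_TIERING"},
                # After 90 days, move to Glacier Instant Retrieval.
                {"Days": 90, "StorageClass": "GLACIER_IR"},
            ],
            # Delete entirely after one year, if retention policy allows.
            "Expiration": {"Days": 365},
        }
    ]
}
```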
Database Cost Optimization
Managed database services (RDS, Aurora, Azure SQL Database) provide significant operational value but carry premium pricing that can become a major cost center if not managed carefully. The same right-sizing principles that apply to compute apply equally to database instances — and database instances are often even more over-provisioned than compute, because database performance issues have historically been addressed by throwing hardware at the problem.
Review RDS instance sizes against actual CPU, memory, and IOPS utilization. Aurora in particular can significantly over-provision storage capacity due to its minimum cluster volume requirements; for smaller databases, consider whether Aurora Serverless v2 provides a better cost profile by automatically scaling capacity to match actual demand rather than provisioning for peak. For databases with predictable workload patterns, Reserved Instances for RDS provide the same 30-60 percent discounts as for EC2 and should be applied to any database instance expected to run continuously for the next twelve months.
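The provisioned-versus-serverless decision comes down to how spiky the workload is. This sketch compares a fixed instance sized for peak against capacity that scales with demand; the per-ACU and per-hour prices are assumed values for illustration, not current list prices:

```python
# Assumed illustrative prices (not current list prices).
ACU_HOUR_PRICE = 0.12      # $/ACU-hour for scaled capacity
PROVISIONED_HOURLY = 0.50  # $/hour for a fixed instance sized for peak load

def monthly_costs(hourly_acu_profile, hours_per_month=730):
    """Compare monthly cost of demand-scaled vs. fixed database capacity.

    hourly_acu_profile: average capacity units consumed in each hour of a
    typical day (hypothetical workload shape).
    """
    avg_acus = sum(hourly_acu_profile) / len(hourly_acu_profile)
    serverless = avg_acus * ACU_HOUR_PRICE * hours_per_month
    provisioned = PROVISIONED_HOURLY * hours_per_month
    return round(serverless, 2), round(provisioned, 2)

# Busy 8 hours at 8 capacity units, quiet 16 hours at 1 unit.
profile = [8] * 8 + [1] * 16
print(monthly_costs(profile))
```

For this workload shape, the scaled option wins because average demand is well below peak; a flat 24/7 workload would favor the provisioned instance with a reservation applied.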
Building a Continuous Cost Management Practice
Cost optimization is not a one-time project — it is an ongoing practice. Cloud environments change continuously: new resources are provisioned, utilization patterns shift, new pricing options become available. Organizations that treat cost management as a quarterly audit rather than a continuous process consistently spend more than those that integrate cost awareness into their day-to-day engineering and operations practices.
Establish a FinOps team or assign FinOps responsibilities to an existing cloud platform team. Set monthly cost budgets per team or application and implement alerting when spend exceeds 80 percent of the monthly budget. Schedule monthly cost reviews where team leads examine their cost trends and identify optimization opportunities. Build cost metrics into your engineering KPIs alongside reliability and performance — cost efficiency is not a finance department problem, it is an engineering quality indicator.
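The 80-percent alerting rule is simple enough to state as code. Team names and figures below are hypothetical; in production this logic lives in AWS Budgets or Azure Cost Management alerts rather than a script:

```python
def budget_alerts(spend_by_team, budgets, threshold=0.8):
    """Flag teams whose month-to-date spend exceeds 80% of monthly budget.

    Teams with no budget defined are never flagged here; in practice a
    missing budget should itself raise a governance alert.
    """
    return {
        team: spend
        for team, spend in spend_by_team.items()
        if spend >= budgets.get(team, float("inf")) * threshold
    }

# Hypothetical monthly budgets and month-to-date spend.
budgets = {"payments": 10_000, "search": 5_000}
spend = {"payments": 8_500, "search": 2_100}
print(budget_alerts(spend, budgets))  # {'payments': 8500}
```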
Key Takeaways
- Expect 30-50 percent cost reduction potential in newly migrated environments — the first cloud bill reflects migration-time estimates, not optimized provisioning.
- Mandatory resource tagging is the prerequisite for all cost optimization — without attribution, you cannot manage what you cannot measure.
- AWS Compute Optimizer and Azure Advisor are free tools that generate data-driven right-sizing recommendations — deploy them in the first week post-migration.
- Convert baseline compute to Reserved Instances or Savings Plans once your requirements are clear — discounts of 30-60 percent are available for 1-year commitments.
- Storage lifecycle policies and intelligent tiering can reduce object storage costs by 40-70 percent for data with typical access aging patterns.
- FinOps is an ongoing practice, not a one-time optimization project — build cost visibility and accountability into your regular engineering operations.
Conclusion
The organizations that get the most value from cloud are not those that pay the least initially — they are those that build the organizational practices and technical disciplines to continuously optimize their cloud investment over time. The post-migration optimization work described here is the starting point for that practice, not the end state. As your cloud environment evolves and your engineering teams develop deeper cloud fluency, the optimization opportunities will shift and new categories of savings will emerge.
If you have completed a migration recently and your cloud bill is higher than you expected, we recommend starting with the tagging audit and the compute right-sizing analysis — these two activities consistently deliver the fastest return and establish the visibility foundation for everything else. Our team is available to review your environment and provide a structured cost optimization assessment if you would like a guided approach.