Amazon EBS (Elastic Block Store) is a high-performance, durable, network-attached block storage service designed for use with Amazon EC2 instances. It provides persistent storage that exists independently of the EC2 instance lifecycle: data survives instance stops and reboots, and can survive termination as well, depending on the volume's DeleteOnTermination setting.
Amazon EBS is one of the core AWS storage services. Unlike the ephemeral instance store (which loses data when an instance stops), EBS volumes are persistent — they live independently of the EC2 instances they are attached to. Think of EBS as a virtual hard drive or USB drive in the cloud: you can attach it to an instance, use it as a file system or database storage, detach it, and reattach it to a different instance. EBS is designed for workloads that require low-latency, high-throughput access to data from a single EC2 instance.
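The attach/detach lifecycle described above can be sketched with the AWS CLI. The volume ID, instance IDs, Availability Zone, and device name below are placeholders, not real resources:

```shell
# Create a 100 GiB gp3 volume in a specific Availability Zone.
aws ec2 create-volume \
    --availability-zone us-east-1a \
    --size 100 \
    --volume-type gp3 \
    --tag-specifications 'ResourceType=volume,Tags=[{Key=Name,Value=data-vol}]'

# Attach it to a running instance in the same AZ (IDs are placeholders).
aws ec2 attach-volume \
    --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 \
    --device /dev/sdf

# Later: detach it and reattach it to a different instance in the same AZ.
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 attach-volume \
    --volume-id vol-0123456789abcdef0 \
    --instance-id i-0fedcba9876543210 \
    --device /dev/sdf
```

Note that the volume's data is untouched by the detach and reattach; only the instance association changes.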
Persistent storage — data is retained even after the EC2 instance is stopped or terminated (unless DeleteOnTermination is set to true)
Network-attached — EBS volumes communicate with the EC2 instance over the AWS network, not via a physical cable
Availability Zone-bound — an EBS volume is created in a specific AZ and can only be attached to instances in the same AZ
Scalable — volume size, type, and provisioned IOPS can be modified on the fly (Elastic Volumes) without detaching the volume or stopping the instance
Snapshot-capable — you can take point-in-time snapshots of EBS volumes to Amazon S3 for backup, migration, and AMI creation
Encryption-ready — supports AES-256 encryption at rest and in transit using AWS KMS keys
Multiple volume types — choose from SSD or HDD-based volumes depending on the workload's IOPS and throughput requirements
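Several of the features above map directly onto AWS CLI commands. A brief sketch, where the volume ID and KMS key alias are placeholders:

```shell
# Encryption-ready: create an encrypted gp3 volume (AES-256 at rest via AWS KMS).
aws ec2 create-volume \
    --availability-zone us-east-1a \
    --size 200 \
    --volume-type gp3 \
    --encrypted \
    --kms-key-id alias/my-ebs-key

# Elastic Volumes: grow the size and raise IOPS while the volume stays in use.
aws ec2 modify-volume \
    --volume-id vol-0123456789abcdef0 \
    --size 400 \
    --iops 6000

# Snapshot-capable: take a point-in-time snapshot (stored in Amazon S3).
aws ec2 create-snapshot \
    --volume-id vol-0123456789abcdef0 \
    --description "Nightly backup"
```

After `modify-volume` grows a volume, the filesystem on it must still be extended from within the instance (e.g. with `xfs_growfs` or `resize2fs`) before the new capacity is usable.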
Root (boot) volumes for EC2 instances — stores the OS and base software
Relational databases (MySQL, PostgreSQL, Oracle) — require consistent, low-latency block storage
NoSQL databases (MongoDB, Cassandra) — benefit from high IOPS SSD volumes
Data warehousing and analytics — throughput-optimized HDD volumes for large sequential reads
Enterprise applications (SAP, Oracle ERP) — mission-critical workloads needing durable storage
Big data processing — storing large datasets for Hadoop, Spark, and similar frameworks
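Before a freshly attached volume can back any of the workloads above, it needs a filesystem and a mount point on the instance. A minimal sketch, run on the EC2 instance itself; the device name /dev/xvdf is an assumption and varies by instance type (NVMe-based instances expose it as /dev/nvme1n1 or similar):

```shell
# Identify the new block device and confirm it is empty.
lsblk

# Create a filesystem on the volume (destroys any existing data on it).
sudo mkfs -t xfs /dev/xvdf

# Mount it and verify the capacity is visible.
sudo mkdir -p /data
sudo mount /dev/xvdf /data
df -h /data

# To persist the mount across reboots, add an /etc/fstab entry keyed on the UUID.
sudo blkid /dev/xvdf
```

Root volumes are prepared automatically from the AMI; these steps apply to additional data volumes.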