AWS offers hundreds of EC2 instance types to serve different workloads. Each instance type belongs to a family that describes its primary hardware strength — General Purpose, Compute Optimized, Memory Optimized, Storage Optimized, or Accelerated Computing — and choosing the right one is critical for both performance and cost efficiency. The naming convention encodes the family, generation, optional attributes, and size in a compact string: [Family][Generation][Attributes].[Size].
Family letter — describes the primary characteristic: t (burstable), m (general), c (compute), r (memory), i (storage), g (GPU), p (ML/AI), x (extreme memory), d (dense storage), h (HDD storage)
Generation number — higher means newer hardware (e.g., m5 is newer than m4)
Optional attributes — a (AMD CPU), g (AWS Graviton/ARM), i (Intel), n (network optimized), d (NVMe storage), e (extra storage/memory), z (high frequency)
Size — nano, micro, small, medium, large, xlarge, 2xlarge, 4xlarge, 8xlarge, 12xlarge, 16xlarge, 24xlarge, metal
Examples: t3.micro = burstable gen3 micro | c5n.xlarge = compute gen5 network-optimized xlarge | m6g.2xlarge = general gen6 Graviton 2xlarge
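As a sketch of the convention above, the pattern can be decoded mechanically. The parser below handles the common [family][generation][attributes].[size] shape shown in the examples; specialty names that deviate from it (e.g. hyphenated high-memory types) are deliberately rejected rather than guessed at.

```python
import re

# [family letters][generation digits][optional attribute letters].[size]
_PATTERN = re.compile(r"^([a-z]+?)(\d+)([a-z]*)\.([a-z0-9]+)$")

def parse_instance_type(name: str) -> dict:
    """Split e.g. 'c5n.xlarge' into family, generation, attributes, size."""
    m = _PATTERN.match(name)
    if m is None:
        raise ValueError(f"unrecognized instance type: {name!r}")
    family, generation, attributes, size = m.groups()
    return {
        "family": family,             # 'c' -> compute optimized
        "generation": int(generation),  # higher = newer hardware
        "attributes": list(attributes), # e.g. ['n'] = network optimized
        "size": size,                   # 'xlarge', '2xlarge', 'metal', ...
    }
```

For example, `parse_instance_type("m6g.2xlarge")` yields family `m`, generation `6`, attribute `g` (Graviton), size `2xlarge`, matching the third example above.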
t3, t3a, t4g — Burstable performance instances. Low baseline CPU with burst credits. Best for: dev/test environments, small websites, low-traffic apps. Cost-efficient for variable workloads.
m5, m6g, m6i, m7g — Balanced CPU and memory. Best for: web servers, app servers, small databases, backend services, enterprise applications.
mac1, mac2 — macOS instances on dedicated Mac hardware: mac1 runs Intel Core i7, mac2 runs Apple silicon (M1). Best for: iOS/macOS app development and testing.
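The burst-credit mechanic behind the t-family can be sketched numerically. The figures below are t3.micro's published values (12 credits earned per hour, 2 vCPUs, 288-credit cap) used as illustrative assumptions; one credit equals one vCPU running at 100% for one minute, and in standard mode the balance cannot go negative.

```python
# Illustrative t3.micro figures -- check AWS docs for current values.
CREDITS_PER_HOUR = 12   # earned rate; 12 vCPU-min/hr = 10% of 2 vCPUs
MAX_CREDITS = 288       # accrual cap (24 hours of earning)

def credit_balance(hourly_utilization, start=0):
    """Track the CPU credit balance hour by hour.

    hourly_utilization: average CPU use each hour, in vCPUs
    (2.0 means both vCPUs fully busy; 0.2 is the baseline).
    """
    balance = start
    history = []
    for util in hourly_utilization:
        spent = util * 60  # vCPU-minutes consumed this hour
        balance = min(balance + CREDITS_PER_HOUR - spent, MAX_CREDITS)
        balance = max(balance, 0)  # standard mode: no negative balance
        history.append(balance)
    return history
```

Running at exactly the baseline (0.2 vCPUs) earns and spends credits at the same rate, which is why sustained load above baseline eventually throttles an instance that has exhausted its balance.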
c5, c5n, c6g, c6i, c7g — High CPU-to-memory ratio. Best for: batch processing, high-performance web servers, scientific modeling, media transcoding, gaming servers, machine learning inference.
c5n — Same as c5 but with higher network bandwidth (up to 100 Gbps). Best for: HPC and network-intensive workloads.
r5, r6g, r6i, r7g — High memory-to-CPU ratio. Best for: in-memory databases (Redis, Memcached), real-time analytics, large-scale caching, SAP HANA.
x1, x1e, x2gd — Extreme memory (up to 3.9 TB RAM on x1e). Best for: in-memory databases, SAP HANA, Apache Spark.
z1d — High frequency (up to 4.0 GHz) with large memory. Best for: financial simulations, EDA (Electronic Design Automation), relational databases.
i3, i3en, i4g, i4i — NVMe SSD-backed storage with very high IOPS. Best for: high-frequency OLTP databases, NoSQL databases (Cassandra, MongoDB), data warehousing.
d2, d3, d3en — High-density HDD storage (up to 336 TB). Best for: Hadoop/HDFS, data warehouses, distributed file systems.
h1 — High disk throughput with HDD. Best for: MapReduce, distributed file systems like HDFS.
p3, p4d — NVIDIA data-center GPUs (Tesla V100 on p3, A100 on p4d). Best for: deep learning training, scientific simulations, molecular modeling.
g3, g4dn, g4ad, g5 — NVIDIA/AMD GPUs. Best for: graphics-intensive applications, ML inference, video encoding, gaming.
trn1 — AWS Trainium chips. Best for: cost-efficient deep learning training.
inf1, inf2 — AWS Inferentia chips. Best for: high-throughput, low-latency ML inference at scale.
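The catalog above boils down to a workload-to-family mapping, which can be captured as a simple lookup. The labels and recommendations below are condensed from this document, not from any AWS API, and a real selection would also weigh size, generation, and price.

```python
# Toy lookup mirroring the family catalog above (labels are this
# document's shorthand, not AWS terms).
FAMILY_FOR_WORKLOAD = {
    "dev-test": "t3",             # burstable, cost-efficient
    "web-server": "m5",           # balanced CPU/memory
    "batch-processing": "c5",     # high CPU-to-memory ratio
    "in-memory-database": "r5",   # high memory-to-CPU ratio
    "oltp-database": "i3",        # NVMe SSD, very high IOPS
    "hadoop": "d3",               # dense HDD storage
    "dl-training": "p4d",         # NVIDIA A100 GPUs
    "ml-inference": "inf2",       # AWS Inferentia
}

def suggest_family(workload: str) -> str:
    """Return the instance family this catalog recommends for a workload."""
    try:
        return FAMILY_FOR_WORKLOAD[workload]
    except KeyError:
        raise ValueError(f"no recommendation for {workload!r}") from None
```

A table like this makes the decision explicit and easy to review, but it should be treated as a starting point; for instance, ML inference also appears under the c-family and g-family above when GPU or CPU inference fits better.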