Losing My Relations
To use DynamoDB effectively, toss the relational rules out.
- Schema isn't optional. It's just implicit in your access patterns.
- Joins are replaced by careful key design.
- Denormalization isn't a shortcut. It's often the right call.
- You design for query efficiency, not entity purity.
 
So you can't just throw JSON at it and walk away. However, if you design it right, DynamoDB will reward you with blistering speed, zero-maintenance scaling, and predictable cost. Just don't forget your user#id prefixes.
I finally sat down and mainlined some of Alex DeBrie's DynamoDB content (📺 this talk, 📚 this site). If you're like me, raised on a strict SQL diet with extra 3rd Normal Form, you probably think of NoSQL as a land of chaotic freedom. Just throw JSON at it and walk away, right?
Wrong!
DynamoDB looks like a chill, schema-less key-value store. But it's more like a picky librarian who files everything just so for the sake of performance & peace of mind. And once you start to understand why, you begin to see it not as a lesser cousin to Postgres, but as a different beast entirely, designed for predictable performance at any scale.
The Trap of Schema-less
Yes, technically you can throw arbitrary attributes at a DynamoDB item. But that doesn't mean you should. The schema isn't enforced by the engine. It's enforced by your access patterns. If you don't plan those up front, good luck retrofitting them later.
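To see what "the engine won't stop you" means in practice, here's a minimal boto3 sketch (the table name and attribute names are hypothetical). Both writes succeed, because DynamoDB only validates the key attributes:

```python
import boto3

# Assumes a hypothetical table "AppTable" whose only declared key
# is a partition key named "PK".
table = boto3.resource("dynamodb").Table("AppTable")

# Both writes succeed; DynamoDB validates the key attributes and nothing else.
table.put_item(Item={"PK": "USER#1234", "name": "Ada", "plan": "pro"})
table.put_item(Item={"PK": "USER#5678", "nickname": "grace", "tags": ["admin"]})

# The real "schema" lives in the code that reads these items back, which is
# exactly why you plan access patterns before you write a single item.
```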
Partition Keys Are the Real MVP
Everything revolves around the partition key.
- If you're using a simple primary key, it's like a primary key in SQL: unique, one item per key.
- But with a composite primary key (partition key & sort key, a.k.a. PK & SK), you unlock more powerful query patterns.
- Think of a composite primary key as defining a mini-table: an item collection that shares the same partition key.
- Since all items with the same partition key are co-located, queries on that key are fast & efficient.
 
So yeah, designing your primary key isn't a footnote. It is your schema.
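Here's what that buys you, as a minimal sketch (again assuming a hypothetical table with generic PK/SK attributes): one Query call returns the entire item collection for a key, already sorted by sort key.

```python
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("AppTable")

# All items sharing PK = "USER#1234" live in one item collection,
# so a single Query fetches the whole lot, sorted by SK.
resp = table.query(KeyConditionExpression=Key("PK").eq("USER#1234"))
for item in resp["Items"]:
    print(item["SK"], item)
```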
Think in Access Patterns, Not Entity Models
With relational databases, we normalize first and figure out access patterns later. In DynamoDB, you do the opposite:
- Start by listing how your app will query data.
- Then structure the data to support those access patterns, potentially duplicating and denormalizing along the way.
- DynamoDB shines when you can answer a business question with a single query, no joins needed (see the sketch below).
 
This shift broke my brain a little, but once I got it, it made sense.
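Here's that workflow as a sketch, with a made-up storefront: list the access patterns first, then shape the keys so each pattern is exactly one Query. The table layout and names are my invention, not a prescribed model.

```python
import boto3
from boto3.dynamodb.conditions import Key

# Hypothetical access patterns, decided BEFORE the table exists:
#   1. Get a customer's profile -> PK = CUSTOMER#<id>, SK = PROFILE
#   2. Get a customer's orders  -> PK = CUSTOMER#<id>, SK begins_with ORDER#
# Each pattern maps to one Query; the table is shaped to fit.
table = boto3.resource("dynamodb").Table("AppTable")

# Pattern 2: one request, no join, no scan.
orders = table.query(
    KeyConditionExpression=Key("PK").eq("CUSTOMER#1234")
    & Key("SK").begins_with("ORDER#")
)["Items"]
```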
Denormalize, Just Not Recklessly
Denormalization in DynamoDB isn't a sin. It's a strategy. It's fine to duplicate data when either a) the data doesn't change often (or at all), or b) it isn't replicated in too many places. You can also use complex attribute structures to pack nested data into a single item, as long as you stay under the 400KB item limit. The examples given: something like a user's payment methods is a good candidate (small and bounded), while their orders are not, given the unpredictable growth. Remember: no joins. If you need multiple records to answer a question, try to pre-join them via clever PK+SK design.
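A hedged sketch of that payment-methods example (the item shape is my guess, not DeBrie's exact model): the bounded list nests inside the user item, while the unbounded orders get their own items instead.

```python
import boto3

table = boto3.resource("dynamodb").Table("AppTable")

# Payment methods: small, bounded, and read alongside the user -> nest them.
table.put_item(
    Item={
        "PK": "USER#1234",
        "SK": "PROFILE",
        "name": "Ada",
        "payment_methods": [  # a List of Maps; one item, one read
            {"type": "card", "last4": "4242"},
            {"type": "paypal", "email": "ada@example.com"},
        ],
    }
)
# Orders would NOT belong here: an unbounded list marches toward the 400KB
# item cap, so each order becomes its own item in the collection instead.
```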
Field Identifiers Are a Pattern, Not a Hack
I kept seeing attribute values like user#1234 or ts#2023-01-01T00:00:00Z#order123. At first I thought this was some lazy string prefixing. Nope! It's a deliberate pattern to:
- Enable polymorphism on the same table
- Support efficient querying and sorting
- Model hierarchical or grouped data
 
It's like turning your strings into mini-schemas. Embrace it.
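For instance (hypothetical key layout, not a prescribed one): because ISO-8601 timestamps sort lexicographically, a ts# prefix turns the sort key into a free time index.

```python
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("AppTable")

# SK values like "ts#2023-01-15T09:30:00Z#order123" sort chronologically
# as plain strings, so a range query doubles as a time-window query.
january = table.query(
    KeyConditionExpression=Key("PK").eq("USER#1234")
    & Key("SK").between("ts#2023-01-01", "ts#2023-02-01")
)["Items"]
```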
Indexes: Read the Fine Print
- Local Secondary Indexes (LSIs): Must be defined at table creation. You can't add them later.
- Global Secondary Indexes (GSIs): You can add them later, but reads from them are always eventually consistent.
- Every index copies the attributes you project into it, so choose wisely, or you'll pay in storage and write costs.
 
Bottom line: indexes aren't just "make this faster." They're materialized views with cost and consistency tradeoffs.
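Here's how those tradeoffs show up at table-creation time, in a boto3 sketch (the index names and attributes are invented): the LSI has to ride along now, and each index's Projection decides how much of every item gets copied.

```python
import boto3

client = boto3.client("dynamodb")

client.create_table(
    TableName="AppTable",
    AttributeDefinitions=[
        {"AttributeName": "PK", "AttributeType": "S"},
        {"AttributeName": "SK", "AttributeType": "S"},
        {"AttributeName": "status", "AttributeType": "S"},
        {"AttributeName": "created_at", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "PK", "KeyType": "HASH"},
        {"AttributeName": "SK", "KeyType": "RANGE"},
    ],
    LocalSecondaryIndexes=[
        {
            "IndexName": "ByCreatedAt",  # same PK, alternate sort key; now or never
            "KeySchema": [
                {"AttributeName": "PK", "KeyType": "HASH"},
                {"AttributeName": "created_at", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "KEYS_ONLY"},  # copy less, pay less
        }
    ],
    GlobalSecondaryIndexes=[
        {
            "IndexName": "ByStatus",  # entirely new partition key; addable later too
            "KeySchema": [
                {"AttributeName": "status", "KeyType": "HASH"},
                {"AttributeName": "SK", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "ALL"},  # full copy of every item
        }
    ],
    BillingMode="PAY_PER_REQUEST",
)
```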
Other Odds & Ends
- No persistent connections means no need for a connection pooler: one less headache, especially in Lambda Land.
- Items = rows in SQL parlance, but they can vary wildly in shape.
- Max item size is 400KB. That's the ceiling. Plan accordingly.
 
That's me in the IDE
That's me in the console
Losing my relations
Trying to keep up with load
And I know Dynamo can do it
Dynamo scales so much

Dynamo is bigger
Bigger than YOU???
And YOU is not ME???
The lengths you have to go to
To de-normalize and pre-join your data
Your data deserves that much