In Amazon Aurora, you can use fast DDL to run an ALTER TABLE operation in place, nearly instantaneously. The operation completes without requiring the table to be copied and without having a material impact on other DML statements.
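The specific operation this applies to is adding a nullable column at the end of a table. As a minimal sketch of issuing such a statement and timing it, assuming the mysql-connector-python package and an Aurora MySQL endpoint (the host, credentials, table name, and column name below are placeholders, not values from this article):

```python
import time

import mysql.connector  # assumes the mysql-connector-python package is installed

# Placeholder connection details for an Aurora MySQL cluster endpoint.
conn = mysql.connector.connect(
    host="my-cluster.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com",
    user="admin",
    password="secret",
    database="mydb",
)
cur = conn.cursor()

# Adding a nullable column at the end of a table is the case fast DDL covers.
start = time.time()
cur.execute("ALTER TABLE customers ADD COLUMN loyalty_tier VARCHAR(20) NULL")
print(f"ALTER TABLE returned in {time.time() - start:.3f} seconds")

cur.close()
conn.close()
```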
Background:
Applications change, and you need to change the underlying database schema in response. Query workloads change, and you need to add or drop indexes. Data formats change, and you need to change the data types of existing columns. And change can happen frequently.
As any MySQL DBA knows, these schema changes can disrupt production systems. They are slow, sometimes taking hours or days to complete. They hog system resources and reduce the throughput of the actual application. Long-running operations can require long crash recoveries. They require write locks for portions of the DDL operation, making parts of the application unavailable. And they can require a lot of temporary space, which can exhaust disk on smaller instances.
How is this handled now?
Let’s take a look at how MySQL implements adding a nullable column at the end of a table.
Here’s the MySQL sequence of operations:
The database takes an exclusive lock on the original table during the prepare phase of the transaction.
It creates a new, empty table with the desired schema.
It copies the data over one row at a time, updating indexes as it goes. Concurrent data manipulation language (DML) statements are logged to a temp file.
It once again takes an exclusive lock and applies the DML operations from the temp file to the new table. If there are a lot of operations to apply, this process can take a while.
It then drops the original table and renames the new table to be the original table.
There’s a lot of locking, a lot of overhead copying data and building indexes, a lot of I/O, and for active tables, a lot of temp space being used.
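To make the sequence above concrete, here is a toy Python sketch of the copy-and-swap approach. The in-memory lists, the lock, and the DML log are stand-ins for what the storage engine actually does with on-disk tables, table locks, and temp files; this is illustrative, not MySQL's real code.

```python
import threading

class CopyAlterTable:
    """Toy model of MySQL's copy-based ALTER TABLE for adding a nullable column."""

    def __init__(self, original_rows):
        self.original = list(original_rows)   # rows modeled as dicts
        self.dml_log = []                     # stands in for the temp file of DML
        self.table_lock = threading.Lock()    # stands in for the exclusive table lock

    def concurrent_insert(self, row):
        # DML that arrives while the copy is running is only logged.
        self.dml_log.append(("insert", row))

    def run(self, new_column):
        with self.table_lock:                 # exclusive lock during the prepare phase
            new_table = []                    # new, empty table with the desired schema

        for row in self.original:             # copy one row at a time
            copied = dict(row)
            copied[new_column] = None         # the added column is nullable
            new_table.append(copied)
            # (secondary indexes would be updated here as each row lands)

        with self.table_lock:                 # exclusive lock again to catch up
            for op, row in self.dml_log:      # replay the logged DML onto the new table
                if op == "insert":
                    replayed = dict(row)
                    replayed.setdefault(new_column, None)
                    new_table.append(replayed)
            self.original = new_table         # drop old table, rename new one into place
        return self.original

alter = CopyAlterTable([{"id": 1}, {"id": 2}])
alter.concurrent_insert({"id": 3})            # arrives while the copy is in flight
print(alter.run("loyalty_tier"))
```

Every row passes through the copy loop, which is why the cost grows with table size and why concurrent changes pile up in the log while the copy runs.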
Amazon Aurora fast DDL offers a better way:
In Aurora, when a user issues a DDL statement:
The database updates the INFORMATION_SCHEMA system table with the new schema. In addition, the database timestamps the operation, records the old schema into a new system table (Schema Version Table), and propagates this change to read replicas.
Then, on subsequent DML operations, Aurora checks whether the affected data page has a pending schema operation. That's easily done by comparing the log sequence number (LSN) timestamp of the page with the LSN timestamps of the schema changes. If needed, the page is updated to the new schema before the DML statement is applied. This works the same way as upgrading pages from redo-undo records, and any I/O is piggybacked on top of user activity.
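Here is a small Python sketch of that lazy, per-page upgrade idea, under a simplified model in which a counter stands in for the LSN, a list stands in for the Schema Version Table, and pages are plain objects; the structure is illustrative, not Aurora's actual implementation.

```python
from dataclasses import dataclass

_lsn = 0
def next_lsn():
    """Monotonically increasing log sequence number (LSN) used as a timestamp."""
    global _lsn
    _lsn += 1
    return _lsn

@dataclass
class Page:
    rows: list
    lsn: int        # LSN of the last change applied to this page

# Stand-in for the Schema Version Table: (LSN of the DDL, column list) entries.
schema_versions = [(0, ["id"])]

def fast_ddl_add_column(new_column):
    """Record the new schema with an LSN timestamp; no data pages are touched."""
    _, current = schema_versions[-1]
    schema_versions.append((next_lsn(), current + [new_column]))

def apply_dml_insert(page, new_row):
    """Before applying DML, upgrade the page if a newer schema is pending for it."""
    ddl_lsn, columns = schema_versions[-1]
    if page.lsn < ddl_lsn:                    # page predates the schema change
        for row in page.rows:
            for col in columns:
                row.setdefault(col, None)     # missing columns materialize as NULL
    page.rows.append({col: new_row.get(col) for col in columns})
    page.lsn = next_lsn()

page = Page(rows=[{"id": 1}], lsn=next_lsn())
fast_ddl_add_column("loyalty_tier")           # returns immediately: metadata only
apply_dml_insert(page, {"id": 2, "loyalty_tier": "gold"})
print(page.rows)
```

The DDL itself only records metadata, so it returns almost immediately; the per-page work is deferred until a page is touched by later activity anyway.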
For more detail, read the AWS Database Blog post "Amazon Aurora Under the Hood: Fast DDL" by Anurag Gupta.