About Cloud SQL backups

This page describes how backups work for your Cloud SQL instances and the backup options available to you. For an overview of how to restore data to an instance from a backup, see Overview of restoring an instance.

Cloud SQL lets you back up your instances on-demand or automatically using a backup schedule. Cloud SQL backups are incremental and help you restore lost data to your Cloud SQL instance. With backups, you can:

  • Restore your instance to a previous state if it experiences a problem.
  • Set up Disaster Recovery (DR) by creating a new instance using a backup in a different region or zone.
  • Create multiple instances using backups to help in development, testing, and migration.

Cloud SQL backups are also encrypted by default, using either Google-managed encryption keys or customer-managed encryption keys (CMEK).

You can retain these backups by defining your instance's backup retention settings. Retention settings can differ based on your instance's Cloud SQL edition and backup option. You can also retain backups after your instance is deleted, which lets you restore the instance after deletion.

Cloud SQL offers two backup service options to manage your backups:

  • Enhanced backups: backups are managed and stored in a centralized backup management project that leverages the Backup and DR Service, and provides enforced retention, granular scheduling, and monitoring.
  • Standard backups: backups are created, managed, and stored in the same project as your Cloud SQL instances.

For more information about each backup option and its features, see Backup options.

Types of backups

Cloud SQL supports two types of backups for your instances: on-demand and automated.

On-demand backups

On-demand backups are backups that can be created at any time. These are useful if you are about to perform a risky operation on your database, or if you need a backup and don't want to wait for the backup window. You can create on-demand backups for any instance, whether the instance has automatic backups enabled or not.
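As a sketch, an on-demand backup can be created with the gcloud CLI (`gcloud sql backups create` is a real command; the instance name and description below are placeholders):

```shell
# Create an on-demand backup before a risky operation.
# "my-instance" is a placeholder instance name.
gcloud sql backups create \
    --instance=my-instance \
    --description="pre-migration backup"
```

The `--description` flag is optional but makes it easier to find the backup later with `gcloud sql backups list --instance=my-instance`.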

Automated backups

Automated backups are taken at a scheduled cadence, such as hourly, daily, weekly, or monthly. The cadence depends on your instance's backup option. The backup starts during the backup window. If possible, schedule your backups for a time when your instance has the least activity.

We recommend that you don't manually delete any automated backups because they're needed to support point-in-time recovery.

During the backup window, automated backups are regularly taken based on the scheduled cadence when your instance is running. One additional automated backup is taken after your instance is stopped to safeguard all changes prior to the instance stopping. Automated backup retention depends on the configured retention policy in the chosen backup option for your instance.
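As a sketch, the backup window for automated backups can be set with the gcloud CLI (the flag is from the gcloud sql reference; "my-instance" is a placeholder):

```shell
# Set the start of the four-hour backup window, in UTC.
# Automated backups begin sometime within this window.
gcloud sql instances patch my-instance \
    --backup-start-time=23:00
```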

Take a final backup before instance deletion

Final backups allow you to take a backup of your Cloud SQL instance before you delete the instance. This is useful to retain the instance data after you delete the instance. You can use the final backup later to either create an instance or to restore to an existing instance. For more information about accessing and viewing details about your final backup, see View a list of final backups.

By default, Cloud SQL retains the final backup for 30 days. However, you can customize how long Cloud SQL retains the backup: from 1 day to 365 days for standard backups, or from 1 day to 99 years for enhanced backups. You can then restore the instance from the backup for as long as the backup is available. Final backups are charged similarly to other backups, based on the number of days they're retained.

Retain backups after instance deletion

Retained backups are backups that Cloud SQL keeps after an instance is deleted. These consist of on-demand and automated backups created while the instance was live. When you delete an instance, these backups become independent of the instance and are stored at the project level. Retained backups are different from final backups, which are the last backups taken at the time of instance deletion.

You can update the description of these backups to make it easier to manage them in your Google Cloud project. Retained backups can be restored to a new or existing Cloud SQL instance at any time.

For these backups, the retention period is determined by the backup type and can't be changed after the instance is deleted. For standard backups, on-demand backups are kept indefinitely until either the backup is manually deleted or the project containing the backup is deleted. For enhanced backups, on-demand backups are kept based on the selected retention rule. Automated backups are deleted on a rolling basis, one backup per day, after the instance is deleted. The rolling period is defined by the instance's retention settings prior to deletion, which can range from 1 day to 99 years depending on the instance's backup option. For example, if the instance's automated backup retention setting was 7, then the latest automated backup is deleted 7 days after instance deletion.

You can delete retained backups manually at any time. However, a deleted retained backup can't be recovered.

Because instance names can be reused after an instance is deleted in Cloud SQL, retained backups are stored in your Google Cloud project with a field called instance_deletion_time. This field lets you identify whether a particular backup belongs to a live or deleted instance. You can also update a backup's description to make backups easier to manage.

Transaction log retention

Transaction log retention is in days. For Cloud SQL Enterprise Plus edition instances, the range is from 1 to 35 days, with a default of 14 days. For Cloud SQL Enterprise edition instances, the range is from 1 to 7 days, with a default of 7 days. For both Cloud SQL Enterprise Plus edition and Cloud SQL Enterprise edition instances, the transaction log retention setting must be less than the backup retention setting.
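As a sketch, the transaction log retention period can be set with the gcloud CLI (the flag is from the gcloud sql reference; "my-instance" is a placeholder):

```shell
# Retain 7 days of transaction logs for point-in-time recovery.
# The valid range depends on the instance's Cloud SQL edition.
gcloud sql instances patch my-instance \
    --retained-transaction-log-days=7
```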

Backups for replicas

Backups aren't available for replica instances. Because replica instances are copies of primary instances, backups are maintained with the primary instance. If a replica instance is promoted to a standalone instance due to a failover or switchover, then the instance is enabled for backups and requires its own backup configuration. Promoted replicas don't inherit the primary instance's backup configurations and can't access the primary instance's backups.

Backup options

Cloud SQL offers two backup service options to manage your instance's backups: standard and enhanced backups. Choose between them based on your instance's requirements. Although an instance can't use both backup options at the same time, Cloud SQL lets you switch between them as necessary.

The following table provides an overview of the features available with each backup option:

| Feature | Standard backups | Enhanced backups |
| --- | --- | --- |
| Backup vault | - | ✓ |
| Enforced retention with retention lock | - | ✓ |
| Retain backups on project deletion | - | ✓ |
| Centralized backup management across projects | - | ✓ |
| Backup retention period | 1 year | Unlimited |
| Automated backup schedule | Daily | Hourly, daily, weekly, monthly, yearly |
| Point-in-time recovery using logs | ✓ | ✓ |
| Cross-region backup and restore | ✓ | - |
| On-demand backups | ✓ | ✓ |
| Multi-region backups | ✓ | - |
| Retain all backups on instance deletion | ✓ | ✓ |
| Final backup on instance deletion | ✓ | ✓ |
| CMEK support | ✓ | - |

For more information about these backup options, see Standard backups and Enhanced backups.

Enhanced backups

With enhanced backups, you can use Backup and DR to manage and store all backups for your Cloud SQL instances across various projects in one central backup project. Backup and DR provides centralized management, monitoring, and reporting of day-to-day backup operations in one place. Backups are stored in a backup vault, a Google-managed, secured, and isolated storage resource managed by Backup and DR, and backup plans manage the backup and restore settings. This provides immutable and indelible backups that are independent of the source project. For more information about how backups work with Backup and DR, see Backup and DR overview.

Enhanced backups use Backup and DR to create a centralized backup project where you manage the backup plans and backup vault across your Cloud SQL instances. These plans can be linked across multiple projects.

When you attach a backup plan to a Cloud SQL instance, the existing backup and restore settings are overwritten by the backup plan. The plan containing your backup and restore settings is stored in the centralized backup project, and any backups created when the plan is active on your Cloud SQL instance are stored in the backup vault in the backups project.

Because Backup and DR is managed in a separate Google Cloud project, backups are protected when a source or workload project is deleted. Roles and responsibilities are managed by the Backup and DR Admin and are separate from Cloud SQL Admin roles and responsibilities.

You can retain backups after instance deletion, or take a final backup of your instance prior to deletion. All backups taken as part of enhanced backups can be used to restore an instance while it's live, or after it has been deleted.

Backup retention

You can retain backups in a backup vault for up to 99 years when using enhanced backups. The backup vault enforces a minimum retention period, which can be set between 1 day and 99 years.

Backup storage

Backups are stored in a centralized location called a backup vault. A backup vault is a secured and isolated storage resource managed by Backup and DR. A backup vault lets you retain backups from 1 day to 99 years. For more information, see Backup vaults.

Backup costs

In enhanced backups, the cost for backups is based on the total size of the backups stored in the backup vault. These backups are created based on the backup configuration in the instance's associated backup plan. The total cost is calculated by Backup and DR and is based on Backup and DR pricing.

Limitations

The following limitations apply when using enhanced backups:

  • The backup vault and your Cloud SQL instance must be in the same region.
  • To change an instance's associated backup plan, you must first switch the instance to standard backups by deleting the existing backup plan association, and then associate the new backup plan.
  • You can't create a Disaster Recovery (DR) replica for an instance using enhanced backups.
  • If your instance has a Disaster Recovery (DR) replica, then you can't enable enhanced backups for the instance.
  • You can't associate a backup plan with a replica instance.
  • If your instance is using enhanced backups, then you can't demote the instance to a replica.

Standard backups

Standard backups are backups that Cloud SQL creates, manages, and stores in the same project as your Cloud SQL instance. Cloud SQL backups are incremental and only contain data that changed after the previous backup was taken. By default, Cloud SQL retains 7 automated backups for each Cloud SQL Enterprise edition instance and 15 automated backups for each Cloud SQL Enterprise Plus edition instance, in addition to on-demand backups. You can configure how many automated backups to retain (from 1 to 365).

As part of deleting an instance, you can retain all backups at instance deletion and take a final backup of your data. This allows you to recreate any instances that you delete. However, if you don't retain backups or take a final backup prior to deleting your instance, then Cloud SQL deletes all instance backups automatically.

Backup retention

On-demand backups aren't automatically deleted. They persist until you delete them manually, or until the instance is deleted. Since on-demand backups aren't automatically deleted, they may have long-term effects on your billing charges.

Automated backups can be retained from 1 to 365 days by configuring the retention period in your instance's backup settings. Although transaction log retention is counted in days, automated backups aren't guaranteed to occur exactly once per day.

If you enable backup retention after instance deletion for your on-demand and automated backups, then those backups follow the same retention settings: 1 to 365 days for automated backups and indefinite retention for on-demand backups. For more information, see Retain backups after instance deletion.

Logs are purged once daily, not continuously. When the number of days of log retention is the same as the number of backups, insufficient log retention can result. For example, setting log retention to seven days and backup retention to seven backups means that between six and seven days of logs will be retained.

We recommend setting the number of backups to at least one more than the days of log retention to guarantee a minimum of specified days of log retention.
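As a sketch, that recommendation maps to the following gcloud configuration (flag names are from the gcloud sql reference; "my-instance" is a placeholder):

```shell
# With 7 days of log retention, keep at least 8 automated backups
# to guarantee a full 7 days of point-in-time recovery coverage.
gcloud sql instances patch my-instance \
    --retained-backups-count=8 \
    --retained-transaction-log-days=7
```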

For more information on how to enable retained backups for your new or existing instances, see Manage retained backups. For more information on how to restore an instance from a retained backup, see Restore from a retained backup.

Backup storage

In a single-region configuration, backups are replicated across the different zones within the region. In a multi-region configuration, it is recommended that backups be in the same region as the instance to minimize latency and avoid potential backup failures due to organization policies, or location-based limitations.

Backups are stored in the same location for instances in both High Availability (HA) and non-HA configurations. In HA configurations, you can still access your instance's backups in the event of a failover or switchover to the secondary instance.

You can define your backup locations as follows:

  • Default locations that Cloud SQL selects, based on the location of the original instance.
  • Custom locations that you choose when you do not want to use the default location.

Default backup locations

If you do not specify a storage location, your backups are stored in the multi-region that is geographically closest to your Cloud SQL instance. For example, if your Cloud SQL instance is in us-central1, your backups are stored in the us multi-region by default. However, a location like australia-southeast1 is outside of a multi-region; its closest multi-region is asia.

Custom backup locations

Cloud SQL lets you select a custom location for your backup data. This is useful if your organization needs to comply with data residency regulations that require you to keep your backups within a specific geographic boundary. If your organization has this type of requirement, it probably uses a Resource Location Restriction organizational policy. With this policy, when you try to use a geographic location that does not comply with the policy, you see an alert on the Backups page. If you see this alert, you need to change the backup location to a location the policy allows.
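As a sketch, a custom backup location can be set with the gcloud CLI (the `--backup-location` flag is from the gcloud sql reference and accepts a region or multi-region; the names below are placeholders):

```shell
# Store backups in a specific region to meet a data residency requirement.
gcloud sql instances patch my-instance \
    --backup-location=asia-southeast1
```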

When selecting a custom location for a backup, consider the following:

  • Cost: the backup region you choose might have lower storage costs than others.
  • Proximity to your application server: you might want to store the backup as close to your serving application as possible.
  • Storage utilization: you need enough storage space to keep your backup as it grows in size. Depending on your workload, your instances might differ in size or disk usage, which can factor into the location you choose.

For a complete list of valid regional values, see Instance Locations. For a complete list of multi-regional values, see Multi-regional locations.

For more information about setting locations for backups and seeing the locations of backups taken for an instance, see Set a custom location for backups and View backup locations.

Backup rate limitations

Cloud SQL limits the rate for backup operations on the data disk. You are allowed a maximum of five backup operations every 50 minutes per instance per project. If a backup operation fails, it does not count towards this quota. If you reach the limit, the operation fails with an error message that tells you when you can retry.

Let's take a look at how Cloud SQL performs rate limiting for backups.

Cloud SQL uses tokens from a bucket to determine how many backup operations are available at any one time. Each instance has its own bucket, which holds a maximum of five tokens for backup operations. Every 10 minutes, a new token is added to the bucket; if the bucket is already full, the new token is discarded.

Each time you issue a backup operation, a token is taken from the bucket. If the operation succeeds, the token is consumed. If the operation fails, the token is returned to the bucket. The following diagram shows how this works:

How tokens work
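The token-bucket behavior can be sketched in a few lines of shell. This is an illustration of the concept only, not a Cloud SQL interface:

```shell
# Simulate the backup token bucket: 5-token capacity, one token per
# backup attempt, one token refilled every 10 minutes.
capacity=5
tokens=$capacity

start_backup() {
  if [ "$tokens" -gt 0 ]; then
    tokens=$((tokens - 1))
    echo "backup started (tokens left: $tokens)"
  else
    echo "rate limited: retry after the next refill"
    return 1
  fi
}

refill() {
  # A token added to a full bucket is discarded.
  if [ "$tokens" -lt "$capacity" ]; then
    tokens=$((tokens + 1))
  fi
}

# Five attempts succeed; the sixth is rate limited.
for i in 1 2 3 4 5 6; do start_backup; done

# After one refill (10 minutes later), a backup succeeds again.
refill
start_backup
```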

Backups versus exports

Backups are managed by Cloud SQL according to retention policies and are stored separately from the Cloud SQL instance. Cloud SQL backups differ from exports uploaded to Cloud Storage, where you manage the lifecycle yourself. Backups encompass the entire disk of the instance, whereas exports can select specific contents.

Backup and restore operations can't be used to upgrade a database to a later version. You can only restore from a backup to an instance with the same database version as when the backup was taken.

To upgrade to a later version, consider using the Database Migration Service or exporting and then importing your database to a new Cloud SQL instance.

Backup costs

By default, Cloud SQL retains 7 automated backups for each Cloud SQL Enterprise edition instance and 15 automated backups for each Cloud SQL Enterprise Plus edition instance, in addition to on-demand backups. You can configure how many automated backups to retain (from 1 to 365). We charge a lower rate for backup storage than for other types of storage.

For more information about pricing related to backups, see the pricing page.

Backup size

All Cloud SQL backups, except the first one, are incremental: they contain only data that changed after the previous backup was taken. Your oldest backup is similar in size to your database, but the sizes of subsequent backups depend on the rate of change of your data. When the oldest backup is deleted, the next oldest backup becomes a full backup and its size is adjusted to capture the difference between the backups. Each subsequent incremental backup is also updated to match the new full backup.

You can check the size of an individual backup. The backup size represents the billable size for each backup.

Troubleshooting

Issue: You can't see the current operation's status.

Troubleshooting: The Google Cloud console reports only success or failure when the operation is done. It isn't designed to show warnings or other updates. Run the gcloud sql operations list command to list all operations for the given Cloud SQL instance.

Issue: You want to find out who issued an on-demand backup operation.

Troubleshooting: The user interface doesn't show the user who started an operation. Look in the logs and filter by text to find the user. You may need to use audit logs for private information. Relevant log files include:

  • cloudsql.googleapis.com/postgres.log
  • If Cloud Audit Logs is enabled and you have the required permissions to view them, cloudaudit.googleapis.com/activity may also be available.

Issue: After an instance is deleted, you can't take a backup of the instance.

Troubleshooting: If you delete an instance without taking a final backup of the data, then no data recovery is possible. However, if you restore the instance, then Cloud SQL also restores the backups. For more information on recovering a deleted instance, see Recovery backups. If you have done an export operation, create a new instance and then do an import operation to recreate the database. Exports are written to Cloud Storage and imports are read from there.

Issue: An automated backup is stuck for many hours and can't be canceled.

Troubleshooting: Backups can take a long time, depending on the database size. If you really need to cancel the operation, you can ask customer support to force restart the instance.

Issue: A restore operation can fail when one or more users referenced in the SQL dump file don't exist.

Troubleshooting: Before restoring a SQL dump, all the database users who own objects or were granted permissions on objects in the dumped database must exist in the target database. If they don't, the restore operation fails to recreate the objects with the original ownership or permissions. Create the database users before restoring the SQL dump.

Issue: You want to increase the number of days that you can keep automatic backups from seven to 30 days, or longer.

Troubleshooting: You can configure the number of automated backups to retain, from 1 to 365. Automated backups are pruned regularly based on the configured retention value, so the currently visible backups are the only automated backups you can restore from. To keep backups indefinitely, create an on-demand backup; on-demand backups remain until they're deleted or until the instance they belong to is deleted. Because on-demand backups aren't deleted automatically, they can affect billing.

Issue: An automated backup failed and you didn't receive an email notification.

Troubleshooting: To have Cloud SQL notify you of the backup's status, configure a log-based alert.

Issue: An instance is repeatedly failing because it is cycling between the failure and backup restore states. Attempts to connect to and use the database following a restore fail.

Troubleshooting:

  • There could be too many open connections. Too many connections can result from errors that occur in the middle of a connection where there are no autovacuum settings to clean up dead connections.
  • Cycling can occur if any custom code is using retry logic that doesn't stop after a few failures.
  • There could be too much traffic. Use connection pooling and other best practices for connectivity.

Things to try:

  1. Verify that the database is set up for autovacuum.
  2. Check if there is any connection retry logic set up in custom code.
  3. Turn down traffic until the database recovers, and then slowly turn traffic back up.

Issue: You find you are missing data when performing a backup and restore operation.

Troubleshooting: Tables were created as unlogged. For example:

CREATE UNLOGGED TABLE ....

These tables are not included in a restore from a backup:

  • The contents of unlogged tables don't survive failover on an HA instance.
  • Unlogged tables don't survive PostgreSQL crashes.
  • Unlogged tables are not replicated to read replicas.
  • Unlogged tables are automatically wiped during backup restore.

The solution is to avoid using unlogged tables if you want to restore those tables through a backup. If you're restoring from a database that already has unlogged tables, you can dump the database to a file, modify the dump to run ALTER TABLE ... SET LOGGED on those tables, and then reload the data.
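As a sketch, unlogged tables can be found and converted directly with psql. The database name "mydb" and table name "events" are placeholders; `SET LOGGED` and the pg_class catalog columns are standard PostgreSQL:

```shell
# List unlogged ordinary tables (relpersistence = 'u' marks unlogged,
# relkind = 'r' restricts the query to ordinary tables).
psql -d mydb -c "SELECT relname FROM pg_class WHERE relpersistence = 'u' AND relkind = 'r';"

# Convert a table to logged so its contents survive backup and restore.
psql -d mydb -c "ALTER TABLE events SET LOGGED;"
```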

What's next