This chapter describes the syntax for the SQL statements supported by MySQL.
MySQL 8.0 supports atomic Data Definition Language (DDL) statements. This feature is referred to as atomic DDL. An atomic DDL statement combines the data dictionary updates, storage engine operations, and binary log writes associated with a DDL operation into a single, atomic operation. The operation is either committed, with applicable changes persisted to the data dictionary, storage engine, and binary log, or is rolled back, even if the server halts during the operation.
Atomic DDL is not transactional
DDL. DDL statements, atomic or otherwise, implicitly
end any transaction that is active in the current session, as if
you had done a COMMIT before
executing the statement. This means that DDL statements cannot
be performed within another transaction, within transaction
control statements such as
START TRANSACTION ...
COMMIT, or combined with other statements within the
same transaction.
Atomic DDL is made possible by the introduction of the MySQL data dictionary in MySQL 8.0. In earlier MySQL versions, metadata was stored in metadata files, nontransactional tables, and storage engine-specific dictionaries, which necessitated intermediate commits. Centralized, transactional metadata storage provided by the MySQL data dictionary removed this barrier, making it possible to restructure DDL statement operations to be atomic.
The atomic DDL feature is described under the following topics in this section:
The atomic DDL feature supports both table and non-table DDL
statements. Table-related DDL operations require storage engine
support, whereas non-table DDL operations do not. Currently,
only the InnoDB storage engine supports
atomic DDL.
Supported table DDL statements include
CREATE, ALTER, and
DROP statements for databases,
tablespaces, tables, and indexes, and the
TRUNCATE TABLE statement.
Supported non-table DDL statements include:
The following statements are not supported by the atomic DDL feature:
Table-related DDL statements that involve a storage engine
other than InnoDB.
INSTALL PLUGIN and
UNINSTALL PLUGIN
statements.
INSTALL COMPONENT and
UNINSTALL COMPONENT
statements.
CREATE SERVER,
ALTER SERVER, and
DROP SERVER statements.
The characteristics of atomic DDL statements include the following:
Metadata updates, binary log writes, and storage engine operations, where applicable, are combined into a single atomic operation.
There are no intermediate commits at the SQL layer during the DDL operation.
Where applicable:
The state of data dictionary, routine, event, and UDF caches is consistent with the status of the DDL operation, meaning that caches are updated to reflect whether the DDL operation completed successfully or was rolled back.
The storage engine methods involved in a DDL operation do not perform intermediate commits, and the storage engine registers itself as part of the DDL operation.
The storage engine supports redo and rollback of DDL operations, which is performed in the Post-DDL phase of the DDL operation.
The visible behavior of DDL operations is atomic, which changes the behavior of some DDL statements. See Changes in DDL Statement Behavior.
This section describes changes in DDL statement behavior due to the introduction of atomic DDL support.
DROP TABLE operations are
fully atomic if all named tables use an atomic DDL-supported
storage engine. The statement either drops all tables
successfully or is rolled back.
DROP TABLE fails with an
error if a named table does not exist, and no changes are
made, regardless of the storage engine. This change in
behavior is demonstrated in the following example, where the
DROP TABLE statement fails
because a named table does not exist:
mysql> CREATE TABLE t1 (c1 INT);
mysql> DROP TABLE t1, t2;
ERROR 1051 (42S02): Unknown table 'test.t2'
mysql> SHOW TABLES;
+----------------+
| Tables_in_test |
+----------------+
| t1             |
+----------------+
Prior to the introduction of atomic DDL,
DROP TABLE reports an error
for the named table that does not exist but succeeds for the
named table that does exist:
mysql> CREATE TABLE t1 (c1 INT);
mysql> DROP TABLE t1, t2;
ERROR 1051 (42S02): Unknown table 'test.t2'
mysql> SHOW TABLES;
Empty set (0.00 sec)
Due to this change in behavior, a partially completed
DROP TABLE statement on a
MySQL 5.7 master fails when replicated on a MySQL 8.0
slave. To avoid this failure scenario, use IF
EXISTS syntax in DROP
TABLE statements to prevent errors from
occurring for tables that do not exist.
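For example, a statement such as the following (the table names are only illustrative) drops any of the named tables that exist and generates a note rather than an error for those that do not:

DROP TABLE IF EXISTS t1, t2;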
DROP DATABASE is atomic if
all tables use an atomic DDL-supported storage engine. The
statement either drops all objects successfully or is rolled
back. However, removal of the database directory from the
file system occurs last and is not part of the atomic
operation. If removal of the database directory fails due to
a file system error or server halt, the
DROP DATABASE transaction is
not rolled back.
For tables that do not use an atomic DDL-supported storage
engine, table deletion occurs outside of the atomic
DROP TABLE or
DROP DATABASE transaction.
Such table deletions are written to the binary log
individually, which limits the discrepancy between the
storage engine, data dictionary, and binary log to one table
at most in the case of an interrupted
DROP TABLE or
DROP DATABASE operation. For
operations that drop multiple tables, the tables that do not
use an atomic DDL-supported storage engine are dropped
before tables that do.
CREATE TABLE,
ALTER TABLE,
RENAME TABLE,
TRUNCATE TABLE,
CREATE TABLESPACE, and
DROP TABLESPACE operations
for tables that use an atomic DDL-supported storage engine
are either fully committed or rolled back if the server
halts during their operation. In earlier MySQL releases,
interruption of these operations could cause discrepancies
between the storage engine, data dictionary, and binary log,
or leave behind orphan files. RENAME
TABLE operations are only atomic if all named
tables use an atomic DDL-supported storage engine.
DROP VIEW fails if a named
view does not exist, and no changes are made. The change in
behavior is demonstrated in this example, where the
DROP VIEW statement fails
because a named view does not exist:
mysql> CREATE VIEW test.viewA AS SELECT * FROM t;
mysql> DROP VIEW test.viewA, test.viewB;
ERROR 1051 (42S02): Unknown table 'test.viewB'
mysql> SHOW FULL TABLES IN test WHERE TABLE_TYPE LIKE 'VIEW';
+----------------+------------+
| Tables_in_test | Table_type |
+----------------+------------+
| viewA          | VIEW       |
+----------------+------------+
Prior to the introduction of atomic DDL,
DROP VIEW returns an error
for the named view that does not exist but succeeds for the
named view that does exist:
mysql> CREATE VIEW test.viewA AS SELECT * FROM t;
mysql> DROP VIEW test.viewA, test.viewB;
ERROR 1051 (42S02): Unknown table 'test.viewB'
mysql> SHOW FULL TABLES IN test WHERE TABLE_TYPE LIKE 'VIEW';
Empty set (0.00 sec)
Partial execution of account management statements is no longer permitted. Account management statements either succeed for all named users or roll back and have no effect if an error occurs. In earlier MySQL versions, account management statements that name multiple users could succeed for some users and fail for others.
The change in behavior is demonstrated in this example, where the second CREATE USER statement returns an error and fails because it cannot succeed for all named users:
mysql> CREATE USER userA;
mysql> CREATE USER userA, userB;
ERROR 1396 (HY000): Operation CREATE USER failed for 'userA'@'%'
mysql> SELECT User FROM mysql.user WHERE User LIKE 'user%';
+-------+
| User  |
+-------+
| userA |
+-------+
Prior to the introduction of atomic DDL, the second CREATE USER statement returns an error for the named user that already exists but succeeds for the named user that does not exist:
mysql> CREATE USER userA;
mysql> CREATE USER userA, userB;
ERROR 1396 (HY000): Operation CREATE USER failed for 'userA'@'%'
mysql> SELECT User FROM mysql.user WHERE User LIKE 'user%';
+-------+
| User  |
+-------+
| userA |
| userB |
+-------+
Due to this change in behavior, partially completed
account management statements on a MySQL 5.7 master fail
when replicated on a MySQL 8.0 slave. To avoid this
failure scenario, use IF EXISTS or
IF NOT EXISTS syntax, as appropriate,
in account management statements to prevent errors related
to named users.
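For example, a statement such as the following (the user names are only illustrative) creates any of the named accounts that do not already exist and generates a warning rather than an error for those that do:

CREATE USER IF NOT EXISTS userA, userB;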
Currently, only the InnoDB storage engine
supports atomic DDL. Storage engines that do not support atomic
DDL are exempted from DDL atomicity. DDL operations involving
exempted storage engines remain capable of introducing
inconsistencies that can occur when operations are interrupted
or only partially completed.
To support redo and rollback of DDL operations,
InnoDB writes DDL logs to the
mysql.innodb_ddl_log table, which is a hidden
data dictionary table that resides in the
mysql.ibd data dictionary tablespace.
To view DDL logs that are written to the
mysql.innodb_ddl_log table during a DDL
operation, enable the
innodb_print_ddl_logs
configuration option. For more information, see
Viewing DDL Logs.
The redo logs for changes to the
mysql.innodb_ddl_log table are flushed to
disk immediately regardless of the
innodb_flush_log_at_trx_commit
setting. Flushing the redo logs immediately avoids situations
where data files are modified by DDL operations but the redo
logs for changes to the
mysql.innodb_ddl_log table resulting from
those operations are not persisted to disk. Such a situation
could cause errors during rollback or recovery.
The InnoDB storage engine executes DDL
operations in phases. DDL operations such as
ALTER TABLE may perform the
Prepare and Perform
phases multiple times prior to the Commit
phase.
Prepare: Create the required objects
and write the DDL logs to the
mysql.innodb_ddl_log table. The DDL logs
define how to roll forward and roll back the DDL operation.
Perform: Perform the DDL operation. For
example, perform a create routine for a CREATE
TABLE operation.
Commit: Update the data dictionary and commit the data dictionary transaction.
Post-DDL: Replay and remove DDL logs
from the mysql.innodb_ddl_log table. To
ensure that rollback can be performed safely without
introducing inconsistencies, file operations such as
renaming or removing data files are performed in this final
phase. This phase also removes dynamic metadata from the
mysql.innodb_dynamic_metadata data
dictionary table for DROP
TABLE, TRUNCATE
TABLE, and other DDL operations that rebuild the
table.
DDL logs are replayed and removed from the
mysql.innodb_ddl_log table during the
Post-DDL phase, regardless of whether the
DDL operation is committed or rolled back. DDL logs should only
remain in the mysql.innodb_ddl_log table if
the server is halted during a DDL operation. In this case, the
DDL logs are replayed and removed after recovery.
In a recovery situation, a DDL operation may be committed or
rolled back when the server is restarted. If the data dictionary
transaction that was performed during the
Commit phase of a DDL operation is present
in the redo log and binary log, the operation is considered
successful and is rolled forward. Otherwise, the incomplete data
dictionary transaction is rolled back when
InnoDB replays data dictionary redo logs, and
the DDL operation is rolled back.
To view DDL logs that are written to the
mysql.innodb_ddl_log data dictionary table
during atomic DDL operations that involve the
InnoDB storage engine, enable
innodb_print_ddl_logs to have
MySQL write the DDL logs to stderr. Depending
on the host operating system and MySQL configuration,
stderr may be the error log, terminal, or
console window. See
Section 5.4.2.2, “Default Error Log Destination Configuration”.
InnoDB writes DDL logs to the
mysql.innodb_ddl_log table to support redo
and rollback of DDL operations. The
mysql.innodb_ddl_log table is a hidden data
dictionary table that resides in the
mysql.ibd data dictionary tablespace. Like
other hidden data dictionary tables, the
mysql.innodb_ddl_log table cannot be accessed
directly in non-debug versions of MySQL. (See
Section 14.1, “Data Dictionary Schema”.) The structure of the
mysql.innodb_ddl_log table corresponds to
this definition:
CREATE TABLE mysql.innodb_ddl_log (
  id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  thread_id BIGINT UNSIGNED NOT NULL,
  type INT UNSIGNED NOT NULL,
  space_id INT UNSIGNED,
  page_no INT UNSIGNED,
  index_id BIGINT UNSIGNED,
  table_id BIGINT UNSIGNED,
  old_file_path VARCHAR(512) COLLATE UTF8_BIN,
  new_file_path VARCHAR(512) COLLATE UTF8_BIN,
  KEY(thread_id)
);
id: A unique identifier for a DDL log
record.
thread_id: Each DDL log record is
assigned a thread_id, which is used to
replay and remove DDL logs that belong to a particular DDL
operation. DDL operations that involve multiple data file
operations generate multiple DDL log records.
type: The DDL operation type. Types
include FREE (drop an index tree),
DELETE (delete a file),
RENAME (rename a file), or
DROP (drop metadata from the
mysql.innodb_dynamic_metadata data
dictionary table).
space_id: The tablespace ID.
page_no: A page that contains allocation
information; an index tree root page, for example.
index_id: The index ID.
table_id: The table ID.
old_file_path: The old tablespace file
path. Used by DDL operations that create or drop tablespace
files; also used by DDL operations that rename a tablespace.
new_file_path: The new tablespace file
path. Used by DDL operations that rename tablespace files.
This example demonstrates enabling
innodb_print_ddl_logs to view
DDL logs written to stderr for a
CREATE TABLE operation.
mysql> SET GLOBAL innodb_print_ddl_logs=1;
mysql> CREATE TABLE t1 (c1 INT) ENGINE = InnoDB;
[Note] [000000] InnoDB: DDL log insert : [DDL record: DELETE SPACE, id=18, thread_id=7, space_id=5, old_file_path=./test/t1.ibd]
[Note] [000000] InnoDB: DDL log delete : by id 18
[Note] [000000] InnoDB: DDL log insert : [DDL record: REMOVE CACHE, id=19, thread_id=7, table_id=1058, new_file_path=test/t1]
[Note] [000000] InnoDB: DDL log delete : by id 19
[Note] [000000] InnoDB: DDL log insert : [DDL record: FREE, id=20, thread_id=7, space_id=5, index_id=132, page_no=4]
[Note] [000000] InnoDB: DDL log delete : by id 20
[Note] [000000] InnoDB: DDL log post ddl : begin for thread id : 7
[Note] [000000] InnoDB: DDL log post ddl : end for thread id : 7
ALTER {DATABASE | SCHEMA} [db_name]
alter_option ...
alter_option: {
[DEFAULT] CHARACTER SET [=] charset_name
| [DEFAULT] COLLATE [=] collation_name
| [DEFAULT] ENCRYPTION [=] {'Y' | 'N'}
| READ ONLY [=] {DEFAULT | 0 | 1}
}
ALTER DATABASE enables you to
change the overall characteristics of a database. These
characteristics are stored in the data dictionary. This statement
requires the ALTER privilege on the
database. ALTER
SCHEMA is a synonym for ALTER
DATABASE.
If the database name is omitted, the statement applies to the default database. In that case, an error occurs if there is no default database.
For any alter_option omitted from the
statement, the database retains its current option value, with the
exception that changing the character set may change the collation
and vice versa.
The CHARACTER SET option changes the default
database character set. The COLLATE option
changes the default database collation. For information about
character set and collation names, see Chapter 10, Character Sets, Collations, Unicode.
To see the available character sets and collations, use the
SHOW CHARACTER SET and
SHOW COLLATION statements,
respectively. See Section 13.7.7.3, “SHOW CHARACTER SET Statement”, and
Section 13.7.7.4, “SHOW COLLATION Statement”.
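For example, a statement such as this one (mydb is only an illustrative database name) changes both the default character set and the default collation at once:

ALTER DATABASE mydb CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;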
A stored routine that uses the database defaults when the routine is created includes those defaults as part of its definition. (In a stored routine, variables with character data types use the database defaults if the character set or collation are not specified explicitly. See Section 13.1.17, “CREATE PROCEDURE and CREATE FUNCTION Statements”.) If you change the default character set or collation for a database, any stored routines that are to use the new defaults must be dropped and recreated.
The ENCRYPTION option, introduced in MySQL
8.0.16, defines the default database encryption, which is
inherited by tables created in the database. The permitted values
are 'Y' (encryption enabled) and
'N' (encryption disabled). Only newly created
tables inherit the default database encryption. For existing
tables associated with the database, their encryption remains
unchanged. If the
table_encryption_privilege_check
system variable is enabled, the
TABLE_ENCRYPTION_ADMIN privilege is
required to specify a default encryption setting that differs from
the value of the
default_table_encryption system
variable. For more information, see
Defining an Encryption Default for Schemas and General Tablespaces.
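For example, assuming that a keyring plugin is installed and configured, a statement such as the following enables default encryption for new tables created in the (illustrative) database mydb:

ALTER DATABASE mydb ENCRYPTION = 'Y';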
The READ ONLY option, introduced in MySQL
8.0.22, controls whether to permit modification of the database
and objects within it. The permitted values are
DEFAULT or 0 (not read only)
and 1 (read only). This option is useful for
database migration because a database for which READ
ONLY is enabled can be migrated to another MySQL
instance without concern that the database might be changed during
the operation.
With NDB Cluster, making a database read only on one mysqld server is synchronized to other mysqld servers in the same cluster, so that the database becomes read only on all mysqld servers.
The READ ONLY option, if enabled, is displayed
in the INFORMATION_SCHEMA
SCHEMATA_EXTENSIONS table. See
Section 25.31, “The INFORMATION_SCHEMA SCHEMATA_EXTENSIONS Table”.
The READ ONLY option cannot be enabled for
these system schemas: mysql,
information_schema,
performance_schema.
In ALTER DATABASE statements, the
READ ONLY option interacts with other instances
of itself and with other options as follows:
An error occurs if multiple instances of READ
ONLY conflict (for example, READ ONLY = 1
READ ONLY = 0).
An ALTER DATABASE statement
that contains only (nonconflicting) READ
ONLY options is permitted even for a read-only
database.
A mix of (nonconflicting) READ ONLY options
with other options is permitted if the read-only state of the
database either before or after the statement permits
modifications. If the read-only state both before and after
prohibits changes, an error occurs.
This statement succeeds whether or not the database is read only:
ALTER DATABASE mydb READ ONLY = 0 DEFAULT COLLATE utf8mb4_bin;
This statement succeeds if the database is not read only, but fails if it is already read only:
ALTER DATABASE mydb READ ONLY = 1 DEFAULT COLLATE utf8mb4_bin;
Enabling READ ONLY affects all users of the
database, with these exceptions that are not subject to read-only
checks:
Statements executed by the server as part of server initialization, restart, upgrade, or replication.
Statements in a file named at server startup by the
init_file system variable.
TEMPORARY tables; it is possible to create,
alter, drop, and write to TEMPORARY tables
in a read-only database.
NDB Cluster non-SQL inserts and updates.
Other than for the excepted operations just listed, enabling
READ ONLY prohibits write operations to the
database and its objects, including their definitions, data, and
metadata. The following list details affected SQL statements and
operations:
The database itself:
ALTER DATABASE (except to
change the READ ONLY option)
Views:
Selecting from views that invoke functions with side effects.
Updating updatable views.
Statements that create or drop objects in a writable database are rejected if they affect metadata of a view in a read-only database (for example, by making the view valid or invalid).
Stored routines:
CALL (of procedures with
side effects)
SELECT (of functions with
side effects)
For procedures and functions, read-only checks follow
prelocking behavior. For
CALL statements, read-only
checks are done on a per-statement basis, so if some
conditionally executed statement writing to a read-only
database does not actually execute, the call still
succeeds. On the other hand, for a function called within
a SELECT, execution of the
function body happens in prelocked mode. If some
statement within the function writes to a read-only
database, execution of the function fails with an error,
regardless of whether the statement actually executes.
Triggers:
Trigger invocation.
Events:
Event execution:
Executing an event in the database fails because that would change the last-execution timestamp, which is event metadata stored in the data dictionary. Failure of event execution also has the effect of causing the event scheduler to stop.
If an event writes to an object in a read-only database, execution of the event fails with an error, but the event scheduler is not stopped.
Tables:
For cascading foreign keys where the child table is in a read-only database, updates and deletes on the parent are rejected even if the child table is not directly affected.
For a MERGE table such as
CREATE TABLE s1.t(i int) ENGINE MERGE UNION
(s2.t, s3.t), INSERT_METHOD=..., the following
behavior applies:
Inserting into the MERGE table
(INSERT into s1.t) fails if at
least one of s1,
s2, s3 is read
only, regardless of insert method. The insert is
refused even if it would actually end up in a writable
table.
Dropping the MERGE table
(DROP TABLE s1.t) succeeds as long
as s1 is not read only. It is
permitted to drop a MERGE table
that refers to a read-only database.
An ALTER DATABASE statement blocks
until all concurrent transactions that have already accessed an
object in the database being altered have committed. Conversely, a
write transaction accessing an object in a database being altered
in a concurrent ALTER DATABASE
blocks until the ALTER DATABASE has
committed.
If the Clone plugin is used to clone a local or remote data
directory, the databases in the clone retain the read-only state
they had in the source data directory. The read-only state does
not affect the cloning process itself. If it is not desirable to
have the same database read-only state in the clone, the option
must be changed explicitly for the clone after the cloning process
has finished, using ALTER DATABASE
operations on the clone.
When cloning from a donor to a recipient, if the recipient has a user database that is read only, cloning fails with an error message. Cloning may be retried after making the database writable.
READ ONLY is permitted for
ALTER DATABASE, but not for
CREATE DATABASE. However, for a
read-only database, the statement produced by
SHOW CREATE DATABASE does include
READ ONLY=1 within a comment to indicate its
read-only status:
mysql> ALTER DATABASE mydb READ ONLY = 1;
mysql> SHOW CREATE DATABASE mydb\G
*************************** 1. row ***************************
       Database: mydb
Create Database: CREATE DATABASE `mydb` /*!40100 DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci */ /*!80016 DEFAULT ENCRYPTION='N' */ /* READ ONLY = 1 */
If the server executes a CREATE
DATABASE statement containing such a comment, the server
ignores the comment and the READ ONLY option is
not processed. This has implications for
mysqldump and mysqlpump,
which use SHOW CREATE DATABASE to
produce CREATE DATABASE statements
in dump output:
In a dump file, the CREATE
DATABASE statement for a read-only database contains
the commented READ ONLY option.
The dump file can be restored as usual, but because the server
ignores the commented READ ONLY option, the
restored database is not read only. If
the database is to be read only after being restored, you must
execute ALTER DATABASE manually
to make it so.
Suppose that mydb is read only and you dump it
as follows:
shell> mysqldump --databases mydb > mydb.sql
A restore operation later must be followed by
ALTER DATABASE if
mydb should still be read only:
shell> mysql
mysql> SOURCE mydb.sql;
mysql> ALTER DATABASE mydb READ ONLY = 1;
MySQL Enterprise Backup is not subject to this issue. It backs up and restores a
read-only database like any other, but enables the READ
ONLY option at restore time if it was enabled at backup
time.
ALTER DATABASE is written to the
binary log, so a change to the READ ONLY option
on a replication master also affects replication slaves. To
prevent this from happening, binary logging must be disabled prior
to execution of the ALTER DATABASE
statement. For example, to prepare for migrating a database
without affecting replication slaves, perform these operations:
Within a single session, disable binary logging and enable
READ ONLY for the database:
mysql> SET sql_log_bin = OFF;
mysql> ALTER DATABASE mydb READ ONLY = 1;
Dump the database, for example, with mysqldump or mysqlpump:
shell> mysqldump --databases mydb > mydb.sql
Within a single session, disable binary logging and disable
READ ONLY for the database:
mysql> SET sql_log_bin = OFF;
mysql> ALTER DATABASE mydb READ ONLY = 0;
ALTER
[DEFINER = user]
EVENT event_name
[ON SCHEDULE schedule]
[ON COMPLETION [NOT] PRESERVE]
[RENAME TO new_event_name]
[ENABLE | DISABLE | DISABLE ON SLAVE]
[COMMENT 'string']
[DO event_body]
The ALTER EVENT statement changes
one or more of the characteristics of an existing event without
the need to drop and recreate it. The syntax for each of the
DEFINER, ON SCHEDULE,
ON COMPLETION, COMMENT,
ENABLE / DISABLE, and
DO clauses is exactly the same as
when used with CREATE EVENT. (See
Section 13.1.13, “CREATE EVENT Statement”.)
Any user can alter an event defined on a database for which that
user has the EVENT privilege. When
a user executes a successful ALTER
EVENT statement, that user becomes the definer for the
affected event.
ALTER EVENT works only with an
existing event:
mysql> ALTER EVENT no_such_event
    ->   ON SCHEDULE
    ->     EVERY '2:3' DAY_HOUR;
ERROR 1517 (HY000): Unknown event 'no_such_event'
In each of the following examples, assume that the event named
myevent is defined as shown here:
CREATE EVENT myevent
ON SCHEDULE
EVERY 6 HOUR
COMMENT 'A sample comment.'
DO
UPDATE myschema.mytable SET mycol = mycol + 1;
The following statement changes the schedule for
myevent from once every six hours starting
immediately to once every twelve hours, starting four hours from
the time the statement is run:
ALTER EVENT myevent
ON SCHEDULE
EVERY 12 HOUR
STARTS CURRENT_TIMESTAMP + INTERVAL 4 HOUR;
It is possible to change multiple characteristics of an event in a
single statement. This example changes the SQL statement executed
by myevent to one that deletes all records from
mytable; it also changes the schedule for the
event such that it executes once, one day after this
ALTER EVENT statement is run.
ALTER EVENT myevent
ON SCHEDULE
AT CURRENT_TIMESTAMP + INTERVAL 1 DAY
DO
TRUNCATE TABLE myschema.mytable;
Specify the options in an ALTER
EVENT statement only for those characteristics that you
want to change; omitted options keep their existing values. This
includes any default values for CREATE
EVENT such as ENABLE.
To disable myevent, use this
ALTER EVENT statement:
ALTER EVENT myevent
DISABLE;
The ON SCHEDULE clause may use expressions
involving built-in MySQL functions and user variables to obtain
any of the timestamp or
interval values which it contains. You
cannot use stored routines or user-defined functions in such
expressions, and you cannot use any table references; however, you
can use SELECT FROM DUAL. This is true for both
ALTER EVENT and
CREATE EVENT statements. References
to stored routines, user-defined functions, and tables in such
cases are specifically not permitted, and fail with an error (see
Bug #22830).
Although an ALTER EVENT statement
that contains another ALTER EVENT
statement in its DO clause appears
to succeed, when the server attempts to execute the resulting
scheduled event, the execution fails with an error.
To rename an event, use the ALTER
EVENT statement's RENAME TO clause.
This statement renames the event myevent to
yourevent:
ALTER EVENT myevent
RENAME TO yourevent;
You can also move an event to a different database using
ALTER EVENT ... RENAME TO ... and
db_name.event_name notation, as shown here:
ALTER EVENT olddb.myevent
RENAME TO newdb.myevent;
To execute the previous statement, the user executing it must have
the EVENT privilege on both the
olddb and newdb databases.
There is no RENAME EVENT statement.
The value DISABLE ON SLAVE is used on a
replication slave instead of ENABLE or
DISABLE to indicate an event that was created
on the master and replicated to the slave, but that is not
executed on the slave. Normally, DISABLE ON
SLAVE is set automatically as required; however, there
are some circumstances under which you may want or need to change
it manually. See Section 17.5.1.16, “Replication of Invoked Features”,
for more information.
ALTER FUNCTION func_name [characteristic ...]

characteristic: {
    COMMENT 'string'
  | LANGUAGE SQL
  | { CONTAINS SQL | NO SQL | READS SQL DATA | MODIFIES SQL DATA }
  | SQL SECURITY { DEFINER | INVOKER }
}
This statement can be used to change the characteristics of a
stored function. More than one change may be specified in an
ALTER FUNCTION statement. However,
you cannot change the parameters or body of a stored function
using this statement; to make such changes, you must drop and
re-create the function using DROP
FUNCTION and CREATE
FUNCTION.
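For example, the following statement (the function name is only illustrative) changes two characteristics of an existing stored function in a single statement:

ALTER FUNCTION func1
    COMMENT 'Returns the order total'
    SQL SECURITY INVOKER;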
You must have the ALTER ROUTINE
privilege for the function. (That privilege is granted
automatically to the function creator.) If binary logging is
enabled, the ALTER FUNCTION
statement might also require the
SUPER privilege, as described in
Section 24.7, “Stored Program Binary Logging”.
ALTER INSTANCE instance_action

instance_action: {
    ROTATE INNODB MASTER KEY
  | ROTATE BINLOG MASTER KEY
  | RELOAD TLS
      [FOR CHANNEL {mysql_main | mysql_admin}]
      [NO ROLLBACK ON ERROR]
}
ALTER INSTANCE defines actions applicable to a
MySQL server instance. The statement supports these actions:
ALTER INSTANCE ROTATE INNODB MASTER KEY
This action rotates the master encryption key used for
InnoDB tablespace encryption. Key rotation
requires the
ENCRYPTION_KEY_ADMIN or
SUPER privilege. To perform
this action, a keyring plugin must be installed and
configured. For instructions, see Section 6.4.4, “The MySQL Keyring”.
ALTER INSTANCE ROTATE INNODB MASTER KEY
supports concurrent DML. However, it cannot be run
concurrently with
CREATE TABLE ...
ENCRYPTION or
ALTER TABLE ...
ENCRYPTION operations, and locks are taken to
prevent conflicts that could arise from concurrent execution
of these statements. If one of the conflicting statements is
running, it must complete before another can proceed.
ALTER INSTANCE ROTATE INNODB MASTER KEY
statements are written to the binary log so that they can be
executed on replicated servers.
For additional ALTER INSTANCE ROTATE INNODB MASTER
KEY usage information, see
Section 15.13, “InnoDB Data-at-Rest Encryption”.
ALTER INSTANCE ROTATE BINLOG MASTER KEY
This action rotates the binary log master key used for binary
log encryption. Key rotation for the binary log master key
requires the
BINLOG_ENCRYPTION_ADMIN or
SUPER privilege. The statement
cannot be used if the
binlog_encryption system
variable is set to OFF. To perform this
action, a keyring plugin must be installed and configured. For
instructions, see Section 6.4.4, “The MySQL Keyring”.
ALTER INSTANCE ROTATE BINLOG MASTER KEY
actions are not written to the binary log and are not executed
on replication slaves. Binary log master key rotation can
therefore be carried out in replication environments including
a mix of MySQL versions. To schedule regular rotation of the
binary log master key on all applicable master and slave
servers, you can enable the MySQL Event Scheduler on each
server and issue the ALTER INSTANCE ROTATE BINLOG
MASTER KEY statement using a
CREATE EVENT statement. If you
rotate the binary log master key because you suspect that the
current or any of the previous binary log master keys might
have been compromised, issue the statement on every applicable
master and slave server, which enables you to verify immediate
compliance.
For additional ALTER INSTANCE ROTATE BINLOG MASTER
KEY usage information, including what to do if the
process does not complete correctly or is interrupted by an
unexpected server halt, see
Section 17.3.2, “Encrypting Binary Log Files and Relay Log Files”.
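For example, to follow the scheduling approach described above, an event such as the following sketch could be created on each applicable server (the event name and the one-month interval are only illustrative):

CREATE EVENT rotate_binlog_master_key
    ON SCHEDULE EVERY 1 MONTH
    DO ALTER INSTANCE ROTATE BINLOG MASTER KEY;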
ALTER INSTANCE RELOAD TLS
This action reconfigures a TLS context from the current values
of the system variables that define the context. It also
updates the status variables that reflect the active context
values. This action requires the
CONNECTION_ADMIN privilege. For
additional information about reconfiguring the TLS context,
including which system and status variables are
context-related, see
Server-Side Runtime Configuration and Monitoring for Encrypted
Connections.
By default, the statement reloads the TLS context for the main
connection interface. If the FOR CHANNEL
clause (available as of MySQL 8.0.21) is given, the statement
reloads the TLS context for the named channel:
mysql_main for the main connection
interface, mysql_admin for the
administrative connection interface. For information about the
different interfaces, see
Section 5.1.12.1, “Connection Interfaces”. The updated TLS
context properties are exposed in the Performance Schema
tls_channel_status table. See
Section 26.12.19.6, “The tls_channel_status Table”.
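For example, the following statements (a sketch) reload the TLS context for the main connection interface and, as of MySQL 8.0.21, for the administrative interface:

ALTER INSTANCE RELOAD TLS;
ALTER INSTANCE RELOAD TLS FOR CHANNEL mysql_admin;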
Updating the TLS context for the main interface may also affect the administrative interface because unless some nondefault TLS value is configured for that interface, it uses the same TLS context as the main interface.
By default, the RELOAD TLS action rolls
back with an error and has no effect if the configuration
values do not permit creation of the new TLS context. The
previous context values continue to be used for new
connections. If the optional NO ROLLBACK ON
ERROR clause is given and the new context cannot be
created, rollback does not occur. Instead, a warning is
generated and encryption is disabled for new connections on
the interface to which the statement applies.
ALTER INSTANCE RELOAD TLS statements are
not written to the binary log (and thus are not replicated).
TLS configuration is local and depends on local files not
necessarily present on all servers involved.
ALTER LOGFILE GROUP logfile_group
    ADD UNDOFILE 'file_name'
    [INITIAL_SIZE [=] size]
    [WAIT]
    ENGINE [=] engine_name
This statement adds an UNDO file named
'file_name' to an existing log file
group logfile_group. An
ALTER LOGFILE GROUP statement has
one and only one ADD UNDOFILE clause. No
DROP UNDOFILE clause is currently supported.
All NDB Cluster Disk Data objects share the same namespace. This means that each Disk Data object must be uniquely named (and not merely each Disk Data object of a given type). For example, you cannot have a tablespace and an undo log file with the same name, or an undo log file and a data file with the same name.
The optional INITIAL_SIZE parameter sets the
UNDO file's initial size in bytes; if not
specified, the initial size defaults to 134217728 (128 MB). You
may optionally follow size with a
one-letter abbreviation for an order of magnitude, similar to
those used in my.cnf. Generally, this is one
of the letters M (megabytes) or
G (gigabytes). (Bug #13116514, Bug #16104705,
Bug #62858)
On 32-bit systems, the maximum supported value for
INITIAL_SIZE is 4294967296 (4 GB). (Bug #29186)
The minimum allowed value for INITIAL_SIZE is
1048576 (1 MB). (Bug #29574)
WAIT is parsed but otherwise ignored. This
keyword currently has no effect, and is intended for future
expansion.
The ENGINE parameter (required) determines the
storage engine which is used by this log file group, with
engine_name being the name of the
storage engine. Currently, the only accepted values for
engine_name are
“NDBCLUSTER” and
“NDB”. The two values
are equivalent.
Here is an example, which assumes that the log file group
lg_3 has already been created using
CREATE LOGFILE GROUP (see
Section 13.1.16, “CREATE LOGFILE GROUP Statement”):
ALTER LOGFILE GROUP lg_3
ADD UNDOFILE 'undo_10.dat'
INITIAL_SIZE=32M
ENGINE=NDBCLUSTER;
When ALTER LOGFILE GROUP is used
with ENGINE = NDBCLUSTER (alternatively,
ENGINE = NDB), an UNDO log
file is created on each NDB Cluster data node. You can verify that
the UNDO files were created and obtain
information about them by querying the
INFORMATION_SCHEMA.FILES table. For
example:
mysql> SELECT FILE_NAME, LOGFILE_GROUP_NUMBER, EXTRA
    ->   FROM INFORMATION_SCHEMA.FILES
    ->   WHERE LOGFILE_GROUP_NAME = 'lg_3';
+-------------+----------------------+----------------+
| FILE_NAME   | LOGFILE_GROUP_NUMBER | EXTRA          |
+-------------+----------------------+----------------+
| newdata.dat |                    0 | CLUSTER_NODE=3 |
| newdata.dat |                    0 | CLUSTER_NODE=4 |
| undo_10.dat |                   11 | CLUSTER_NODE=3 |
| undo_10.dat |                   11 | CLUSTER_NODE=4 |
+-------------+----------------------+----------------+
4 rows in set (0.01 sec)
(See Section 25.14, “The INFORMATION_SCHEMA FILES Table”.)
Memory used for UNDO_BUFFER_SIZE comes from the
global pool whose size is determined by the value of the
SharedGlobalMemory data
node configuration parameter. This includes any default value
implied for this option by the setting of the
InitialLogFileGroup data
node configuration parameter.
ALTER LOGFILE GROUP is useful only
with Disk Data storage for NDB Cluster. For more information, see
Section 22.5.10, “NDB Cluster Disk Data Tables”.
ALTER PROCEDURE proc_name [characteristic ...]

characteristic: {
    COMMENT 'string'
  | LANGUAGE SQL
  | { CONTAINS SQL | NO SQL | READS SQL DATA | MODIFIES SQL DATA }
  | SQL SECURITY { DEFINER | INVOKER }
}
This statement can be used to change the characteristics of a
stored procedure. More than one change may be specified in an
ALTER PROCEDURE statement. However,
you cannot change the parameters or body of a stored procedure
using this statement; to make such changes, you must drop and
re-create the procedure using DROP
PROCEDURE and CREATE
PROCEDURE.
You must have the ALTER ROUTINE
privilege for the procedure. By default, that privilege is granted
automatically to the procedure creator. This behavior can be
changed by disabling the
automatic_sp_privileges system
variable. See Section 24.2.2, “Stored Routines and MySQL Privileges”.
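For example, this statement (the procedure name is only illustrative) changes the comment and the security characteristic of an existing stored procedure:

ALTER PROCEDURE proc1
    COMMENT 'Archives rows older than 90 days'
    SQL SECURITY DEFINER;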
ALTER SERVER server_name
    OPTIONS (option [, option] ...)
Alters the server information for server_name,
adjusting any of the options permitted in the
CREATE SERVER statement. The
corresponding fields in the mysql.servers table
are updated accordingly. This statement requires the
SUPER privilege.
For example, to update the USER option:
ALTER SERVER s OPTIONS (USER 'sally');
ALTER SERVER causes an implicit commit. See
Section 13.3.3, “Statements That Cause an Implicit Commit”.
ALTER SERVER is not written to the binary log,
regardless of the logging format that is in use.
ALTER TABLE tbl_name
    [alter_option [, alter_option] ...]
    [partition_options]

alter_option: {
    table_options
  | ADD [COLUMN] col_name column_definition
        [FIRST | AFTER col_name]
  | ADD [COLUMN] (col_name column_definition, ...)
  | ADD {INDEX | KEY} [index_name]
        [index_type] (key_part, ...) [index_option] ...
  | ADD {FULLTEXT | SPATIAL} [INDEX | KEY] [index_name]
        (key_part, ...) [index_option] ...
  | ADD [CONSTRAINT [symbol]] PRIMARY KEY
        [index_type] (key_part, ...) [index_option] ...
  | ADD [CONSTRAINT [symbol]] UNIQUE [INDEX | KEY]
        [index_name] [index_type] (key_part, ...) [index_option] ...
  | ADD [CONSTRAINT [symbol]] FOREIGN KEY
        [index_name] (col_name, ...) reference_definition
  | ADD [CONSTRAINT [symbol]] CHECK (expr) [[NOT] ENFORCED]
  | DROP {CHECK | CONSTRAINT} symbol
  | ALTER {CHECK | CONSTRAINT} symbol [NOT] ENFORCED
  | ALGORITHM [=] {DEFAULT | INSTANT | INPLACE | COPY}
  | ALTER [COLUMN] col_name {SET DEFAULT {literal | (expr)} | DROP DEFAULT}
  | ALTER INDEX index_name {VISIBLE | INVISIBLE}
  | CHANGE [COLUMN] old_col_name new_col_name column_definition
        [FIRST | AFTER col_name]
  | [DEFAULT] CHARACTER SET [=] charset_name [COLLATE [=] collation_name]
  | CONVERT TO CHARACTER SET charset_name [COLLATE collation_name]
  | {DISABLE | ENABLE} KEYS
  | {DISCARD | IMPORT} TABLESPACE
  | DROP [COLUMN] col_name
  | DROP {INDEX | KEY} index_name
  | DROP PRIMARY KEY
  | DROP FOREIGN KEY fk_symbol
  | FORCE
  | LOCK [=] {DEFAULT | NONE | SHARED | EXCLUSIVE}
  | MODIFY [COLUMN] col_name column_definition
        [FIRST | AFTER col_name]
  | ORDER BY col_name [, col_name] ...
  | RENAME COLUMN old_col_name TO new_col_name
  | RENAME {INDEX | KEY} old_index_name TO new_index_name
  | RENAME [TO | AS] new_tbl_name
  | {WITHOUT | WITH} VALIDATION
}

partition_options:
    partition_option [partition_option] ...

partition_option: {
    ADD PARTITION (partition_definition)
  | DROP PARTITION partition_names
  | DISCARD PARTITION {partition_names | ALL} TABLESPACE
  | IMPORT PARTITION {partition_names | ALL} TABLESPACE
  | TRUNCATE PARTITION {partition_names | ALL}
  | COALESCE PARTITION number
  | REORGANIZE PARTITION partition_names INTO (partition_definitions)
  | EXCHANGE PARTITION partition_name WITH TABLE tbl_name [{WITH | WITHOUT} VALIDATION]
  | ANALYZE PARTITION {partition_names | ALL}
  | CHECK PARTITION {partition_names | ALL}
  | OPTIMIZE PARTITION {partition_names | ALL}
  | REBUILD PARTITION {partition_names | ALL}
  | REPAIR PARTITION {partition_names | ALL}
  | REMOVE PARTITIONING
}

key_part: {col_name [(length)] | (expr)} [ASC | DESC]

index_type:
    USING {BTREE | HASH}

index_option: {
    KEY_BLOCK_SIZE [=] value
  | index_type
  | WITH PARSER parser_name
  | COMMENT 'string'
  | {VISIBLE | INVISIBLE}
}

table_options:
    table_option [[,] table_option] ...

table_option: {
    AUTO_INCREMENT [=] value
  | AVG_ROW_LENGTH [=] value
  | [DEFAULT] CHARACTER SET [=] charset_name
  | CHECKSUM [=] {0 | 1}
  | [DEFAULT] COLLATE [=] collation_name
  | COMMENT [=] 'string'
  | COMPRESSION [=] {'ZLIB' | 'LZ4' | 'NONE'}
  | CONNECTION [=] 'connect_string'
  | {DATA | INDEX} DIRECTORY [=] 'absolute path to directory'
  | DELAY_KEY_WRITE [=] {0 | 1}
  | ENCRYPTION [=] {'Y' | 'N'}
  | ENGINE [=] engine_name
  | INSERT_METHOD [=] { NO | FIRST | LAST }
  | KEY_BLOCK_SIZE [=] value
  | MAX_ROWS [=] value
  | MIN_ROWS [=] value
  | PACK_KEYS [=] {0 | 1 | DEFAULT}
  | PASSWORD [=] 'string'
  | ROW_FORMAT [=] {DEFAULT | DYNAMIC | FIXED | COMPRESSED | REDUNDANT | COMPACT}
  | STATS_AUTO_RECALC [=] {DEFAULT | 0 | 1}
  | STATS_PERSISTENT [=] {DEFAULT | 0 | 1}
  | STATS_SAMPLE_PAGES [=] value
  | TABLESPACE tablespace_name [STORAGE {DISK | MEMORY}]
  | UNION [=] (tbl_name [, tbl_name] ...)
}

partition_options:
    (see CREATE TABLE options)
ALTER TABLE changes the structure
of a table. For example, you can add or delete columns, create or
destroy indexes, change the type of existing columns, or rename
columns or the table itself. You can also change characteristics
such as the storage engine used for the table or the table
comment.
To use ALTER TABLE, you need
ALTER,
CREATE, and
INSERT privileges for the
table. Renaming a table requires
ALTER and
DROP on the old table,
ALTER,
CREATE, and
INSERT on the new table.
Following the table name, specify the alterations to be made.
If none are given, ALTER TABLE
does nothing.
The syntax for many of the permissible alterations is similar
to clauses of the CREATE TABLE
statement. column_definition
clauses use the same syntax for ADD and
CHANGE as for CREATE
TABLE. For more information, see
Section 13.1.20, “CREATE TABLE Statement”.
The word COLUMN is optional and can be
omitted, except for RENAME COLUMN (to
distinguish a column-renaming operation from the
RENAME table-renaming operation).
Multiple ADD, ALTER,
DROP, and CHANGE clauses
are permitted in a single ALTER
TABLE statement, separated by commas. This is a
MySQL extension to standard SQL, which permits only one of
each clause per ALTER TABLE
statement. For example, to drop multiple columns in a single
statement, do this:
ALTER TABLE t2 DROP COLUMN c, DROP COLUMN d;
If a storage engine does not support an attempted
ALTER TABLE operation, a
warning may result. Such warnings can be displayed with
SHOW WARNINGS. See
Section 13.7.7.40, “SHOW WARNINGS Statement”. For information on
troubleshooting ALTER TABLE,
see Section B.4.6.1, “Problems with ALTER TABLE”.
For information about generated columns, see Section 13.1.9.2, “ALTER TABLE and Generated Columns”.
For usage examples, see Section 13.1.9.3, “ALTER TABLE Examples”.
InnoDB in MySQL 8.0.17 and later supports the
addition of multi-valued indexes on JSON columns using a
key_part specification that can take the
form (CAST json_path AS type ARRAY). See
Multi-Valued Indexes, for detailed
information regarding multi-valued index creation and usage,
as well as restrictions and limitations on multi-valued
indexes.
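As an illustrative sketch only (the table, column, and index names here are hypothetical), a multi-valued index over an array of zip codes stored in a JSON column could be added like this:

ALTER TABLE customers
    ADD INDEX zips ( (CAST(custinfo->'$.zipcode' AS UNSIGNED ARRAY)) );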
With the mysql_info() C API
function, you can find out how many rows were copied by
ALTER TABLE. See
Section 28.7.6.36, “mysql_info()”.
There are several additional aspects to the ALTER
TABLE statement, described under the following topics in
this section:
table_options signifies table options
of the kind that can be used in the CREATE
TABLE statement, such as ENGINE,
AUTO_INCREMENT,
AVG_ROW_LENGTH, MAX_ROWS,
ROW_FORMAT, or TABLESPACE.
For descriptions of all table options, see
Section 13.1.20, “CREATE TABLE Statement”. However,
ALTER TABLE ignores DATA
DIRECTORY and INDEX DIRECTORY when
given as table options. ALTER TABLE
permits them only as partitioning options, and requires that you
have the FILE privilege.
Use of table options with ALTER
TABLE provides a convenient way of altering single table
characteristics. For example:
If t1 is currently not an
InnoDB table, this statement changes its
storage engine to InnoDB:
ALTER TABLE t1 ENGINE = InnoDB;
See Section 15.6.1.5, “Converting Tables from MyISAM to InnoDB” for
considerations when switching tables to the
InnoDB storage engine.
When you specify an ENGINE clause,
ALTER TABLE rebuilds the
table. This is true even if the table already has the
specified storage engine.
Running ALTER TABLE tbl_name ENGINE=INNODB
on an existing InnoDB table performs a
“null” ALTER TABLE operation, which can be
used to defragment an InnoDB table, as described in
Section 15.11.4, “Defragmenting a Table”. Running
ALTER TABLE tbl_name FORCE on an
InnoDB table performs the same function.

ALTER TABLE tbl_name ENGINE=INNODB and
ALTER TABLE tbl_name FORCE use
online DDL. For
more information, see Section 15.12, “InnoDB and Online DDL”.
The outcome of attempting to change the storage engine of
a table is affected by whether the desired storage engine
is available and the setting of the
NO_ENGINE_SUBSTITUTION
SQL mode, as described in Section 5.1.11, “Server SQL Modes”.
To prevent inadvertent loss of data,
ALTER TABLE cannot be used
to change the storage engine of a table to
MERGE or BLACKHOLE.
To change the InnoDB table to use
compressed row-storage format:
ALTER TABLE t1 ROW_FORMAT = COMPRESSED;
The ENCRYPTION clause enables or disables
page-level data encryption for an InnoDB
table. A keyring plugin must be installed and configured to
enable encryption.
If the
table_encryption_privilege_check
variable is enabled, the
TABLE_ENCRYPTION_ADMIN
privilege is required to use an ENCRYPTION
clause with a setting that differs from the default schema
encryption setting.
Prior to MySQL 8.0.16, the ENCRYPTION
clause was only supported when altering tables residing in
file-per-table tablespaces. As of MySQL 8.0.16, the
ENCRYPTION clause is also supported for
tables residing in general tablespaces.
For tables that reside in general tablespaces, table and tablespace encryption must match.
Altering table encryption by moving a table to a different
tablespace or changing the storage engine is not permitted
without explicitly specifying an ENCRYPTION
clause.
As of MySQL 8.0.16, specifying an
ENCRYPTION clause with a value other than
'N' or '' is not
permitted if the table uses a storage engine that does not
support encryption. Previously, the clause was accepted.
Attempting to create a table without an
ENCRYPTION clause in an encryption-enabled
schema using a storage engine that does not support encryption
is also not permitted.
For more information, see Section 15.13, “InnoDB Data-at-Rest Encryption”.
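For example, assuming that a keyring plugin is installed and configured and that t1 resides in a file-per-table tablespace, a statement such as this enables page-level encryption for the table:

ALTER TABLE t1 ENCRYPTION = 'Y';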
To reset the current auto-increment value:
ALTER TABLE t1 AUTO_INCREMENT = 13;
You cannot reset the counter to a value less than or equal to
the value that is currently in use. For both
InnoDB and MyISAM, if
the value is less than or equal to the maximum value currently
in the AUTO_INCREMENT column, the value is
reset to the current maximum AUTO_INCREMENT
column value plus one.
To change the default table character set:
ALTER TABLE t1 CHARACTER SET = utf8;
See also Changing the Character Set.
To add (or change) a table comment:
ALTER TABLE t1 COMMENT = 'New table comment';
Use ALTER TABLE with the
TABLESPACE option to move
InnoDB tables between existing
general
tablespaces,
file-per-table
tablespaces, and the
system
tablespace. See
Moving Tables Between Tablespaces Using ALTER TABLE.
ALTER TABLE ... TABLESPACE operations
always cause a full table rebuild, even if the
TABLESPACE attribute has not changed
from its previous value.
ALTER TABLE ... TABLESPACE syntax does
not support moving a table from a temporary tablespace to
a persistent tablespace.
The DATA DIRECTORY clause, which is
supported with
CREATE TABLE
... TABLESPACE, is not supported with
ALTER TABLE ... TABLESPACE, and is
ignored if specified.
For more information about the capabilities and
limitations of the TABLESPACE option,
see CREATE TABLE.
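For example, assuming a general tablespace named ts1 has already been created with CREATE TABLESPACE, a statement such as this moves t1 into it (and rebuilds the table):

ALTER TABLE t1 TABLESPACE ts1;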
MySQL NDB Cluster 8.0 supports setting
NDB_TABLE options for controlling a
table's partition balance (fragment count type),
read-from-any-replica capability, full replication, or any
combination of these, as part of the table comment for an
ALTER TABLE statement in the same manner as
for CREATE TABLE, as shown in
this example:
ALTER TABLE t1 COMMENT = "NDB_TABLE=READ_BACKUP=0,PARTITION_BALANCE=FOR_RA_BY_NODE";
Bear in mind that ALTER TABLE ... COMMENT
... discards any existing comment for the table. See
Setting NDB_TABLE options, for
additional information and examples.
To verify that the table options were changed as intended, use
SHOW CREATE TABLE, or query the
INFORMATION_SCHEMA.TABLES table.
ALTER TABLE operations are
processed using one of the following algorithms:
COPY: Operations are performed on a copy of
the original table, and table data is copied from the original
table to the new table row by row. Concurrent DML is not
permitted.
INPLACE: Operations avoid copying table
data but may rebuild the table in place. An exclusive metadata
lock on the table may be taken briefly during preparation and
execution phases of the operation. Typically, concurrent DML
is supported.
INSTANT: Operations only modify metadata in
the data dictionary. No exclusive metadata locks are taken on
the table during preparation and execution, and table data is
unaffected, making operations instantaneous. Concurrent DML is
permitted. (Introduced in MySQL 8.0.12)
The ALGORITHM clause is optional. If the
ALGORITHM clause is omitted, MySQL uses
ALGORITHM=INSTANT for storage engines and
ALTER TABLE clauses that support
it. Otherwise, ALGORITHM=INPLACE is used. If
ALGORITHM=INPLACE is not supported,
ALGORITHM=COPY is used.
Specifying an ALGORITHM clause requires the
operation to use the specified algorithm for clauses and storage
engines that support it, or fail with an error otherwise.
Specifying ALGORITHM=DEFAULT is the same as
omitting the ALGORITHM clause.
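For example, assuming t1 is an InnoDB table, this statement requests the INSTANT algorithm explicitly and fails with an error if the operation cannot use it:

ALTER TABLE t1 ADD COLUMN c2 INT, ALGORITHM=INSTANT;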
ALTER TABLE operations that use the
COPY algorithm wait for other operations that
are modifying the table to complete. After alterations are applied
to the table copy, data is copied over, the original table is
deleted, and the table copy is renamed to the name of the original
table. While the ALTER TABLE
operation executes, the original table is readable by other
sessions (with the exception noted shortly). Updates and writes to
the table started after the ALTER
TABLE operation begins are stalled until the new table
is ready, then are automatically redirected to the new table. The
temporary copy of the table is created in the database directory
of the original table unless it is a RENAME TO
operation that moves the table to a database that resides in a
different directory.
The exception referred to earlier is that
ALTER TABLE blocks reads (not just
writes) at the point where it is ready to clear outdated table
structures from the table and table definition caches. At this
point, it must acquire an exclusive lock. To do so, it waits for
current readers to finish, and blocks new reads and writes.
An ALTER TABLE operation that uses
the COPY algorithm prevents concurrent DML
operations. Concurrent queries are still allowed. That is, a
table-copying operation always includes at least the concurrency
restrictions of LOCK=SHARED (allow queries but
not DML). You can further restrict concurrency for operations that
support the LOCK clause by specifying
LOCK=EXCLUSIVE, which prevents DML and queries.
For more information, see
Concurrency Control.
To force use of the COPY algorithm for an
ALTER TABLE operation that would
otherwise not use it, specify ALGORITHM=COPY or
enable the old_alter_table system
variable. If there is a conflict between the
old_alter_table setting and an
ALGORITHM clause with a value other than
DEFAULT, the ALGORITHM
clause takes precedence.
For InnoDB tables, an
ALTER TABLE operation that uses the
COPY algorithm on a table that resides in a
shared tablespace
can increase the amount of space used by the tablespace. Such
operations require as much additional space as the data in the
table plus indexes. For a table residing in a shared tablespace,
the additional space used during the operation is not released
back to the operating system as it is for a table that resides in
a file-per-table
tablespace.
For information about space requirements for online DDL operations, see Section 15.12.3, “Online DDL Space Requirements”.
ALTER TABLE operations that support
the INPLACE algorithm include:
ALTER TABLE operations supported by the
InnoDB
online DDL feature. See
Section 15.12.1, “Online DDL Operations”.
Renaming a table. MySQL renames files that correspond to the
table tbl_name without making a
copy. (You can also use the RENAME
TABLE statement to rename tables. See
Section 13.1.36, “RENAME TABLE Statement”.) Privileges granted
specifically for the renamed table are not migrated to the new
name. They must be changed manually.
Operations that only modify table metadata. These operations are immediate because the server does not touch table contents. Metadata-only operations include:
Renaming a column. In NDB Cluster 8.0.18 and later, this operation can also be performed online.
Changing the default value of a column (except for
NDB tables).
Modifying the definition of an
ENUM or
SET column by adding new
enumeration or set members to the end
of the list of valid member values, as long as the storage
size of the data type does not change. For example, adding
a member to a SET column
that has 8 members changes the required storage per value
from 1 byte to 2 bytes; this requires a table copy. Adding
members in the middle of the list causes renumbering of
existing members, which requires a table copy.
Changing the definition of a spatial column to remove the
SRID attribute. (Adding or changing an
SRID attribute does require a rebuild
and cannot be done in place because the server must verify
that all values have the specified SRID value.)
As of MySQL 8.0.14, changing a column character set, when these conditions apply:
As of MySQL 8.0.14, changing a generated column, when these conditions apply:
For InnoDB tables, statements that
modify generated stored columns but do not change
their type, expression, or nullability.
For non-InnoDB tables, statements
that modify generated stored or virtual columns but do
not change their type, expression, or nullability.
An example of such a change is a change to the column comment.
Renaming an index.
Adding or dropping a secondary index, for
InnoDB and
NDB tables. See
Section 15.12.1, “Online DDL Operations”.
For NDB tables, operations that
add and drop indexes on variable-width columns. These
operations occur online, without table copying and without
blocking concurrent DML actions for most of their duration.
See Section 22.5.11, “Online Operations with ALTER TABLE in NDB Cluster”.
Modifying index visibility with an ALTER
INDEX operation.
Column modifications of tables containing generated columns
that depend on columns with a DEFAULT value
if the modified columns are not involved in the generated
column expressions. For example, changing the
NULL property of a separate column can be
done in place without a table rebuild.
ALTER TABLE operations that support the
INSTANT algorithm include:
Adding a column. This feature is referred to as “Instant
ADD COLUMN”. Limitations apply. See
Section 15.12.1, “Online DDL Operations”.
Adding or dropping a virtual column.
Adding or dropping a column default value.
Modifying the definition of an
ENUM or
SET column. The same
restrictions apply as described above for
ALGORITHM=INSTANT.
Changing the index type.
Renaming a table. The same restrictions apply as described
above for ALGORITHM=INSTANT.
For more information about operations that support
ALGORITHM=INSTANT, see
Section 15.12.1, “Online DDL Operations”.
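For example, assuming a table t1 to which the limitations do not apply (the table and column names are illustrative), a column can be added instantly by requesting the algorithm explicitly:
ALTER TABLE t1 ADD COLUMN c2 INT, ALGORITHM=INSTANT;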
ALTER TABLE upgrades MySQL 5.5
temporal columns to 5.6 format for ADD COLUMN,
CHANGE COLUMN, MODIFY
COLUMN, ADD INDEX, and
FORCE operations. This conversion cannot be
done using the INPLACE algorithm because the
table must be rebuilt, so specifying
ALGORITHM=INPLACE in these cases results in an
error. Specify ALGORITHM=COPY if necessary.
If an ALTER TABLE operation on a multicolumn
index used to partition a table by KEY changes
the order of the columns, it can only be performed using
ALGORITHM=COPY.
The WITHOUT VALIDATION and WITH
VALIDATION clauses affect whether
ALTER TABLE performs an in-place
operation for
virtual generated
column modifications. See
Section 13.1.9.2, “ALTER TABLE and Generated Columns”.
NDB Cluster 8.0 supports online operations using the same
ALGORITHM=INPLACE syntax used with the standard
MySQL Server. NDB does not support changing a
tablespace online; beginning with NDB 8.0.21, it is disallowed.
See Section 22.5.11, “Online Operations with ALTER TABLE in NDB Cluster”, for more
information.
ALTER TABLE with DISCARD ... PARTITION
... TABLESPACE or IMPORT ... PARTITION ...
TABLESPACE does not create any temporary tables or
temporary partition files.
ALTER TABLE with ADD
PARTITION, DROP PARTITION,
COALESCE PARTITION, REBUILD
PARTITION, or REORGANIZE PARTITION
does not create temporary tables (except when used with
NDB tables); however, these
operations can and do create temporary partition files.
ADD or DROP operations for
RANGE or LIST partitions are
immediate operations or nearly so. ADD or
COALESCE operations for HASH
or KEY partitions copy data between all
partitions, unless LINEAR HASH or
LINEAR KEY was used; this is effectively the
same as creating a new table, although the ADD
or COALESCE operation is performed partition by
partition. REORGANIZE operations copy only
changed partitions and do not touch unchanged ones.
For MyISAM tables, you can speed up index
re-creation (the slowest part of the alteration process) by
setting the
myisam_sort_buffer_size system
variable to a high value.
For ALTER TABLE operations that
support it, you can use the LOCK clause to
control the level of concurrent reads and writes on a table while
it is being altered. Specifying a non-default value for this
clause enables you to require a certain amount of concurrent
access or exclusivity during the alter operation, and halts the
operation if the requested degree of locking is not available.
Only LOCK = DEFAULT is permitted for operations
that use ALGORITHM=INSTANT. The other
LOCK clause parameters are not applicable.
The parameters for the LOCK clause are:
LOCK = DEFAULT
Maximum level of concurrency for the given
ALGORITHM clause (if any) and
ALTER TABLE operation: Permit concurrent
reads and writes if supported. If not, permit concurrent reads
if supported. If not, enforce exclusive access.
LOCK = NONE
If supported, permit concurrent reads and writes. Otherwise, an error occurs.
LOCK = SHARED
If supported, permit concurrent reads but block writes. Writes
are blocked even if concurrent writes are supported by the
storage engine for the given ALGORITHM
clause (if any) and ALTER TABLE operation.
If concurrent reads are not supported, an error occurs.
LOCK = EXCLUSIVE
Enforce exclusive access. This is done even if concurrent
reads/writes are supported by the storage engine for the given
ALGORITHM clause (if any) and
ALTER TABLE operation.
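For example, the following statement requests that a secondary index be added in place while permitting concurrent reads and writes; it fails if that level of concurrency cannot be provided (the table and index names are illustrative):
ALTER TABLE t1 ADD INDEX idx_c1 (c1), ALGORITHM=INPLACE, LOCK=NONE;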
Use ADD to add new columns to a table, and
DROP to remove existing columns. DROP
col_name is a MySQL extension
to standard SQL.
To add a column at a specific position within a table row, use
FIRST or AFTER
col_name. The default is to
add the column last.
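For example, assuming a table t1 that already has columns a and b (illustrative names), the following statements add a column after a, add another column first, and then drop it again:
ALTER TABLE t1 ADD COLUMN c INT AFTER a;
ALTER TABLE t1 ADD COLUMN d INT FIRST;
ALTER TABLE t1 DROP COLUMN d;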
If a table contains only one column, the column cannot be dropped.
If what you intend is to remove the table, use the
DROP TABLE statement instead.
If columns are dropped from a table, the columns are also removed
from any index of which they are a part. If all columns that make
up an index are dropped, the index is dropped as well. If you use
CHANGE or MODIFY to shorten
a column for which an index exists on the column, and the
resulting column length is less than the index length, MySQL
shortens the index automatically.
For ALTER TABLE ... ADD, if the column has an
expression default value that uses a nondeterministic function,
the statement may produce a warning or error. For further
information, see Section 11.6, “Data Type Default Values”, and
Section 17.1.3.6, “Restrictions on Replication with GTIDs”.
The CHANGE, MODIFY,
RENAME COLUMN, and ALTER
clauses enable the names and definitions of existing columns to be
altered. They have these comparative characteristics:
CHANGE:
Can rename a column and change its definition, or both.
Has more capability than MODIFY or
RENAME COLUMN, but at the expense of
convenience for some operations. CHANGE
requires naming the column twice if not renaming it, and
requires respecifying the column definition if only
renaming it.
With FIRST or AFTER,
can reorder columns.
MODIFY:
Can change a column definition but not its name.
More convenient than CHANGE to change a
column definition without renaming it.
With FIRST or AFTER,
can reorder columns.
RENAME COLUMN:
Can change a column name but not its definition.
More convenient than CHANGE to rename a
column without changing its definition.
ALTER: Used only to change a column default
value.
CHANGE is a MySQL extension to standard SQL.
MODIFY and RENAME COLUMN are
MySQL extensions for Oracle compatibility.
To alter a column to change both its name and definition, use
CHANGE, specifying the old and new names and
the new definition. For example, to rename an INT NOT
NULL column from a to
b and change its definition to use the
BIGINT data type while retaining the
NOT NULL attribute, do this:
ALTER TABLE t1 CHANGE a b BIGINT NOT NULL;
To change a column definition but not its name, use
CHANGE or MODIFY. With
CHANGE, the syntax requires two column names,
so you must specify the same name twice to leave the name
unchanged. For example, to change the definition of column
b, do this:
ALTER TABLE t1 CHANGE b b INT NOT NULL;
MODIFY is more convenient to change the
definition without changing the name because it requires the
column name only once:
ALTER TABLE t1 MODIFY b INT NOT NULL;
To change a column name but not its definition, use
CHANGE or RENAME COLUMN.
With CHANGE, the syntax requires a column
definition, so to leave the definition unchanged, you must
respecify the definition the column currently has. For example, to
rename an INT NOT NULL column from
b to a, do this:
ALTER TABLE t1 CHANGE b a INT NOT NULL;
RENAME COLUMN is more convenient to change the
name without changing the definition because it requires only the
old and new names:
ALTER TABLE t1 RENAME COLUMN b TO a;
In general, you cannot rename a column to a name that already
exists in the table. However, this is sometimes not the case, such
as when you swap names or move them through a cycle. If a table
has columns named a, b, and
c, these are valid operations:
-- swap a and b
ALTER TABLE t1 RENAME COLUMN a TO b,
RENAME COLUMN b TO a;
-- "rotate" a, b, c through a cycle
ALTER TABLE t1 RENAME COLUMN a TO b,
RENAME COLUMN b TO c,
RENAME COLUMN c TO a;
For column definition changes using CHANGE or
MODIFY, the definition must include the data
type and all attributes that should apply to the new column, other
than index attributes such as PRIMARY KEY or
UNIQUE. Attributes present in the original
definition but not specified for the new definition are not
carried forward. Suppose that a column col1 is
defined as INT UNSIGNED DEFAULT 1 COMMENT 'my
column' and you modify the column as follows, intending
to change only INT to
BIGINT:
ALTER TABLE t1 MODIFY col1 BIGINT;
That statement changes the data type from INT
to BIGINT, but it also drops the
UNSIGNED, DEFAULT, and
COMMENT attributes. To retain them, the
statement must include them explicitly:
ALTER TABLE t1 MODIFY col1 BIGINT UNSIGNED DEFAULT 1 COMMENT 'my column';
For data type changes using CHANGE or
MODIFY, MySQL tries to convert existing column
values to the new type as well as possible.
This conversion may result in alteration of data. For example,
if you shorten a string column, values may be truncated. To
prevent the operation from succeeding if conversions to the new
data type would result in loss of data, enable strict SQL mode
before using ALTER TABLE (see
Section 5.1.11, “Server SQL Modes”).
If you use CHANGE or MODIFY
to shorten a column for which an index exists on the column, and
the resulting column length is less than the index length, MySQL
shortens the index automatically.
For columns renamed by CHANGE or
RENAME COLUMN, MySQL automatically renames
these references to the renamed column:
Indexes that refer to the old column, including invisible
indexes and disabled MyISAM indexes.
Foreign keys that refer to the old column.
For columns renamed by CHANGE or
RENAME COLUMN, MySQL does not automatically
rename these references to the renamed column:
Generated column and partition expressions that refer to the
renamed column. You must use CHANGE to
redefine such expressions in the same
ALTER TABLE statement as the
one that renames the column.
Views and stored programs that refer to the renamed column. You must manually alter the definition of these objects to refer to the new column name.
To reorder columns within a table, use FIRST
and AFTER in CHANGE or
MODIFY operations.
ALTER ... SET DEFAULT or ALTER ...
DROP DEFAULT specify a new default value for a column or
remove the old default value, respectively. If the old default is
removed and the column can be NULL, the new
default is NULL. If the column cannot be
NULL, MySQL assigns a default value as
described in Section 11.6, “Data Type Default Values”.
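For example, assuming a column c in a table t1 (illustrative names), the default value can be set and removed as follows:
ALTER TABLE t1 ALTER COLUMN c SET DEFAULT 0;
ALTER TABLE t1 ALTER COLUMN c DROP DEFAULT;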
DROP PRIMARY KEY drops the
primary key. If there is
no primary key, an error occurs. For information about the
performance characteristics of primary keys, especially for
InnoDB tables, see
Section 8.3.2, “Primary Key Optimization”.
If the sql_require_primary_key
system variable is enabled, attempting to drop a primary key
produces an error.
If you add a UNIQUE INDEX or PRIMARY
KEY to a table, MySQL stores it before any nonunique
index to permit detection of duplicate keys as early as possible.
DROP INDEX removes an index. This
is a MySQL extension to standard SQL. See
Section 13.1.27, “DROP INDEX Statement”. To determine index names, use
SHOW INDEX FROM tbl_name.
Some storage engines permit you to specify an index type when
creating an index. The syntax for the
index_type specifier is USING
type_name. For details about
USING, see Section 13.1.15, “CREATE INDEX Statement”. The
preferred position is after the column list. Support for use of
the option before the column list will be removed in a future
MySQL release.
index_option values specify additional
options for an index. USING is one such option.
For details about permissible
index_option values, see
Section 13.1.15, “CREATE INDEX Statement”.
RENAME INDEX old_index_name TO
new_index_name renames an
index. This is a MySQL extension to standard SQL. The content of
the table remains unchanged.
old_index_name must be the name of an
existing index in the table that is not dropped by the same
ALTER TABLE statement.
new_index_name is the new index name,
which cannot duplicate the name of an index in the resulting table
after changes have been applied. Neither index name can be
PRIMARY.
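For example, assuming an existing index named idx_old in table t1 (illustrative names):
ALTER TABLE t1 RENAME INDEX idx_old TO idx_new;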
If you use ALTER TABLE on a
MyISAM table, all nonunique indexes are created
in a separate batch (as for REPAIR
TABLE). This should make ALTER
TABLE much faster when you have many indexes.
For MyISAM tables, key updating can be
controlled explicitly. Use ALTER TABLE ... DISABLE
KEYS to tell MySQL to stop updating nonunique indexes.
Then use ALTER TABLE ... ENABLE KEYS to
re-create missing indexes. MyISAM does this
with a special algorithm that is much faster than inserting keys
one by one, so disabling keys before performing bulk insert
operations should give a considerable speedup. Using
ALTER TABLE ... DISABLE KEYS requires the
INDEX privilege in addition to the
privileges mentioned earlier.
While the nonunique indexes are disabled, they are ignored for
statements such as SELECT and
EXPLAIN that otherwise would use
them.
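A typical bulk-load pattern for a MyISAM table might look like this (a sketch; the table name and the load step are placeholders):
ALTER TABLE t1 DISABLE KEYS;
-- perform bulk INSERT or LOAD DATA operations here
ALTER TABLE t1 ENABLE KEYS;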
After an ALTER TABLE statement, it
may be necessary to run ANALYZE
TABLE to update index cardinality information. See
Section 13.7.7.22, “SHOW INDEX Statement”.
The ALTER INDEX operation permits an index to
be made visible or invisible. An invisible index is not used by
the optimizer. Modification of index visibility applies to indexes
other than primary keys (either explicit or implicit). This
feature is storage engine neutral (supported for any engine). For
more information, see Section 8.3.12, “Invisible Indexes”.
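For example, assuming a secondary index named idx1 on table t1 (illustrative names), the index can be hidden from and then restored to the optimizer as follows:
ALTER TABLE t1 ALTER INDEX idx1 INVISIBLE;
ALTER TABLE t1 ALTER INDEX idx1 VISIBLE;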
The FOREIGN KEY and
REFERENCES clauses are supported by the
InnoDB and NDB storage
engines, which implement ADD [CONSTRAINT
[symbol]] FOREIGN KEY
[index_name] (...) REFERENCES ...
(...). See Section 13.1.20.5, “FOREIGN KEY Constraints”.
For other storage engines, the clauses are parsed but ignored.
For ALTER TABLE, unlike
CREATE TABLE, ADD FOREIGN
KEY ignores index_name if
given and uses an automatically generated foreign key name. As a
workaround, include the CONSTRAINT clause to
specify the foreign key name:
ADD CONSTRAINT name FOREIGN KEY (....) ...
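A complete statement of this form might look like the following (the table, column, and constraint names are illustrative):
ALTER TABLE child
  ADD CONSTRAINT fk_child_parent FOREIGN KEY (parent_id) REFERENCES parent (id);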
MySQL silently ignores inline REFERENCES
specifications, where the references are defined as part of the
column specification. MySQL accepts only
REFERENCES clauses defined as part of a
separate FOREIGN KEY specification.
Partitioned InnoDB tables do not support
foreign keys. This restriction does not apply to
NDB tables, including those explicitly
partitioned by [LINEAR] KEY. For more
information, see
Section 23.6.2, “Partitioning Limitations Relating to Storage Engines”.
MySQL Server and NDB Cluster both support the use of
ALTER TABLE to drop foreign keys:
ALTER TABLE tbl_name DROP FOREIGN KEY fk_symbol;
Adding and dropping a foreign key in the same
ALTER TABLE statement is supported
for ALTER TABLE ...
ALGORITHM=INPLACE but not for
ALTER TABLE ...
ALGORITHM=COPY.
The server prohibits changes to foreign key columns that have the
potential to cause loss of referential integrity. A workaround is
to use ALTER TABLE
... DROP FOREIGN KEY before changing the column
definition and ALTER
TABLE ... ADD FOREIGN KEY afterward; an example of this
pattern follows the list below. Examples of
prohibited changes include:
Changes to the data type of foreign key columns that may be
unsafe. For example, changing
VARCHAR(20) to
VARCHAR(30) is permitted, but
changing it to VARCHAR(1024) is
not because that alters the number of length bytes required to
store individual values.
Changing a NULL column to NOT
NULL in non-strict mode is prohibited to prevent
converting NULL values to default
non-NULL values, for which there are no
corresponding values in the referenced table. The operation is
permitted in strict mode, but an error is returned if any such
conversion is required.
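The drop-and-re-add pattern just mentioned might be applied as follows, assuming hypothetical child and parent tables joined on a VARCHAR(20) column:
ALTER TABLE child DROP FOREIGN KEY fk_child_parent;
ALTER TABLE child MODIFY ref_code VARCHAR(1024);
ALTER TABLE parent MODIFY code VARCHAR(1024);
ALTER TABLE child
  ADD CONSTRAINT fk_child_parent FOREIGN KEY (ref_code) REFERENCES parent (code);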
ALTER TABLE tbl_name RENAME
new_tbl_name changes
internally generated foreign key constraint names and user-defined
foreign key constraint names that begin with the string
“tbl_name_ibfk_” to
reflect the new table name. InnoDB interprets
foreign key constraint names that begin with the string
“tbl_name_ibfk_” as
internally generated names.
Prior to MySQL 8.0.16, ALTER TABLE
permits only the following limited version of
CHECK constraint-adding syntax, which is parsed
and ignored:
ADD CHECK (expr)
As of MySQL 8.0.16, ALTER TABLE
permits CHECK constraints for existing tables
to be added, dropped, or altered:
Add a new CHECK constraint:
ALTER TABLE tbl_name ADD CONSTRAINT [symbol] CHECK (expr) [[NOT] ENFORCED];
The meaning of constraint syntax elements is the same as for
CREATE TABLE. See
Section 13.1.20.6, “CHECK Constraints”.
Drop an existing CHECK constraint named
symbol:
ALTER TABLE tbl_name DROP CHECK symbol;
Alter whether an existing CHECK constraint
named symbol is enforced:
ALTER TABLE tbl_name ALTER CHECK symbol [NOT] ENFORCED;
The DROP CHECK and ALTER
CHECK clauses are MySQL extensions to standard SQL.
As of MySQL 8.0.19, ALTER TABLE
permits more general (and SQL standard) syntax for dropping and
altering existing constraints of any type, where the constraint
type is determined from the constraint name:
Drop an existing constraint named
symbol:
ALTER TABLE tbl_name DROP CONSTRAINT symbol;
If the
sql_require_primary_key
system variable is enabled, attempting to drop a primary key
produces an error.
Alter whether an existing constraint named
symbol is enforced:
ALTER TABLE tbl_name ALTER CONSTRAINT symbol [NOT] ENFORCED;
Only CHECK constraints can be altered to be
unenforced. All other constraint types are always enforced.
The SQL standard specifies that all types of constraints (primary
key, unique index, foreign key, check) belong to the same
namespace. In MySQL, each constraint type has its own namespace
per schema. Consequently, names for each type of constraint must
be unique per schema, but constraints of different types can have
the same name. When multiple constraints have the same name,
DROP CONSTRAINT and ADD
CONSTRAINT are ambiguous and an error occurs. In such
cases, constraint-specific syntax must be used to modify the
constraint. For example, use DROP PRIMARY KEY
or DROP FOREIGN KEY to drop a primary key or foreign key.
If a table alteration causes a violation of an enforced
CHECK constraint, an error occurs and the table
is not modified. Examples of operations for which an error occurs:
Attempts to add the AUTO_INCREMENT
attribute to a column that is used in a
CHECK constraint.
Attempts to add an enforced CHECK
constraint or enforce a nonenforced CHECK
constraint for which existing rows violate the constraint
condition.
Attempts to modify, rename, or drop a column that is used in a
CHECK constraint, unless that constraint is
also dropped in the same statement. Exception: If a
CHECK constraint refers only to a single
column, dropping the column automatically drops the
constraint.
ALTER TABLE tbl_name RENAME
new_tbl_name changes
internally generated and user-defined CHECK
constraint names that begin with the string
“tbl_name_chk_” to reflect
the new table name. MySQL interprets CHECK
constraint names that begin with the string
“tbl_name_chk_” as
internally generated names.
To change the table default character set and all character
columns (CHAR,
VARCHAR,
TEXT) to a new character set, use a
statement like this:
ALTER TABLE tbl_name CONVERT TO CHARACTER SET charset_name;
The statement also changes the collation of all character columns.
If you specify no COLLATE clause to indicate
which collation to use, the statement uses default collation for
the character set. If this collation is inappropriate for the
intended table use (for example, if it would change from a
case-sensitive collation to a case-insensitive collation), specify
a collation explicitly.
For a column that has a data type of
VARCHAR or one of the
TEXT types, CONVERT TO
CHARACTER SET changes the data type as necessary to
ensure that the new column is long enough to store as many
characters as the original column. For example, a
TEXT column has two length bytes,
which store the byte-length of values in the column, up to a
maximum of 65,535. For a latin1
TEXT column, each character
requires a single byte, so the column can store up to 65,535
characters. If the column is converted to utf8,
each character might require up to three bytes, for a maximum
possible length of 3 × 65,535 = 196,605 bytes. That length
does not fit in a TEXT column's
length bytes, so MySQL converts the data type to
MEDIUMTEXT, which is the smallest
string type for which the length bytes can record a value of
196,605. Similarly, a VARCHAR
column might be converted to
MEDIUMTEXT.
To avoid data type changes of the type just described, do not use
CONVERT TO CHARACTER SET. Instead, use
MODIFY to change individual columns. For
example:
ALTER TABLE t MODIFY latin1_text_col TEXT CHARACTER SET utf8;
ALTER TABLE t MODIFY latin1_varchar_col VARCHAR(M) CHARACTER SET utf8;
If you specify CONVERT TO CHARACTER SET binary,
the CHAR,
VARCHAR, and
TEXT columns are converted to their
corresponding binary string types
(BINARY,
VARBINARY,
BLOB). This means that the columns
no longer will have a character set and a subsequent
CONVERT TO operation will not apply to them.
If charset_name is
DEFAULT in a CONVERT TO CHARACTER
SET operation, the character set named by the
character_set_database system
variable is used.
The CONVERT TO operation converts column
values between the original and named character sets. This is
not what you want if you have a column in
one character set (like latin1) but the
stored values actually use some other, incompatible character
set (like utf8). In this case, you have to do
the following for each such column:
ALTER TABLE t1 CHANGE c1 c1 BLOB;
ALTER TABLE t1 CHANGE c1 c1 TEXT CHARACTER SET utf8;
The reason this works is that there is no conversion when you
convert to or from BLOB columns.
To change only the default character set for a table, use this statement:
ALTER TABLE tbl_name DEFAULT CHARACTER SET charset_name;
The word DEFAULT is optional. The default
character set is the character set that is used if you do not
specify the character set for columns that you add to a table
later (for example, with ALTER TABLE ... ADD
column).
When the foreign_key_checks
system variable is enabled, which is the default setting,
character set conversion is not permitted on tables that include a
character string column used in a foreign key constraint. The
workaround is to disable
foreign_key_checks before
performing the character set conversion. You must perform the
conversion on both tables involved in the foreign key constraint
before re-enabling
foreign_key_checks. If you
re-enable foreign_key_checks
after converting only one of the tables, an ON DELETE
CASCADE or ON UPDATE CASCADE
operation could corrupt data in the referencing table due to
implicit conversion that occurs during these operations (Bug
#45290, Bug #74816).
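The workaround might be applied as follows (a sketch; the table and character set names are illustrative, and both tables of the foreign key relationship are converted before the variable is re-enabled):
SET foreign_key_checks = 0;
ALTER TABLE parent CONVERT TO CHARACTER SET utf8mb4;
ALTER TABLE child CONVERT TO CHARACTER SET utf8mb4;
SET foreign_key_checks = 1;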
An InnoDB table created in its own
file-per-table
tablespace can be imported from a backup or from another MySQL
server instance using DISCARD TABLESPACE and
IMPORT TABLESPACE clauses. See
Section 15.6.1.3, “Importing InnoDB Tables”.
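A minimal sketch of the procedure (the step of copying the table's .ibd file from the source instance or backup is omitted here):
ALTER TABLE t1 DISCARD TABLESPACE;
-- copy the table's .ibd file into the table's database directory
ALTER TABLE t1 IMPORT TABLESPACE;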
ORDER BY enables you to create the new table
with the rows in a specific order. This option is useful primarily
when you know that you query the rows in a certain order most of
the time. By using this option after major changes to the table,
you might be able to get higher performance. In some cases, it
might make sorting easier for MySQL if the table is in order by
the column that you want to order it by later.
The table does not remain in the specified order after inserts and deletes.
ORDER BY syntax permits one or more column
names to be specified for sorting, each of which optionally can be
followed by ASC or DESC to
indicate ascending or descending sort order, respectively. The
default is ascending order. Only column names are permitted as
sort criteria; arbitrary expressions are not permitted. This
clause should be given last after any other clauses.
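For example, for a MyISAM table t1 with columns c1 and c2 (illustrative names):
ALTER TABLE t1 ORDER BY c1 DESC, c2 ASC;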
ORDER BY does not make sense for
InnoDB tables because InnoDB
always orders table rows according to the
clustered index.
When used on a partitioned table, ALTER TABLE ... ORDER
BY orders rows within each partition only.
partition_options signifies options
that can be used with partitioned tables for repartitioning, to
add, drop, discard, import, merge, and split partitions, and to
perform partitioning maintenance.
It is possible for an ALTER TABLE
statement to contain a PARTITION BY or
REMOVE PARTITIONING clause in addition to
other alter specifications, but the PARTITION
BY or REMOVE PARTITIONING clause must
be specified last after any other specifications. The ADD
PARTITION, DROP PARTITION,
DISCARD PARTITION, IMPORT
PARTITION, COALESCE PARTITION,
REORGANIZE PARTITION, EXCHANGE
PARTITION, ANALYZE PARTITION,
CHECK PARTITION, and REPAIR
PARTITION options cannot be combined with other alter
specifications in a single ALTER TABLE, since
the options just listed act on individual partitions.
For more information about partition options, see
Section 13.1.20, “CREATE TABLE Statement”, and
Section 13.1.9.1, “ALTER TABLE Partition Operations”. For
information about and examples of ALTER TABLE ...
EXCHANGE PARTITION statements, see
Section 23.3.3, “Exchanging Partitions and Subpartitions with Tables”.
Partitioning-related clauses for ALTER
TABLE can be used with partitioned tables for
repartitioning, to add, drop, discard, import, merge, and split
partitions, and to perform partitioning maintenance.
Simply using a partition_options
clause with ALTER TABLE on a
partitioned table repartitions the table according to the
partitioning scheme defined by the
partition_options. This clause
always begins with PARTITION BY, and
follows the same syntax and other rules as apply to the
partition_options clause for
CREATE TABLE (for more
detailed information, see Section 13.1.20, “CREATE TABLE Statement”),
and can also be used to partition an existing table that is
not already partitioned. For example, consider a
(nonpartitioned) table defined as shown here:
CREATE TABLE t1 (
id INT,
year_col INT
);
This table can be partitioned by HASH,
using the id column as the partitioning
key, into 8 partitions by means of this statement:
ALTER TABLE t1
PARTITION BY HASH(id)
PARTITIONS 8;
MySQL supports an ALGORITHM option with
[SUB]PARTITION BY [LINEAR] KEY.
ALGORITHM=1 causes the server to use the
same key-hashing functions as MySQL 5.1 when computing the
placement of rows in partitions;
ALGORITHM=2 means that the server employs
the key-hashing functions implemented and used by default
for new KEY partitioned tables in MySQL
5.5 and later. (Partitioned tables created with the
key-hashing functions employed in MySQL 5.5 and later cannot
be used by a MySQL 5.1 server.) Not specifying the option
has the same effect as using ALGORITHM=2.
This option is intended for use chiefly when upgrading or
downgrading [LINEAR] KEY partitioned
tables between MySQL 5.1 and later MySQL versions, or for
creating tables partitioned by KEY or
LINEAR KEY on a MySQL 5.5 or later server
which can be used on a MySQL 5.1 server.
The table that results from using an ALTER TABLE
... PARTITION BY statement must follow the same
rules as one created using CREATE TABLE ...
PARTITION BY. This includes the rules governing
the relationship between any unique keys (including any
primary key) that the table might have, and the column or
columns used in the partitioning expression, as discussed in
Section 23.6.1, “Partitioning Keys, Primary Keys, and Unique Keys”.
The CREATE TABLE ... PARTITION BY rules
for specifying the number of partitions also apply to
ALTER TABLE ... PARTITION BY.
The partition_definition clause
for ALTER TABLE ADD PARTITION supports
the same options as the clause of the same name for the
CREATE TABLE statement. (See
Section 13.1.20, “CREATE TABLE Statement”, for the syntax and
description.) Suppose that you have the partitioned table
created as shown here:
CREATE TABLE t1 (
id INT,
year_col INT
)
PARTITION BY RANGE (year_col) (
PARTITION p0 VALUES LESS THAN (1991),
PARTITION p1 VALUES LESS THAN (1995),
PARTITION p2 VALUES LESS THAN (1999)
);
You can add a new partition p3 to this
table for storing values less than 2002
as follows:
ALTER TABLE t1 ADD PARTITION (PARTITION p3 VALUES LESS THAN (2002));
DROP PARTITION can be used to drop one or
more RANGE or LIST
partitions. This statement cannot be used with
HASH or KEY
partitions; instead, use COALESCE
PARTITION (see later in this section). Any data
that was stored in the dropped partitions named in the
partition_names list is
discarded. For example, given the table
t1 defined previously, you can drop the
partitions named p0 and
p1 as shown here:
ALTER TABLE t1 DROP PARTITION p0, p1;
DROP PARTITION does not work with
tables that use the NDB
storage engine. See
Section 23.3.1, “Management of RANGE and LIST Partitions”, and
Section 22.1.7, “Known Limitations of NDB Cluster”.
ADD PARTITION and DROP
PARTITION do not currently support IF
[NOT] EXISTS.
The DISCARD
PARTITION ... TABLESPACE and
IMPORT
PARTITION ... TABLESPACE options extend the
Transportable
Tablespace feature to individual
InnoDB table partitions. Each
InnoDB table partition has its own
tablespace file (.ibd file). The
Transportable
Tablespace feature makes it easy to copy the
tablespaces from a running MySQL server instance to another
running instance, or to perform a restore on the same
instance. Both options take a comma-separated list of one or
more partition names. For example:
ALTER TABLE t1 DISCARD PARTITION p2, p3 TABLESPACE;
ALTER TABLE t1 IMPORT PARTITION p2, p3 TABLESPACE;
When running
DISCARD
PARTITION ... TABLESPACE and
IMPORT
PARTITION ... TABLESPACE on subpartitioned tables,
both partition and subpartition names are allowed. When a
partition name is specified, subpartitions of that partition
are included.
The
Transportable
Tablespace feature also supports copying or restoring
partitioned InnoDB tables. For more
information, see Section 15.6.1.3, “Importing InnoDB Tables”.
Renames of partitioned tables are supported. You can rename
individual partitions indirectly using ALTER TABLE
... REORGANIZE PARTITION; however, this operation
copies the partition's data.
To delete rows from selected partitions, use the
TRUNCATE PARTITION option. This option
takes a list of one or more comma-separated partition names.
Consider the table t1 created by this
statement:
CREATE TABLE t1 (
id INT,
year_col INT
)
PARTITION BY RANGE (year_col) (
PARTITION p0 VALUES LESS THAN (1991),
PARTITION p1 VALUES LESS THAN (1995),
PARTITION p2 VALUES LESS THAN (1999),
PARTITION p3 VALUES LESS THAN (2003),
PARTITION p4 VALUES LESS THAN (2007)
);
To delete all rows from partition p0, use
the following statement:
ALTER TABLE t1 TRUNCATE PARTITION p0;
The statement just shown has the same effect as the
following DELETE statement:
DELETE FROM t1 WHERE year_col < 1991;
When truncating multiple partitions, the partitions do not
have to be contiguous. This can greatly simplify delete
operations on partitioned tables that would otherwise
require very complex WHERE conditions if
done with DELETE statements.
For example, this statement deletes all rows from partitions
p1 and p3:
ALTER TABLE t1 TRUNCATE PARTITION p1, p3;
An equivalent DELETE
statement is shown here:
DELETE FROM t1 WHERE
(year_col >= 1991 AND year_col < 1995)
OR
(year_col >= 2003 AND year_col < 2007);
If you use the ALL keyword in place of
the list of partition names, the statement acts on all table
partitions.
TRUNCATE PARTITION merely deletes rows;
it does not alter the definition of the table itself, or of
any of its partitions.
To verify that the rows were dropped, check the
INFORMATION_SCHEMA.PARTITIONS table,
using a query such as this one:
SELECT PARTITION_NAME, TABLE_ROWS
FROM INFORMATION_SCHEMA.PARTITIONS
WHERE TABLE_NAME = 't1';
COALESCE PARTITION can be used with a
table that is partitioned by HASH or
KEY to reduce the number of partitions by
number. Suppose that you have
created table t2 as follows:
CREATE TABLE t2 (
name VARCHAR (30),
started DATE
)
PARTITION BY HASH( YEAR(started) )
PARTITIONS 6;
To reduce the number of partitions used by
t2 from 6 to 4, use the following
statement:
ALTER TABLE t2 COALESCE PARTITION 2;
The data contained in the last
number partitions will be merged
into the remaining partitions. In this case, partitions 4
and 5 will be merged into the first 4 partitions (the
partitions numbered 0, 1, 2, and 3).
To change some but not all the partitions used by a
partitioned table, you can use REORGANIZE
PARTITION. This statement can be used in several
ways:
To merge a set of partitions into a single partition.
This is done by naming several partitions in the
partition_names list and
supplying a single definition for
partition_definition.
To split an existing partition into several partitions.
Accomplish this by naming a single partition for
partition_names and providing
multiple
partition_definitions.
To change the ranges for a subset of partitions defined
using VALUES LESS THAN or the value
lists for a subset of partitions defined using
VALUES IN.
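As an illustration, the partitions p0 and p1 of the table t1 shown earlier could be merged into a single partition (the new partition name is illustrative):
ALTER TABLE t1 REORGANIZE PARTITION p0, p1 INTO (
    PARTITION p01 VALUES LESS THAN (1995)
);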
For partitions that have not been explicitly named, MySQL
automatically provides the default names
p0, p1,
p2, and so on. The same is true with
regard to subpartitions.
For more detailed information about and examples of
ALTER TABLE ... REORGANIZE PARTITION
statements, see
Section 23.3.1, “Management of RANGE and LIST Partitions”.
To exchange a table partition or subpartition with a table,
use the ALTER
TABLE ... EXCHANGE PARTITION statement—that
is, to move any existing rows in the partition or
subpartition to the nonpartitioned table, and any existing
rows in the nonpartitioned table to the table partition or
subpartition.
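For example, assuming a nonpartitioned table t_archive with the same structure as t1 (the table name is illustrative), partition p2 could be exchanged with it like this:
ALTER TABLE t1 EXCHANGE PARTITION p2 WITH TABLE t_archive;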
For usage information and examples, see Section 23.3.3, “Exchanging Partitions and Subpartitions with Tables”.
Several options provide partition maintenance and repair
functionality analogous to that implemented for
nonpartitioned tables by statements such as
CHECK TABLE and
REPAIR TABLE (which are also
supported for partitioned tables; for more information, see
Section 13.7.3, “Table Maintenance Statements”). These
include ANALYZE PARTITION, CHECK
PARTITION, OPTIMIZE PARTITION,
REBUILD PARTITION, and REPAIR
PARTITION. Each of these options takes a
partition_names clause consisting
of one or more names of partitions, separated by commas. The
partitions must already exist in the target table. You can
also use the ALL keyword in place of
partition_names, in which case
the statement acts on all table partitions. For more
information and examples, see
Section 23.3.4, “Maintenance of Partitions”.
InnoDB does not currently
support per-partition optimization; ALTER TABLE ...
OPTIMIZE PARTITION causes the entire table to be
rebuilt and analyzed, and an appropriate warning to be
issued. (Bug #11751825, Bug #42822) To work around this
problem, use ALTER TABLE ... REBUILD
PARTITION and ALTER TABLE ... ANALYZE
PARTITION instead.
The ANALYZE PARTITION, CHECK
PARTITION, OPTIMIZE PARTITION,
and REPAIR PARTITION options are not
supported for tables which are not partitioned.
REMOVE PARTITIONING enables you to remove
a table's partitioning without otherwise affecting the
table or its data. This option can be combined with other
ALTER TABLE options such as
those used to add, drop, or rename columns or indexes.
Using the ENGINE option with
ALTER TABLE changes the
storage engine used by the table without affecting the
partitioning. The target storage engine must provide its own
partitioning handler. Only the InnoDB and
NDB storage engines have native
partitioning handlers; NDB is not
currently supported in MySQL 8.0.
It is possible for an ALTER TABLE
statement to contain a PARTITION BY or
REMOVE PARTITIONING clause in addition to
other alter specifications, but the PARTITION
BY or REMOVE PARTITIONING clause
must be specified last after any other specifications.
The ADD PARTITION, DROP
PARTITION, COALESCE PARTITION,
REORGANIZE PARTITION, ANALYZE
PARTITION, CHECK PARTITION, and
REPAIR PARTITION options cannot be combined
with other alter specifications in a single ALTER
TABLE, since the options just listed act on individual
partitions. For more information, see
Section 13.1.9.1, “ALTER TABLE Partition Operations”.
Only a single instance of any one of the following options can
be used in a given ALTER TABLE
statement: PARTITION BY, ADD
PARTITION, DROP PARTITION,
TRUNCATE PARTITION, EXCHANGE
PARTITION, REORGANIZE PARTITION,
COALESCE PARTITION, ANALYZE
PARTITION, CHECK PARTITION,
OPTIMIZE PARTITION, REBUILD
PARTITION, or REMOVE PARTITIONING.
For example, the following two statements are invalid:
ALTER TABLE t1 ANALYZE PARTITION p1, ANALYZE PARTITION p2;
ALTER TABLE t1 ANALYZE PARTITION p1, CHECK PARTITION p2;
In the first case, you can analyze partitions
p1 and p2 of table
t1 concurrently using a single statement with
a single ANALYZE PARTITION option that lists
both of the partitions to be analyzed, like this:
ALTER TABLE t1 ANALYZE PARTITION p1, p2;
In the second case, it is not possible to perform
ANALYZE and CHECK
operations on different partitions of the same table
concurrently. Instead, you must issue two separate statements,
like this:
ALTER TABLE t1 ANALYZE PARTITION p1;
ALTER TABLE t1 CHECK PARTITION p2;
REBUILD operations are currently unsupported
for subpartitions. The REBUILD keyword is
expressly disallowed with subpartitions, and causes
ALTER TABLE to fail with an error if so used.
CHECK PARTITION and REPAIR
PARTITION operations fail when the partition to be
checked or repaired contains any duplicate key errors.
For more information about these statements, see Section 23.3.4, “Maintenance of Partitions”.
ALTER TABLE operations permitted for
generated columns are ADD,
MODIFY, and CHANGE.
Generated columns can be added.
CREATE TABLE t1 (c1 INT);
ALTER TABLE t1 ADD COLUMN c2 INT GENERATED ALWAYS AS (c1 + 1) STORED;
The data type and expression of generated columns can be modified.
CREATE TABLE t1 (c1 INT, c2 INT GENERATED ALWAYS AS (c1 + 1) STORED);
ALTER TABLE t1 MODIFY COLUMN c2 TINYINT GENERATED ALWAYS AS (c1 + 5) STORED;
Generated columns can be renamed or dropped, if no other column refers to them.
CREATE TABLE t1 (c1 INT, c2 INT GENERATED ALWAYS AS (c1 + 1) STORED);
ALTER TABLE t1 CHANGE c2 c3 INT GENERATED ALWAYS AS (c1 + 1) STORED;
ALTER TABLE t1 DROP COLUMN c3;
Virtual generated columns cannot be altered to stored generated columns, or vice versa. To work around this, drop the column, then add it with the new definition.
CREATE TABLE t1 (c1 INT, c2 INT GENERATED ALWAYS AS (c1 + 1) VIRTUAL);
ALTER TABLE t1 DROP COLUMN c2;
ALTER TABLE t1 ADD COLUMN c2 INT GENERATED ALWAYS AS (c1 + 1) STORED;
Nongenerated columns can be altered to stored but not virtual generated columns.
CREATE TABLE t1 (c1 INT, c2 INT);
ALTER TABLE t1 MODIFY COLUMN c2 INT GENERATED ALWAYS AS (c1 + 1) STORED;
Stored but not virtual generated columns can be altered to nongenerated columns. The stored generated values become the values of the nongenerated column.
CREATE TABLE t1 (c1 INT, c2 INT GENERATED ALWAYS AS (c1 + 1) STORED);
ALTER TABLE t1 MODIFY COLUMN c2 INT;
ADD COLUMN is not an in-place operation
for stored columns (done without using a temporary table)
because the expression must be evaluated by the server. For
stored columns, indexing changes are done in place, and
expression changes are not done in place. Changes to column
comments are done in place.
For non-partitioned tables, ADD COLUMN
and DROP COLUMN are in-place operations
for virtual columns. However, adding or dropping a virtual
column cannot be performed in place in combination with
other ALTER TABLE operations.
For partitioned tables, ADD COLUMN and
DROP COLUMN are not in-place operations
for virtual columns.
InnoDB supports secondary indexes on
virtual generated columns. Adding or dropping a secondary
index on a virtual generated column is an in-place
operation. For more information, see
Section 13.1.20.9, “Secondary Indexes and Generated Columns”.
When a VIRTUAL generated column is added
to a table or modified, it is not ensured that data being
calculated by the generated column expression will not be
out of range for the column. This can lead to inconsistent
data being returned and unexpectedly failed statements. To
permit control over whether validation occurs for such
columns, ALTER TABLE supports
WITHOUT VALIDATION and WITH
VALIDATION clauses:
With WITHOUT VALIDATION (the default
if neither clause is specified), an in-place operation
is performed (if possible), data integrity is not
checked, and the statement finishes more quickly.
However, later reads from the table might report
warnings or errors for the column if values are out of
range.
With WITH VALIDATION, ALTER
TABLE copies the table. If an out-of-range or
any other error occurs, the statement fails. Because a
table copy is performed, the statement takes longer.
WITHOUT VALIDATION and WITH
VALIDATION are permitted only with ADD
COLUMN, CHANGE COLUMN, and
MODIFY COLUMN operations. Otherwise, an
ER_WRONG_USAGE error occurs.
If expression evaluation causes truncation or provides
incorrect input to a function, the
ALTER TABLE statement
terminates with an error and the DDL operation is rejected.
An ALTER TABLE statement that
changes the default value of a column
col_name may also change the
value of a generated column expression that refers to the
column using DEFAULT(col_name).
For this reason, ALTER TABLE
operations that change the definition of a column cause a
table rebuild if any generated column expression uses
DEFAULT().
Begin with a table t1 created as shown here:
CREATE TABLE t1 (a INTEGER, b CHAR(10));
To rename the table from t1 to
t2:
ALTER TABLE t1 RENAME t2;
To change column a from
INTEGER to TINYINT NOT
NULL (leaving the name the same), and to change column
b from CHAR(10) to
CHAR(20) as well as renaming it from
b to c:
ALTER TABLE t2 MODIFY a TINYINT NOT NULL, CHANGE b c CHAR(20);
To add a new TIMESTAMP column
named d:
ALTER TABLE t2 ADD d TIMESTAMP;
To add an index on column d and a
UNIQUE index on column a:
ALTER TABLE t2 ADD INDEX (d), ADD UNIQUE (a);
To remove column c:
ALTER TABLE t2 DROP COLUMN c;
To add a new AUTO_INCREMENT integer column
named c:
ALTER TABLE t2 ADD c INT UNSIGNED NOT NULL AUTO_INCREMENT, ADD PRIMARY KEY (c);
We indexed c (as a PRIMARY
KEY) because AUTO_INCREMENT columns
must be indexed, and we declared c as
NOT NULL because primary key columns cannot
be NULL.
For NDB tables, it is also possible
to change the storage type used for a table or column. For
example, consider an NDB table
created as shown here:
mysql> CREATE TABLE t1 (c1 INT) TABLESPACE ts_1 ENGINE NDB;
Query OK, 0 rows affected (1.27 sec)
To convert this table to disk-based storage, you can use the
following ALTER TABLE statement:
mysql> ALTER TABLE t1 TABLESPACE ts_1 STORAGE DISK;
Query OK, 0 rows affected (2.99 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> SHOW CREATE TABLE t1\G
*************************** 1. row ***************************
       Table: t1
Create Table: CREATE TABLE `t1` (
  `c1` int(11) DEFAULT NULL
) /*!50100 TABLESPACE ts_1 STORAGE DISK */ ENGINE=ndbcluster DEFAULT CHARSET=latin1
1 row in set (0.01 sec)
The tablespace need not have been referenced when the
table was originally created; however, it must be
referenced by the ALTER TABLE statement:
mysql> CREATE TABLE t2 (c1 INT) ts_1 ENGINE NDB;
Query OK, 0 rows affected (1.00 sec)

mysql> ALTER TABLE t2 STORAGE DISK;
ERROR 1005 (HY000): Can't create table 'c.#sql-1750_3' (errno: 140)

mysql> ALTER TABLE t2 TABLESPACE ts_1 STORAGE DISK;
Query OK, 0 rows affected (3.42 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> SHOW CREATE TABLE t2\G
*************************** 1. row ***************************
       Table: t2
Create Table: CREATE TABLE `t2` (
  `c1` int(11) DEFAULT NULL
) /*!50100 TABLESPACE ts_1 STORAGE DISK */ ENGINE=ndbcluster DEFAULT CHARSET=latin1
1 row in set (0.01 sec)
To change the storage type of an individual column, you can use
ALTER TABLE ... MODIFY [COLUMN]. For example,
suppose you create an NDB Cluster Disk Data table with two
columns, using this CREATE TABLE
statement:
mysql> CREATE TABLE t3 (c1 INT, c2 INT)
    ->     TABLESPACE ts_1 STORAGE DISK ENGINE NDB;
Query OK, 0 rows affected (1.34 sec)
To change column c2 from disk-based to
in-memory storage, include a STORAGE MEMORY clause in the column
definition used by the ALTER TABLE statement, as shown here:
mysql> ALTER TABLE t3 MODIFY c2 INT STORAGE MEMORY;
Query OK, 0 rows affected (3.14 sec)
Records: 0 Duplicates: 0 Warnings: 0
You can make an in-memory column into a disk-based column by
using STORAGE DISK in a similar fashion.
Column c1 uses disk-based storage, since this
is the default for the table (determined by the table-level
STORAGE DISK clause in the
CREATE TABLE statement). However,
column c2 uses in-memory storage, as can be
seen here in the output of SHOW CREATE
TABLE:
mysql> SHOW CREATE TABLE t3\G
*************************** 1. row ***************************
Table: t3
Create Table: CREATE TABLE `t3` (
`c1` int(11) DEFAULT NULL,
`c2` int(11) /*!50120 STORAGE MEMORY */ DEFAULT NULL
) /*!50100 TABLESPACE ts_1 STORAGE DISK */ ENGINE=ndbcluster DEFAULT CHARSET=latin1
1 row in set (0.02 sec)
When you add an AUTO_INCREMENT column, column
values are filled in with sequence numbers automatically. For
MyISAM tables, you can set the first sequence
number by executing SET
INSERT_ID=value before
ALTER TABLE or by using the
AUTO_INCREMENT=value
table option.
With MyISAM tables, if you do not change the
AUTO_INCREMENT column, the sequence number is
not affected. If you drop an AUTO_INCREMENT
column and then add another AUTO_INCREMENT
column, the numbers are resequenced beginning with 1.
When replication is used, adding an
AUTO_INCREMENT column to a table might not
produce the same ordering of the rows on the slave and the
master. This occurs because the order in which the rows are
numbered depends on the specific storage engine used for the
table and the order in which the rows were inserted. If it is
important to have the same order on the master and slave, the
rows must be ordered before assigning an
AUTO_INCREMENT number. Assuming that you want
to add an AUTO_INCREMENT column to the table
t1, the following statements produce a new
table t2 identical to t1
but with an AUTO_INCREMENT column:
CREATE TABLE t2 (id INT AUTO_INCREMENT PRIMARY KEY) SELECT * FROM t1 ORDER BY col1, col2;
This assumes that the table t1 has columns
col1 and col2.
This set of statements will also produce a new table
t2 identical to t1, with
the addition of an AUTO_INCREMENT column:
CREATE TABLE t2 LIKE t1;
ALTER TABLE t2 ADD id INT AUTO_INCREMENT PRIMARY KEY;
INSERT INTO t2 SELECT * FROM t1 ORDER BY col1, col2;
To guarantee the same ordering on both master and slave,
all columns of t1 must
be referenced in the ORDER BY clause.
Regardless of the method used to create and populate the copy
having the AUTO_INCREMENT column, the final
step is to drop the original table and then rename the copy:
DROP TABLE t1;
ALTER TABLE t2 RENAME t1;
ALTER [UNDO] TABLESPACE tablespace_name
    NDB only:
      {ADD | DROP} DATAFILE 'file_name'
      [INITIAL_SIZE [=] size]
      [WAIT]
    InnoDB and NDB:
      [RENAME TO tablespace_name]
    InnoDB only:
      [SET {ACTIVE | INACTIVE}]
      [ENCRYPTION [=] {'Y' | 'N'}]
    InnoDB and NDB:
      [ENGINE [=] engine_name]
This statement is used with NDB and
InnoDB tablespaces. It can be used to add a new
data file to, or to drop a data file from an
NDB tablespace. It can also be used to rename
an NDB Cluster Disk Data tablespace, rename an
InnoDB general tablespace, encrypt an
InnoDB general tablespace, or mark an
InnoDB undo tablespace as active or inactive.
The UNDO keyword, introduced in MySQL 8.0.14,
is used with the SET {ACTIVE | INACTIVE} clause
to mark an InnoDB undo tablespace as active or
inactive. For more information, see
Section 15.6.3.4, “Undo Tablespaces”.
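For example, an undo tablespace (the name is illustrative) can be marked inactive and later active again as follows:
ALTER UNDO TABLESPACE undo_003 SET INACTIVE;
ALTER UNDO TABLESPACE undo_003 SET ACTIVE;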
The ADD DATAFILE variant enables you to specify
an initial size for an NDB Disk Data tablespace
using an INITIAL_SIZE clause, where
size is measured in bytes; the default
value is 134217728 (128 MB). You may optionally follow
size with a one-letter abbreviation for
an order of magnitude, similar to those used in
my.cnf. Generally, this is one of the letters
M (megabytes) or G
(gigabytes).
On 32-bit systems, the maximum supported value for
INITIAL_SIZE is 4294967296 (4 GB). (Bug #29186)
INITIAL_SIZE is rounded, explicitly, as for
CREATE TABLESPACE.
Once a data file has been created, its size cannot be changed;
however, you can add more data files to an NDB tablespace using
additional ALTER TABLESPACE ... ADD DATAFILE
statements.
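For example, the following statement adds a data file to a hypothetical NDB tablespace ts_1 (the file name is illustrative):
ALTER TABLESPACE ts_1 ADD DATAFILE 'ts_1_data_2.dat' INITIAL_SIZE = 256M ENGINE NDB;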
When ALTER TABLESPACE ... ADD DATAFILE is used
with ENGINE = NDB, a data file is created on
each Cluster data node, but only one row is generated in the
INFORMATION_SCHEMA.FILES table. See
the description of this table, as well as
Section 22.5.10.1, “NDB Cluster Disk Data Objects”, for more
information. ADD DATAFILE is not supported with
InnoDB tablespaces.
Using DROP DATAFILE with
ALTER TABLESPACE drops the data
file 'file_name' from an NDB
tablespace. You cannot drop a data file from a tablespace which is
in use by any table; in other words, the data file must be empty
(no extents used). See
Section 22.5.10.1, “NDB Cluster Disk Data Objects”. In addition,
any data file to be dropped must previously have been added to the
tablespace with CREATE TABLESPACE
or ALTER TABLESPACE. DROP
DATAFILE is not supported with InnoDB
tablespaces.
WAIT is parsed but otherwise ignored. It is
intended for future expansion.
The ENGINE clause, which specifies the storage
engine used by the tablespace, is deprecated and will be removed
in a future release. The tablespace storage engine is known by the
data dictionary, making the ENGINE clause
obsolete. If the storage engine is specified, it must match the
tablespace storage engine defined in the data dictionary. The only
values for engine_name compatible with
NDB tablespaces are
NDB and NDBCLUSTER.
RENAME TO operations are implicitly performed
in autocommit mode, regardless of
the autocommit setting.
A RENAME TO operation cannot be performed while
LOCK TABLES or
FLUSH TABLES WITH READ
LOCK is in effect for tables that reside in the
tablespace.
Exclusive metadata locks are taken on tables that reside in a general tablespace while the tablespace is renamed, which prevents concurrent DDL. Concurrent DML is supported.
The CREATE TABLESPACE privilege is
required to rename an InnoDB general
tablespace.
The ENCRYPTION clause enables or disables
page-level data encryption for an InnoDB
general tablespace or the mysql system
tablespace. Encryption support for general tablespaces was
introduced in MySQL 8.0.13. Encryption support for the
mysql system tablespace was introduced in MySQL
8.0.16.
A keyring plugin must be installed and configured before encryption can be enabled.
As of MySQL 8.0.16, if the
table_encryption_privilege_check
variable is enabled, the
TABLE_ENCRYPTION_ADMIN privilege is
required to alter a general tablespace with an
ENCRYPTION clause setting that differs from the
default_table_encryption setting.
Enabling encryption for a general tablespace fails if any table in
the tablespace belongs to a schema defined with
DEFAULT
ENCRYPTION='N'. Similarly, disabling encryption fails if
any table in the general tablespace belongs to a schema defined
with DEFAULT
ENCRYPTION='Y'. The
DEFAULT
ENCRYPTION schema option was introduced in MySQL 8.0.16.
If an ALTER TABLESPACE statement
executed on a general tablespace does not include an
ENCRYPTION clause, the tablespace retains its
current encryption status, regardless of the
default_table_encryption setting.
When a general tablespace or the mysql system
tablespace is encrypted, all tables residing in the tablespace are
encrypted. Likewise, a table created in an encrypted tablespace is
encrypted.
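For example, encryption for a hypothetical general tablespace ts1 can be enabled like this:
ALTER TABLESPACE ts1 ENCRYPTION = 'Y';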
The INPLACE algorithm is used when altering the
ENCRYPTION attribute of a general tablespace or
the mysql system tablespace. The
INPLACE algorithm permits concurrent DML on
tables that reside in the tablespace. Concurrent DDL is blocked.
For more information, see Section 15.13, “InnoDB Data-at-Rest Encryption”.
ALTER
[ALGORITHM = {UNDEFINED | MERGE | TEMPTABLE}]
[DEFINER = user]
[SQL SECURITY { DEFINER | INVOKER }]
VIEW view_name [(column_list)]
AS select_statement
[WITH [CASCADED | LOCAL] CHECK OPTION]
This statement changes the definition of a view, which must exist.
The syntax is similar to that for CREATE
VIEW (see Section 13.1.23, “CREATE VIEW Statement”). This statement
requires the CREATE VIEW and
DROP privileges for the view, and
some privilege for each column referred to in the
SELECT statement.
ALTER VIEW is permitted only to the
definer or users with the
SET_USER_ID privilege (or the
deprecated SUPER privilege).
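For example, assuming an existing view v1 defined over a table t1 (illustrative names), the view definition can be replaced as follows:
ALTER VIEW v1 AS SELECT c1, c2 FROM t1 WHERE c1 > 0;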
CREATE {DATABASE | SCHEMA} [IF NOT EXISTS] db_name
[create_option] ...
create_option: {
[DEFAULT] CHARACTER SET [=] charset_name
| [DEFAULT] COLLATE [=] collation_name
| [DEFAULT] ENCRYPTION [=] {'Y' | 'N'}
}
CREATE DATABASE creates a database
with the given name. To use this statement, you need the
CREATE privilege for the database.
CREATE
SCHEMA is a synonym for CREATE
DATABASE.
An error occurs if the database exists and you did not specify
IF NOT EXISTS.
CREATE DATABASE is not permitted
within a session that has an active LOCK
TABLES statement.
Each create_option specifies a database
characteristic. Database characteristics are stored in the data
dictionary.
The CHARACTER SET option specifies the
default database character set. The COLLATE
option specifies the default database collation. For
information about character set and collation names, see
Chapter 10, Character Sets, Collations, Unicode.
To see the available character sets and collations, use the
the SHOW CHARACTER SET and
SHOW COLLATION statements,
respectively. See Section 13.7.7.3, “SHOW CHARACTER SET Statement”, and
Section 13.7.7.4, “SHOW COLLATION Statement”.
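For example, the following statement creates a database with an explicit default character set and collation (the database name is illustrative):
CREATE DATABASE IF NOT EXISTS mydb
  DEFAULT CHARACTER SET utf8mb4
  DEFAULT COLLATE utf8mb4_0900_ai_ci;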
The ENCRYPTION option, introduced in MySQL
8.0.16, defines the default database encryption, which is
inherited by tables created in the database. The permitted
values are 'Y' (encryption enabled) and
'N' (encryption disabled). If the
ENCRYPTION option is not specified, the
value of the
default_table_encryption
system variable defines the default database encryption. If
the
table_encryption_privilege_check
system variable is enabled, the
TABLE_ENCRYPTION_ADMIN
privilege is required to specify a default encryption setting
that differs from the
default_table_encryption
setting. For more information, see
Defining an Encryption Default for Schemas and General Tablespaces.
A database in MySQL is implemented as a directory containing files
that correspond to tables in the database. Because there are no
tables in a database when it is initially created, the
CREATE DATABASE statement creates
only a directory under the MySQL data directory. Rules for
permissible database names are given in
Section 9.2, “Schema Object Names”. If a database name contains special
characters, the name for the database directory contains encoded
versions of those characters as described in
Section 9.2.4, “Mapping of Identifiers to File Names”.
Creating a database directory by manually creating a directory under the data directory (for example, with mkdir) is unsupported in MySQL 8.0.
When you create a database, let the server manage the directory and the files in it. Manipulating database directories and files directly can cause inconsistencies and unexpected results.
MySQL has no limit on the number of databases. The underlying file system may have a limit on the number of directories.
You can also use the mysqladmin program to create databases. See Section 4.5.2, “mysqladmin — Client for Administering a MySQL Server”.
CREATE
[DEFINER = user]
EVENT
[IF NOT EXISTS]
event_name
ON SCHEDULE schedule
[ON COMPLETION [NOT] PRESERVE]
[ENABLE | DISABLE | DISABLE ON SLAVE]
[COMMENT 'string']
DO event_body;
schedule: {
AT timestamp [+ INTERVAL interval] ...
| EVERY interval
[STARTS timestamp [+ INTERVAL interval] ...]
[ENDS timestamp [+ INTERVAL interval] ...]
}
interval:
quantity {YEAR | QUARTER | MONTH | DAY | HOUR | MINUTE |
WEEK | SECOND | YEAR_MONTH | DAY_HOUR | DAY_MINUTE |
DAY_SECOND | HOUR_MINUTE | HOUR_SECOND | MINUTE_SECOND}
This statement creates and schedules a new event. The event will not run unless the Event Scheduler is enabled. For information about checking Event Scheduler status and enabling it if necessary, see Section 24.4.2, “Event Scheduler Configuration”.
CREATE EVENT requires the
EVENT privilege for the schema in
which the event is to be created. If the
DEFINER clause is present, the privileges
required depend on the user value, as
discussed in Section 24.6, “Stored Object Access Control”.
The minimum requirements for a valid CREATE
EVENT statement are as follows:
The keywords CREATE EVENT plus
an event name, which uniquely identifies the event in a
database schema.
An ON SCHEDULE clause, which determines
when and how often the event executes.
A DO clause, which contains the
SQL statement to be executed by an event.
This is an example of a minimal CREATE
EVENT statement:
CREATE EVENT myevent
ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 1 HOUR
DO
UPDATE myschema.mytable SET mycol = mycol + 1;
The previous statement creates an event named
myevent. This event executes once—one
hour following its creation—by running an SQL statement that
increments the value of the myschema.mytable
table's mycol column by 1.
The event_name must be a valid MySQL
identifier with a maximum length of 64 characters. Event names are
not case-sensitive, so you cannot have two events named
myevent and MyEvent in the
same schema. In general, the rules governing event names are the
same as those for names of stored routines. See
Section 9.2, “Schema Object Names”.
An event is associated with a schema. If no schema is indicated as
part of event_name, the default
(current) schema is assumed. To create an event in a specific
schema, qualify the event name with a schema using
schema_name.event_name syntax.
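For example, this sketch (reusing the placeholder names from the earlier example) creates the event in the myschema schema regardless of the current default schema:
CREATE EVENT myschema.myevent
    ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 1 HOUR
    DO
      UPDATE myschema.mytable SET mycol = mycol + 1;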
The DEFINER clause specifies the MySQL account
to be used when checking access privileges at event execution
time. If the DEFINER clause is present, the
user value should be a MySQL account
specified as 'user_name'@'host_name',
CURRENT_USER, or
CURRENT_USER(). The permitted
user values depend on the privileges
you hold, as discussed in
Section 24.6, “Stored Object Access Control”. Also see that section
for additional information about event security.
If the DEFINER clause is omitted, the default
definer is the user who executes the CREATE
EVENT statement. This is the same as specifying
DEFINER = CURRENT_USER explicitly.
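The following sketch names an explicit definer; the account 'jon'@'ghidora' and the table are placeholders, and the statement succeeds only if that account exists and you are permitted to name it as a definer:
CREATE DEFINER = 'jon'@'ghidora' EVENT e_daily_reset
    ON SCHEDULE EVERY 1 DAY
    DO
      UPDATE myschema.mytable SET mycol = 0;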
Within an event body, the
CURRENT_USER function returns the
account used to check privileges at event execution time, which is
the DEFINER user. For information about user
auditing within events, see
Section 6.2.22, “SQL-Based Account Activity Auditing”.
IF NOT EXISTS has the same meaning for
CREATE EVENT as for
CREATE TABLE: If an event named
event_name already exists in the same
schema, no action is taken, and no error results. (However, a
warning is generated in such cases.)
The ON SCHEDULE clause determines when, how
often, and for how long the event_body
defined for the event repeats. This clause takes one of two forms:
AT timestamp is
used for a one-time event. It specifies that the event
executes one time only at the date and time given by
timestamp, which must include both
the date and time, or must be an expression that resolves to a
datetime value. You may use a value of either the
DATETIME or
TIMESTAMP type for this
purpose. If the date is in the past, a warning occurs, as
shown here:
mysql> SELECT NOW();
+---------------------+
| NOW()               |
+---------------------+
| 2006-02-10 23:59:01 |
+---------------------+
1 row in set (0.04 sec)

mysql> CREATE EVENT e_totals
    ->     ON SCHEDULE AT '2006-02-10 23:59:00'
    ->     DO INSERT INTO test.totals VALUES (NOW());
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> SHOW WARNINGS\G
*************************** 1. row ***************************
  Level: Note
   Code: 1588
Message: Event execution time is in the past and ON COMPLETION NOT
PRESERVE is set. The event was dropped immediately after creation.
CREATE EVENT statements which
are themselves invalid—for whatever reason—fail
with an error.
You may use CURRENT_TIMESTAMP
to specify the current date and time. In such a case, the
event acts as soon as it is created.
To create an event which occurs at some point in the future
relative to the current date and time—such as that
expressed by the phrase “three weeks from
now”—you can use the optional clause +
INTERVAL interval. The
interval portion consists of two
parts, a quantity and a unit of time, and follows the syntax
rules described in Temporal Intervals,
except that you cannot use any unit keywords involving
microseconds when defining an event. With some interval types,
complex time units may be used. For example, “two
minutes and ten seconds” can be expressed as +
INTERVAL '2:10' MINUTE_SECOND.
You can also combine intervals. For example, AT
CURRENT_TIMESTAMP + INTERVAL 3 WEEK + INTERVAL 2 DAY
is equivalent to “three weeks and two days from
now”. Each portion of such a clause must begin with
+ INTERVAL.
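Using the same placeholder table as earlier examples, the following sketch fires once, three weeks and two days after it is created:
CREATE EVENT e_three_weeks_two_days
    ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 3 WEEK + INTERVAL 2 DAY
    DO
      UPDATE myschema.mytable SET mycol = mycol + 1;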
To repeat actions at a regular interval, use an
EVERY clause. The EVERY
keyword is followed by an interval
as described in the previous discussion of the
AT keyword. (+ INTERVAL
is not used with
EVERY.) For example, EVERY 6
WEEK means “every six weeks”.
Although + INTERVAL clauses are not
permitted in an EVERY clause, you can use
the same complex time units permitted in a +
INTERVAL.
An EVERY clause may contain an optional
STARTS clause. STARTS is
followed by a timestamp value that
indicates when the action should begin repeating, and may also
use + INTERVAL interval
to specify an amount of time “from now”. For example,
EVERY 3 MONTH STARTS CURRENT_TIMESTAMP + INTERVAL 1
WEEK means “every three months, beginning one
week from now”. Similarly, you can express “every
two weeks, beginning six hours and fifteen minutes from
now” as EVERY 2 WEEK STARTS CURRENT_TIMESTAMP
+ INTERVAL '6:15' HOUR_MINUTE. Not specifying
STARTS is the same as using STARTS
CURRENT_TIMESTAMP—that is, the action
specified for the event begins repeating immediately upon
creation of the event.
An EVERY clause may contain an optional
ENDS clause. The ENDS
keyword is followed by a timestamp
value that tells MySQL when the event should stop repeating.
You may also use + INTERVAL
interval with
ENDS; for instance, EVERY 12 HOUR
STARTS CURRENT_TIMESTAMP + INTERVAL 30 MINUTE ENDS
CURRENT_TIMESTAMP + INTERVAL 4 WEEK is equivalent to
“every twelve hours, beginning thirty minutes from now,
and ending four weeks from now”. Not using
ENDS means that the event continues
executing indefinitely.
ENDS supports the same syntax for complex
time units as STARTS does.
You may use STARTS,
ENDS, both, or neither in an
EVERY clause.
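Putting these clauses together, the following sketch (event and table names are placeholders) repeats every twelve hours, starting thirty minutes after creation and stopping four weeks later, as described above:
CREATE EVENT e_every_twelve_hours
    ON SCHEDULE
      EVERY 12 HOUR
      STARTS CURRENT_TIMESTAMP + INTERVAL 30 MINUTE
      ENDS   CURRENT_TIMESTAMP + INTERVAL 4 WEEK
    DO
      UPDATE myschema.mytable SET mycol = mycol + 1;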
If a repeating event does not terminate within its scheduling
interval, the result may be multiple instances of the event
executing simultaneously. If this is undesirable, you should
institute a mechanism to prevent simultaneous instances. For
example, you could use the
GET_LOCK() function, or row or
table locking.
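One way to serialize instances is sketched below using GET_LOCK(); the lock name 'e_long_task_lock' and the table are placeholders. The body runs only when the lock can be acquired immediately, so an instance that is still running causes the next one to skip its work:
delimiter |
CREATE EVENT e_long_task
    ON SCHEDULE EVERY 1 MINUTE
    DO
      BEGIN
        -- GET_LOCK(..., 0) returns 1 only if no other instance holds the lock
        IF GET_LOCK('e_long_task_lock', 0) = 1 THEN
          UPDATE myschema.mytable SET mycol = mycol + 1;
          DO RELEASE_LOCK('e_long_task_lock');
        END IF;
      END |
delimiter ;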
The ON SCHEDULE clause may use expressions
involving built-in MySQL functions and user variables to obtain
any of the timestamp or
interval values which it contains. You
may not use stored functions or user-defined functions in such
expressions, nor may you use any table references; however, you
may use SELECT FROM DUAL. This is true for both
CREATE EVENT and
ALTER EVENT statements. References
to stored functions, user-defined functions, and tables in such
cases are specifically not permitted, and fail with an error (see
Bug #22830).
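For example, this sketch computes the execution time in a user variable first; the variable name @next_run is arbitrary:
SET @next_run = CURRENT_TIMESTAMP + INTERVAL 6 HOUR;
CREATE EVENT e_from_variable
    ON SCHEDULE AT @next_run
    DO
      UPDATE myschema.mytable SET mycol = mycol + 1;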
Times in the ON SCHEDULE clause are interpreted
using the current session
time_zone value. This becomes the
event time zone; that is, the time zone that is used for event
scheduling and is in effect within the event as it executes. These
times are converted to UTC and stored along with the event time
zone internally. This enables event execution to proceed as
defined regardless of any subsequent changes to the server time
zone or daylight saving time effects. For additional information
about representation of event times, see
Section 24.4.4, “Event Metadata”. See also
Section 13.7.7.18, “SHOW EVENTS Statement”, and Section 25.13, “The INFORMATION_SCHEMA EVENTS Table”.
Normally, once an event has expired, it is immediately dropped.
You can override this behavior by specifying ON
COMPLETION PRESERVE. Using ON COMPLETION NOT
PRESERVE merely makes the default nonpersistent behavior
explicit.
You can create an event but prevent it from being active using the
DISABLE keyword. Alternatively, you can use
ENABLE to make explicit the default status,
which is active. This is most useful in conjunction with
ALTER EVENT (see
Section 13.1.3, “ALTER EVENT Statement”).
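The following sketch combines both options: the event is created in a disabled state and is kept in the data dictionary after it would otherwise expire (event and table names are placeholders):
CREATE EVENT e_disabled_preserved
    ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 1 DAY
    ON COMPLETION PRESERVE
    DISABLE
    DO
      UPDATE myschema.mytable SET mycol = mycol + 1;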
A third value may also appear in place of
ENABLE or DISABLE;
DISABLE ON SLAVE is set for the status of an
event on a replication slave to indicate that the event was
created on the master and replicated to the slave, but is not
executed on the slave. See
Section 17.5.1.16, “Replication of Invoked Features”.
You may supply a comment for an event using a
COMMENT clause.
comment may be any string of up to 64
characters that you wish to use for describing the event. The
comment text, being a string literal, must be surrounded by
quotation marks.
The DO clause specifies an action
carried out by the event, and consists of an SQL statement. Nearly any
valid MySQL statement that can be used in a stored routine can
also be used as the action statement for a scheduled event. (See
Section 24.8, “Restrictions on Stored Programs”.) For example, the
following event e_hourly deletes all rows from
the sessions table once per hour, where this
table is part of the site_activity schema:
CREATE EVENT e_hourly
ON SCHEDULE
EVERY 1 HOUR
COMMENT 'Clears out sessions table each hour.'
DO
DELETE FROM site_activity.sessions;
MySQL stores the sql_mode system
variable setting in effect when an event is created or altered,
and always executes the event with this setting in force,
regardless of the current server SQL mode when the event
begins executing.
A CREATE EVENT statement that
contains an ALTER EVENT statement
in its DO clause appears to
succeed; however, when the server attempts to execute the
resulting scheduled event, the execution fails with an error.
Statements such as SELECT or
SHOW that merely return a result
set have no effect when used in an event; the output from these
is not sent to the MySQL Monitor, nor is it stored anywhere.
However, you can use statements such as
SELECT ...
INTO and
INSERT INTO ...
SELECT that store a result. (See the next example in
this section for an instance of the latter.)
The schema to which an event belongs is the default schema for
table references in the DO clause.
Any references to tables in other schemas must be qualified with
the proper schema name.
As with stored routines, you can use compound-statement syntax in
the DO clause by using the
BEGIN and END keywords, as
shown here:
delimiter |
CREATE EVENT e_daily
ON SCHEDULE
EVERY 1 DAY
COMMENT 'Saves total number of sessions then clears the table each day'
DO
BEGIN
INSERT INTO site_activity.totals (time, total)
SELECT CURRENT_TIMESTAMP, COUNT(*)
FROM site_activity.sessions;
DELETE FROM site_activity.sessions;
END |
delimiter ;
This example uses the delimiter command to
change the statement delimiter. See
Section 24.1, “Defining Stored Programs”.
More complex compound statements, such as those used in stored routines, are possible in an event. This example uses local variables, an error handler, and a flow control construct:
delimiter |
CREATE EVENT e
ON SCHEDULE
EVERY 5 SECOND
DO
BEGIN
DECLARE v INTEGER;
DECLARE CONTINUE HANDLER FOR SQLEXCEPTION BEGIN END;
SET v = 0;
WHILE v < 5 DO
INSERT INTO t1 VALUES (0);
UPDATE t2 SET s1 = s1 + 1;
SET v = v + 1;
END WHILE;
END |
delimiter ;

There is no way to pass parameters directly to or from events; however, it is possible to invoke a stored routine with parameters within an event:
CREATE EVENT e_call_myproc
ON SCHEDULE
AT CURRENT_TIMESTAMP + INTERVAL 1 DAY
DO CALL myproc(5, 27);

If an event's definer has privileges sufficient to set global system variables (see Section 5.1.9.1, “System Variable Privileges”), the event can read and write global variables. As granting such privileges entails a potential for abuse, extreme care must be taken in doing so.
Generally, any statements that are valid in stored routines may be used for action statements executed by events. For more information about statements permissible within stored routines, see Section 24.2.1, “Stored Routine Syntax”. You can create an event as part of a stored routine, but an event cannot be created by another event.
The CREATE FUNCTION statement is
used to create stored functions and user-defined functions (UDFs):
For information about creating stored functions, see Section 13.1.17, “CREATE PROCEDURE and CREATE FUNCTION Statements”.
For information about creating user-defined functions, see Section 13.7.4.1, “CREATE FUNCTION Syntax for User-Defined Functions”.
CREATE [UNIQUE | FULLTEXT | SPATIAL] INDEX index_name
    [index_type]
    ON tbl_name (key_part,...)
    [index_option]
    [algorithm_option | lock_option] ...

key_part: {col_name [(length)] | (expr)} [ASC | DESC]

index_option: {
    KEY_BLOCK_SIZE [=] value
  | index_type
  | WITH PARSER parser_name
  | COMMENT 'string'
  | {VISIBLE | INVISIBLE}
}

index_type:
    USING {BTREE | HASH}

algorithm_option:
    ALGORITHM [=] {DEFAULT | INPLACE | COPY}

lock_option:
    LOCK [=] {DEFAULT | NONE | SHARED | EXCLUSIVE}
Normally, you create all indexes on a table at the time the table
itself is created with CREATE
TABLE. See Section 13.1.20, “CREATE TABLE Statement”. This
guideline is especially important for
InnoDB tables, where the primary key
determines the physical layout of rows in the data file.
CREATE INDEX enables you to add
indexes to existing tables.
CREATE INDEX is mapped to an
ALTER TABLE statement to create
indexes. See Section 13.1.9, “ALTER TABLE Statement”.
CREATE INDEX cannot be used to
create a PRIMARY KEY; use
ALTER TABLE instead. For more
information about indexes, see Section 8.3.1, “How MySQL Uses Indexes”.
InnoDB supports secondary indexes on
virtual columns. For more information, see
Section 13.1.20.9, “Secondary Indexes and Generated Columns”.
When the innodb_stats_persistent
setting is enabled, run the ANALYZE
TABLE statement for an
InnoDB table after creating an index
on that table.
Beginning with MySQL 8.0.17, the expr
for a key_part specification can take
the form (CAST json_expression
AS type ARRAY) to create a
multi-valued index on a JSON
column. See Multi-Valued Indexes.
An index specification of the form
(key_part1,
key_part2, ...) creates an
index with multiple key parts. Index key values are formed by
concatenating the values of the given key parts. For example,
(col1, col2, col3) specifies a multiple-column
index with index keys consisting of values from
col1, col2, and
col3.
A key_part specification can end with
ASC or DESC to specify
whether index values are stored in ascending or descending order.
The default is ascending if no order specifier is given.
ASC and DESC are not
permitted for HASH indexes.
ASC and DESC are also not
supported for multi-valued indexes. As of MySQL 8.0.12,
ASC and DESC are not
permitted for SPATIAL indexes.
The following sections describe different aspects of the
CREATE INDEX statement:
For string columns, indexes can be created that use only the
leading part of column values, using
col_name(length) syntax to
specify an index prefix length:
Prefixes can be specified for
CHAR,
VARCHAR,
BINARY, and
VARBINARY key parts.
Prefixes must be specified for
BLOB and
TEXT key parts. Additionally,
BLOB and
TEXT columns can be indexed
only for InnoDB,
MyISAM, and BLACKHOLE
tables.
Prefix limits are measured in bytes.
However, prefix lengths for index
specifications in CREATE
TABLE, ALTER TABLE,
and CREATE INDEX statements
are interpreted as number of characters for nonbinary string
types (CHAR,
VARCHAR,
TEXT) and number of bytes for
binary string types (BINARY,
VARBINARY,
BLOB). Take this into account
when specifying a prefix length for a nonbinary string
column that uses a multibyte character set.
Prefix support and lengths of prefixes (where supported) are
storage engine dependent. For example, a prefix can be up to
767 bytes long for InnoDB
tables that use the
REDUNDANT
or
COMPACT
row format. The prefix length limit is 3072 bytes for
InnoDB tables that use the
DYNAMIC
or
COMPRESSED
row format. For MyISAM tables,
the prefix length limit is 1000 bytes. The
NDB storage engine does not
support prefixes (see
Section 22.1.7.6, “Unsupported or Missing Features in NDB Cluster”).
If a specified index prefix exceeds the maximum column data type
size, CREATE INDEX handles the
index as follows:
For a nonunique index, either an error occurs (if strict SQL mode is enabled), or the index length is reduced to lie within the maximum column data type size and a warning is produced (if strict SQL mode is not enabled).
For a unique index, an error occurs regardless of SQL mode because reducing the index length might enable insertion of nonunique entries that do not meet the specified uniqueness requirement.
The statement shown here creates an index using the first 10
characters of the name column (assuming that
name has a nonbinary string type):
CREATE INDEX part_of_name ON customer (name(10));
If names in the column usually differ in the first 10
characters, lookups performed using this index should not be
much slower than using an index created from the entire
name column. Also, using column prefixes for
indexes can make the index file much smaller, which could save a
lot of disk space and might also speed up
INSERT operations.
A “normal” index indexes column values or prefixes
of column values. For example, in the following table, the index
entry for a given t1 row includes the full
col1 value and a prefix of the
col2 value consisting of its first 10
characters:
CREATE TABLE t1 ( col1 VARCHAR(10), col2 VARCHAR(20), INDEX (col1, col2(10)) );
MySQL 8.0.13 and higher supports functional key parts that index expression values rather than column or column prefix values. Use of functional key parts enables indexing of values not stored directly in the table. Examples:
CREATE TABLE t1 (col1 INT, col2 INT, INDEX func_index ((ABS(col1))));
CREATE INDEX idx1 ON t1 ((col1 + col2));
CREATE INDEX idx2 ON t1 ((col1 + col2), (col1 - col2), col1);
ALTER TABLE t1 ADD INDEX ((col1 * 40) DESC);
An index with multiple key parts can mix nonfunctional and functional key parts.
ASC and DESC are supported
for functional key parts.
Functional key parts must adhere to the following rules. An error occurs if a key part definition contains disallowed constructs.
In index definitions, enclose expressions within parentheses to distinguish them from columns or column prefixes. For example, this is permitted; the expressions are enclosed within parentheses:
INDEX ((col1 + col2), (col3 - col4))
This produces an error; the expressions are not enclosed within parentheses:
INDEX (col1 + col2, col3 - col4)
A functional key part cannot consist solely of a column name. For example, this is not permitted:
INDEX ((col1), (col2))
Instead, write the key parts as nonfunctional key parts, without parentheses:
INDEX (col1, col2)
A functional key part expression cannot refer to column
prefixes. For a workaround, see the discussion of
SUBSTRING() and
CAST() later in this section.
Functional key parts are not permitted in foreign key specifications.
For CREATE
TABLE ... LIKE, the destination table preserves
functional key parts from the original table.
Functional indexes are implemented as hidden virtual generated columns, which has these implications:
Each functional key part counts against the limit on total number of table columns; see Section 8.4.7, “Limits on Table Column Count and Row Size”.
Functional key parts inherit all restrictions that apply to generated columns. Examples:
Only functions permitted for generated columns are permitted for functional key parts.
Subqueries, parameters, variables, stored functions, and user-defined functions are not permitted.
For more information about applicable restrictions, see Section 13.1.20.8, “CREATE TABLE and Generated Columns”, and Section 13.1.9.2, “ALTER TABLE and Generated Columns”.
The virtual generated column itself requires no storage. The index itself takes up storage space as any other index.
UNIQUE is supported for indexes that include
functional key parts. However, primary keys cannot include
functional key parts. A primary key requires the generated
column to be stored, but functional key parts are implemented as
virtual generated columns, not stored generated columns.
SPATIAL and FULLTEXT
indexes cannot have functional key parts.
If a table contains no primary key, InnoDB
automatically promotes the first UNIQUE NOT
NULL index to the primary key. This is not supported
for UNIQUE NOT NULL indexes that have
functional key parts.
Nonfunctional indexes raise a warning if there are duplicate indexes. Indexes that contain functional key parts do not have this feature.
To remove a column that is referenced by a functional key part, the index must be removed first. Otherwise, an error occurs.
Although nonfunctional key parts support a prefix length
specification, this is not possible for functional key parts.
The solution is to use
SUBSTRING() (or
CAST(), as described later in
this section). For a functional key part containing the
SUBSTRING() function to be used
in a query, the WHERE clause must contain
SUBSTRING() with the same
arguments. In the following example, only the second
SELECT is able to use the index
because that is the only query in which the arguments to
SUBSTRING() match the index
specification:
CREATE TABLE tbl (
  col1 LONGTEXT,
  INDEX idx1 ((SUBSTRING(col1, 1, 10)))
);
SELECT * FROM tbl WHERE SUBSTRING(col1, 1, 9) = '123456789';
SELECT * FROM tbl WHERE SUBSTRING(col1, 1, 10) = '1234567890';
Functional key parts enable indexing of values that cannot be
indexed otherwise, such as JSON
values. However, this must be done correctly to achieve the
desired effect. For example, this syntax does not work:
CREATE TABLE employees ( data JSON, INDEX ((data->>'$.name')) );
The syntax fails because:
The
->>
operator translates into
JSON_UNQUOTE(JSON_EXTRACT(...)).
JSON_UNQUOTE() returns a
value with a data type of
LONGTEXT, and the hidden
generated column thus is assigned the same data type.
MySQL cannot index LONGTEXT
columns specified without a prefix length on the key part,
and prefix lengths are not permitted in functional key
parts.
To index the JSON column, you could try using
the CAST() function as follows:
CREATE TABLE employees ( data JSON, INDEX ((CAST(data->>'$.name' AS CHAR(30)))) );
The hidden generated column is assigned the
VARCHAR(30) data type, which can
be indexed. But this approach produces a new issue when trying
to use the index:
CAST() returns a string with
the collation utf8mb4_0900_ai_ci (the
server default collation).
JSON_UNQUOTE() returns a
string with the collation utf8mb4_bin
(hard coded).
As a result, there is a collation mismatch between the indexed
expression in the preceding table definition and the
WHERE clause expression in the following
query:
SELECT * FROM employees WHERE data->>'$.name' = 'James';
The index is not used because the expressions in the query and
the index differ. To support this kind of scenario for
functional key parts, the optimizer automatically strips
CAST() when looking for an index
to use, but only if the collation of the
indexed expression matches that of the query expression. For an
index with a functional key part to be used, either of the
following two solutions work (although they differ somewhat in
effect):
Solution 1. Assign the indexed expression the same collation
as JSON_UNQUOTE():
CREATE TABLE employees (
data JSON,
INDEX idx ((CAST(data->>"$.name" AS CHAR(30)) COLLATE utf8mb4_bin))
);
INSERT INTO employees VALUES
('{ "name": "james", "salary": 9000 }'),
('{ "name": "James", "salary": 10000 }'),
('{ "name": "Mary", "salary": 12000 }'),
('{ "name": "Peter", "salary": 8000 }');
SELECT * FROM employees WHERE data->>'$.name' = 'James';
The ->> operator is the same as
JSON_UNQUOTE(JSON_EXTRACT(...)), and
JSON_UNQUOTE() returns a string with
collation utf8mb4_bin. The comparison is
thus case sensitive, and only one row matches:
+------------------------------------+
| data |
+------------------------------------+
| {"name": "James", "salary": 10000} |
+------------------------------------+

Solution 2. Specify the full expression in the query:
CREATE TABLE employees (
data JSON,
INDEX idx ((CAST(data->>"$.name" AS CHAR(30))))
);
INSERT INTO employees VALUES
('{ "name": "james", "salary": 9000 }'),
('{ "name": "James", "salary": 10000 }'),
('{ "name": "Mary", "salary": 12000 }'),
('{ "name": "Peter", "salary": 8000 }');
SELECT * FROM employees WHERE CAST(data->>'$.name' AS CHAR(30)) = 'James';
CAST() returns a string with collation
utf8mb4_0900_ai_ci, so the comparison
is case-insensitive and two rows match:
+------------------------------------+
| data |
+------------------------------------+
| {"name": "james", "salary": 9000} |
| {"name": "James", "salary": 10000} |
+------------------------------------+
Be aware that although the optimizer supports automatically
stripping CAST() with indexed
generated columns, the following approach does not work because
it produces a different result with and without an index
(Bug#27337092):
mysql> CREATE TABLE employees (
    ->     data JSON,
    ->     generated_col VARCHAR(30) AS (CAST(data->>'$.name' AS CHAR(30)))
    -> );
Query OK, 0 rows affected, 1 warning (0.03 sec)

mysql> INSERT INTO employees (data)
    ->     VALUES ('{"name": "james"}'), ('{"name": "James"}');
Query OK, 2 rows affected, 1 warning (0.01 sec)
Records: 2  Duplicates: 0  Warnings: 1

mysql> SELECT * FROM employees WHERE data->>'$.name' = 'James';
+-------------------+---------------+
| data              | generated_col |
+-------------------+---------------+
| {"name": "James"} | James         |
+-------------------+---------------+
1 row in set (0.00 sec)

mysql> ALTER TABLE employees ADD INDEX idx (generated_col);
Query OK, 0 rows affected, 1 warning (0.03 sec)
Records: 0  Duplicates: 0  Warnings: 1

mysql> SELECT * FROM employees WHERE data->>'$.name' = 'James';
+-------------------+---------------+
| data              | generated_col |
+-------------------+---------------+
| {"name": "james"} | james         |
| {"name": "James"} | James         |
+-------------------+---------------+
2 rows in set (0.01 sec)
A UNIQUE index creates a constraint such that
all values in the index must be distinct. An error occurs if you
try to add a new row with a key value that matches an existing
row. If you specify a prefix value for a column in a
UNIQUE index, the column values must be
unique within the prefix length. A UNIQUE
index permits multiple NULL values for
columns that can contain NULL.
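As a sketch (table, column, and index names are placeholders), the following table enforces uniqueness on the first 10 characters of c1, while still allowing multiple NULL values in that column:
CREATE TABLE uniq_demo (c1 VARCHAR(100));
CREATE UNIQUE INDEX u_c1_prefix ON uniq_demo (c1(10));
-- Both rows are accepted; NULL does not conflict with NULL in a UNIQUE index
INSERT INTO uniq_demo VALUES (NULL), (NULL);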
If a table has a PRIMARY KEY or
UNIQUE NOT NULL index that consists of a
single column that has an integer type, you can use
_rowid to refer to the indexed column in
SELECT statements, as follows:
_rowid refers to the PRIMARY
KEY column if there is a PRIMARY
KEY consisting of a single integer column. If
there is a PRIMARY KEY but it does not
consist of a single integer column,
_rowid cannot be used.
Otherwise, _rowid refers to the column in
the first UNIQUE NOT NULL index if that
index consists of a single integer column. If the first
UNIQUE NOT NULL index does not consist of
a single integer column, _rowid cannot be
used.
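A minimal sketch (table and column names are placeholders): because id is a single-column integer PRIMARY KEY, _rowid resolves to it in SELECT statements:
CREATE TABLE t_rowid (id INT PRIMARY KEY, name VARCHAR(20));
INSERT INTO t_rowid VALUES (1, 'a'), (2, 'b');
-- _rowid refers to the id column here
SELECT _rowid, name FROM t_rowid;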
FULLTEXT indexes are supported only for
InnoDB and
MyISAM tables and can include only
CHAR,
VARCHAR, and
TEXT columns. Indexing always
happens over the entire column; column prefix indexing is not
supported and any prefix length is ignored if specified. See
Section 12.9, “Full-Text Search Functions”, for details of operation.
As of MySQL 8.0.17, InnoDB supports
multi-valued indexes. A multi-valued index is a secondary index
defined on a column that stores an array of values. A
“normal” index has one index record for each data
record (1:1). A multi-valued index can have multiple index
records for a single data record (N:1). Multi-valued indexes are
intended for indexing JSON arrays. For
example, a multi-valued index defined on the array of zip codes
in the following JSON document creates an index record for each
zip code, with each index record referencing the same data
record.
{
"user":"Bob",
"user_id":31,
"zipcode":[94477,94536]
}
You can create a multi-valued index in a
CREATE TABLE,
ALTER TABLE, or
CREATE INDEX statement. This
requires using CAST(... AS ...
ARRAY) in the index definition, which casts same-typed
scalar values in a JSON array to an SQL data
type array. A virtual column is then generated transparently
with the values in the SQL data type array; finally, a
functional index (also referred to as a virtual index) is
created on the virtual column. It is the functional index
defined on the virtual column of values from the SQL data type
array that forms the multi-valued index.
The examples in the following list show the three different ways
in which a multi-valued index zips can be
created on an array $.zipcode on a
JSON column custinfo in a
table named customers. In each case, the JSON
array is cast to an SQL data type array of
UNSIGNED integer values.
CREATE TABLE only:
CREATE TABLE customers (
id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
modified DATETIME DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
custinfo JSON,
INDEX zips( (CAST(custinfo->'$.zip' AS UNSIGNED ARRAY)) )
);
CREATE TABLE plus ALTER
TABLE:
CREATE TABLE customers (
id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
modified DATETIME DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
custinfo JSON
);
ALTER TABLE customers ADD INDEX zips( (CAST(custinfo->'$.zip' AS UNSIGNED ARRAY)) );
CREATE TABLE plus CREATE
INDEX:
CREATE TABLE customers (
id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
modified DATETIME DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
custinfo JSON
);
CREATE INDEX zips ON customers ( (CAST(custinfo->'$.zip' AS UNSIGNED ARRAY)) );
A multi-valued index can also be defined as part of a composite
index. This example shows a composite index that includes two
single-valued parts (for the id and
modified columns), and one multi-valued part
(for the custinfo column):
CREATE TABLE customers (
id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
modified DATETIME DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
custinfo JSON
);
ALTER TABLE customers ADD INDEX comp(id, modified,
(CAST(custinfo->'$.zipcode' AS UNSIGNED ARRAY)) );
Only one multi-valued key part can be used in a composite index.
The multi-valued key part may be used in any order relative to
the other parts of the key. In other words, the ALTER
TABLE statement just shown could have used
comp(id, (CAST(custinfo->'$.zipcode' AS UNSIGNED
ARRAY), modified)) (or any other ordering) and still
have been valid.
The optimizer uses a multi-valued index to fetch records when
the following functions are specified in a
WHERE clause:
MEMBER OF()
JSON_CONTAINS()
JSON_OVERLAPS()
We can demonstrate this by creating and populating the
customers table using the following
CREATE TABLE and INSERT
statements:
mysql> CREATE TABLE customers (
    ->     id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    ->     modified DATETIME DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    ->     custinfo JSON
    ->     );
Query OK, 0 rows affected (0.51 sec)

mysql> INSERT INTO customers VALUES
    ->     (NULL, NOW(), '{"user":"Jack","user_id":37,"zipcode":[94582,94536]}'),
    ->     (NULL, NOW(), '{"user":"Jill","user_id":22,"zipcode":[94568,94507,94582]}'),
    ->     (NULL, NOW(), '{"user":"Bob","user_id":31,"zipcode":[94477,94507]}'),
    ->     (NULL, NOW(), '{"user":"Mary","user_id":72,"zipcode":[94536]}'),
    ->     (NULL, NOW(), '{"user":"Ted","user_id":56,"zipcode":[94507,94582]}');
Query OK, 5 rows affected (0.07 sec)
Records: 5  Duplicates: 0  Warnings: 0
First we execute three queries on the
customers table, one each using
MEMBER OF(),
JSON_CONTAINS(), and
JSON_OVERLAPS(), with the result from each
query shown here:
mysql> SELECT * FROM customers
    ->     WHERE 94507 MEMBER OF(custinfo->'$.zipcode');
+----+---------------------+-------------------------------------------------------------------+
| id | modified            | custinfo                                                           |
+----+---------------------+-------------------------------------------------------------------+
|  2 | 2019-06-29 22:23:12 | {"user": "Jill", "user_id": 22, "zipcode": [94568, 94507, 94582]} |
|  3 | 2019-06-29 22:23:12 | {"user": "Bob", "user_id": 31, "zipcode": [94477, 94507]}          |
|  5 | 2019-06-29 22:23:12 | {"user": "Ted", "user_id": 56, "zipcode": [94507, 94582]}          |
+----+---------------------+-------------------------------------------------------------------+
3 rows in set (0.00 sec)

mysql> SELECT * FROM customers
    ->     WHERE JSON_CONTAINS(custinfo->'$.zipcode', CAST('[94507,94582]' AS JSON));
+----+---------------------+-------------------------------------------------------------------+
| id | modified            | custinfo                                                           |
+----+---------------------+-------------------------------------------------------------------+
|  2 | 2019-06-29 22:23:12 | {"user": "Jill", "user_id": 22, "zipcode": [94568, 94507, 94582]} |
|  5 | 2019-06-29 22:23:12 | {"user": "Ted", "user_id": 56, "zipcode": [94507, 94582]}          |
+----+---------------------+-------------------------------------------------------------------+
2 rows in set (0.00 sec)

mysql> SELECT * FROM customers
    ->     WHERE JSON_OVERLAPS(custinfo->'$.zipcode', CAST('[94507,94582]' AS JSON));
+----+---------------------+-------------------------------------------------------------------+
| id | modified            | custinfo                                                           |
+----+---------------------+-------------------------------------------------------------------+
|  1 | 2019-06-29 22:23:12 | {"user": "Jack", "user_id": 37, "zipcode": [94582, 94536]}         |
|  2 | 2019-06-29 22:23:12 | {"user": "Jill", "user_id": 22, "zipcode": [94568, 94507, 94582]} |
|  3 | 2019-06-29 22:23:12 | {"user": "Bob", "user_id": 31, "zipcode": [94477, 94507]}          |
|  5 | 2019-06-29 22:23:12 | {"user": "Ted", "user_id": 56, "zipcode": [94507, 94582]}          |
+----+---------------------+-------------------------------------------------------------------+
4 rows in set (0.00 sec)
Next, we run EXPLAIN on each of
the previous three queries:
mysql> EXPLAIN SELECT * FROM customers
    ->     WHERE 94507 MEMBER OF(custinfo->'$.zipcode');
+----+-------------+-----------+------------+------+---------------+------+---------+------+------+----------+-------------+
| id | select_type | table     | partitions | type | possible_keys | key  | key_len | ref  | rows | filtered | Extra       |
+----+-------------+-----------+------------+------+---------------+------+---------+------+------+----------+-------------+
|  1 | SIMPLE      | customers | NULL       | ALL  | NULL          | NULL | NULL    | NULL |    5 |   100.00 | Using where |
+----+-------------+-----------+------------+------+---------------+------+---------+------+------+----------+-------------+
1 row in set, 1 warning (0.00 sec)

mysql> EXPLAIN SELECT * FROM customers
    ->     WHERE JSON_CONTAINS(custinfo->'$.zipcode', CAST('[94507,94582]' AS JSON));
+----+-------------+-----------+------------+------+---------------+------+---------+------+------+----------+-------------+
| id | select_type | table     | partitions | type | possible_keys | key  | key_len | ref  | rows | filtered | Extra       |
+----+-------------+-----------+------------+------+---------------+------+---------+------+------+----------+-------------+
|  1 | SIMPLE      | customers | NULL       | ALL  | NULL          | NULL | NULL    | NULL |    5 |   100.00 | Using where |
+----+-------------+-----------+------------+------+---------------+------+---------+------+------+----------+-------------+
1 row in set, 1 warning (0.00 sec)

mysql> EXPLAIN SELECT * FROM customers
    ->     WHERE JSON_OVERLAPS(custinfo->'$.zipcode', CAST('[94507,94582]' AS JSON));
+----+-------------+-----------+------------+------+---------------+------+---------+------+------+----------+-------------+
| id | select_type | table     | partitions | type | possible_keys | key  | key_len | ref  | rows | filtered | Extra       |
+----+-------------+-----------+------------+------+---------------+------+---------+------+------+----------+-------------+
|  1 | SIMPLE      | customers | NULL       | ALL  | NULL          | NULL | NULL    | NULL |    5 |   100.00 | Using where |
+----+-------------+-----------+------------+------+---------------+------+---------+------+------+----------+-------------+
1 row in set, 1 warning (0.01 sec)
None of the three queries just shown are able to use any keys.
To solve this problem, we can add a multi-valued index on the
zipcode array in the JSON
column (custinfo), like this:
mysql> ALTER TABLE customers
    ->     ADD INDEX zips( (CAST(custinfo->'$.zipcode' AS UNSIGNED ARRAY)) );
Query OK, 0 rows affected (0.47 sec)
Records: 0  Duplicates: 0  Warnings: 0
When we run the previous EXPLAIN statements
again, we can now observe that the queries can (and do) use the
index zips that was just created:
mysql> EXPLAIN SELECT * FROM customers
    ->     WHERE 94507 MEMBER OF(custinfo->'$.zipcode');
+----+-------------+-----------+------------+------+---------------+------+---------+-------+------+----------+-------------+
| id | select_type | table     | partitions | type | possible_keys | key  | key_len | ref   | rows | filtered | Extra       |
+----+-------------+-----------+------------+------+---------------+------+---------+-------+------+----------+-------------+
|  1 | SIMPLE      | customers | NULL       | ref  | zips          | zips | 9       | const |    1 |   100.00 | Using where |
+----+-------------+-----------+------------+------+---------------+------+---------+-------+------+----------+-------------+
1 row in set, 1 warning (0.00 sec)

mysql> EXPLAIN SELECT * FROM customers
    ->     WHERE JSON_CONTAINS(custinfo->'$.zipcode', CAST('[94507,94582]' AS JSON));
+----+-------------+-----------+------------+-------+---------------+------+---------+------+------+----------+-------------+
| id | select_type | table     | partitions | type  | possible_keys | key  | key_len | ref  | rows | filtered | Extra       |
+----+-------------+-----------+------------+-------+---------------+------+---------+------+------+----------+-------------+
|  1 | SIMPLE      | customers | NULL       | range | zips          | zips | 9       | NULL |    6 |   100.00 | Using where |
+----+-------------+-----------+------------+-------+---------------+------+---------+------+------+----------+-------------+
1 row in set, 1 warning (0.00 sec)

mysql> EXPLAIN SELECT * FROM customers
    ->     WHERE JSON_OVERLAPS(custinfo->'$.zipcode', CAST('[94507,94582]' AS JSON));
+----+-------------+-----------+------------+-------+---------------+------+---------+------+------+----------+-------------+
| id | select_type | table     | partitions | type  | possible_keys | key  | key_len | ref  | rows | filtered | Extra       |
+----+-------------+-----------+------------+-------+---------------+------+---------+------+------+----------+-------------+
|  1 | SIMPLE      | customers | NULL       | range | zips          | zips | 9       | NULL |    6 |   100.00 | Using where |
+----+-------------+-----------+------------+-------+---------------+------+---------+------+------+----------+-------------+
1 row in set, 1 warning (0.01 sec)
A multi-valued index can be defined as a unique key. If defined as a unique key, attempting to insert a value already present in the multi-valued index returns a duplicate key error. If duplicate values are already present, attempting to add a unique multi-valued index fails, as shown here:
mysql> ALTER TABLE customers DROP INDEX zips;
Query OK, 0 rows affected (0.55 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> ALTER TABLE customers
    ->     ADD UNIQUE INDEX zips((CAST(custinfo->'$.zipcode' AS UNSIGNED ARRAY)));
ERROR 1062 (23000): Duplicate entry '[94507, ' for key 'customers.zips'

mysql> ALTER TABLE customers
    ->     ADD INDEX zips((CAST(custinfo->'$.zipcode' AS UNSIGNED ARRAY)));
Query OK, 0 rows affected (0.36 sec)
Records: 0  Duplicates: 0  Warnings: 0
Multi-valued indexes have the additional characteristics listed here:
DML operations that affect multi-valued indexes are handled in the same way as DML operations that affect a normal index, with the only difference being that there may be more than one insert or update for a single clustered index record.
Nullability and multi-valued indexes:
If a multi-valued key part has an empty array, no entries are added to the index, and the data record is not accessible by an index scan.
If multi-valued key part generation returns a
NULL value, a single entry containing
NULL is added to the multi-valued
index. If the key part is defined as NOT
NULL, an error is reported.
If the typed array column is set to
NULL, the storage engine stores a
single record containing NULL that
points to the data record.
JSON null values are not permitted in
indexed arrays. If any returned value is
NULL, it is treated as a JSON null
and an Invalid JSON value error
is reported.
Because multi-valued indexes are virtual indexes on virtual columns, they must adhere to the same rules as secondary indexes on virtual generated columns.
Index records are not added for empty arrays.
Multi-valued indexes are subject to the limitations and restrictions listed here:
Only one multi-valued key part is permitted per multi-valued
index. However, the CAST(... AS ...
ARRAY) expression can refer to multiple arrays
within a JSON document, as shown here:
CAST(data->'$.arr[*][*]' AS UNSIGNED ARRAY)
In this case, all values matching the JSON expression are stored in the index as a single flat array.
An index with a multi-valued key part does not support
ordering and therefore cannot be used as a primary key. For
the same reason, a multi-valued index cannot be defined
using the ASC or DESC
keyword.
A multi-valued index cannot be a covering index.
The maximum number of values per record for a multi-valued index is determined by the amount of data that can be stored on a single undo log page, which is 65221 bytes (64K minus 315 bytes for overhead), which means that the maximum total length of key values is also 65221 bytes. The maximum number of keys depends on various factors, which prevents defining a specific limit. Tests have shown a multi-valued index to permit as many as 1604 integer keys per record, for example. When the limit is reached, an error similar to the following is reported: ERROR 3905 (HY000): Exceeded max number of values per record for multi-valued index 'idx' by 1 value(s).
The only type of expression that is permitted in a
multi-valued key part is a JSON
expression. The expression need not reference an existing
element in a JSON document inserted into the indexed column,
but must itself be syntactically valid.
Because index records for the same clustered index record are dispersed throughout a multi-valued index, a multi-valued index does not support range scans or index-only scans.
Multi-valued indexes are not permitted in foreign key specifications.
Index prefixes cannot be defined for multi-valued indexes.
Multi-valued indexes cannot be defined on data cast as
BINARY (see the description
of the CAST() function).
Online creation of a multi-valued index is not supported,
which means the operation uses
ALGORITHM=COPY. See
Performance and Space Requirements.
Character sets and collations other than the following two combinations of character set and collation are not supported for multi-valued indexes:
The binary character set with the
default binary collation
The utf8mb4 character set with the
utf8mb4_0900_as_cs collation.
As with other indexes on columns of
InnoDB tables, a multi-valued index
cannot be created with USING HASH;
attempting to do so results in a warning: This
storage engine does not support the HASH index algorithm,
storage engine default was used instead.
(USING BTREE is supported as usual.)
The MyISAM,
InnoDB,
NDB, and
ARCHIVE storage engines support
spatial columns such as POINT and
GEOMETRY.
(Section 11.4, “Spatial Data Types”, describes the spatial data
types.) However, support for spatial column indexing varies
among engines. Spatial and nonspatial indexes on spatial columns
are available according to the following rules.
Spatial indexes on spatial columns have these characteristics:
Available only for InnoDB and
MyISAM tables. Specifying
SPATIAL INDEX for other storage engines
results in an error.
As of MySQL 8.0.12, an index on a spatial column
must be a SPATIAL
index. The SPATIAL keyword is thus
optional but implicit for creating an index on a spatial
column.
Available for single spatial columns only. A spatial index cannot be created over multiple spatial columns.
Indexed columns must be NOT NULL.
Column prefix lengths are prohibited. The full width of each column is indexed.
Not permitted for a primary key or unique index.
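For example, the following sketch (table, column, and index names are placeholders) creates a spatial index on a NOT NULL spatial column of an InnoDB table:
CREATE TABLE geom_demo (g GEOMETRY NOT NULL SRID 4326) ENGINE=InnoDB;
CREATE SPATIAL INDEX g_idx ON geom_demo (g);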
Nonspatial indexes on spatial columns (created with
INDEX, UNIQUE, or
PRIMARY KEY) have these characteristics:
Permitted for any storage engine that supports spatial
columns except ARCHIVE.
Columns can be NULL unless the index is a
primary key.
The index type for a non-SPATIAL index
depends on the storage engine. Currently, B-tree is used.
Permitted for a column that can have NULL
values only for InnoDB,
MyISAM, and
MEMORY tables.
Following the key part list, index options can be given. An
index_option value can be any of the
following:
KEY_BLOCK_SIZE [=]
value
For MyISAM tables,
KEY_BLOCK_SIZE optionally specifies the
size in bytes to use for index key blocks. The value is
treated as a hint; a different size could be used if
necessary. A KEY_BLOCK_SIZE value
specified for an individual index definition overrides a
table-level KEY_BLOCK_SIZE value.
KEY_BLOCK_SIZE is not supported at the
index level for InnoDB tables.
See Section 13.1.20, “CREATE TABLE Statement”.
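As a sketch for a MyISAM table (names and the block size value are placeholders; the value is only a hint to the server):
CREATE TABLE kb_demo (c1 INT) ENGINE=MyISAM;
CREATE INDEX c1_idx ON kb_demo (c1) KEY_BLOCK_SIZE=1024;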
index_type
Some storage engines permit you to specify an index type when creating an index. For example:
CREATE TABLE lookup (id INT) ENGINE = MEMORY; CREATE INDEX id_index ON lookup (id) USING BTREE;
Table 13.1, “Index Types Per Storage Engine”
shows the permissible index type values supported by
different storage engines. Where multiple index types are
listed, the first one is the default when no index type
specifier is given. Storage engines not listed in the table
do not support an index_type
clause in index definitions.
The index_type clause cannot be
used for FULLTEXT INDEX or (prior to
MySQL 8.0.12) SPATIAL INDEX
specifications. Full-text index implementation is storage
engine dependent. Spatial indexes are implemented as R-tree
indexes.
If you specify an index type that is not valid for a given
storage engine, but another index type is available that the
engine can use without affecting query results, the engine
uses the available type. The parser recognizes
RTREE as a type name. As of MySQL 8.0.12,
this is permitted only for SPATIAL
indexes. Prior to 8.0.12, RTREE cannot be
specified for any storage engine.
BTREE indexes are implemented by the
NDB storage engine as T-tree
indexes.
For indexes on NDB table
columns, the USING option can be
specified only for a unique index or primary key.
USING HASH prevents the creation of an
ordered index; otherwise, creating a unique index or
primary key on an NDB table
automatically results in the creation of both an ordered
index and a hash index, each of which indexes the same set
of columns.
For unique indexes that include one or more
NULL columns of an
NDB table, the hash index can
be used only to look up literal values, which means that
IS [NOT] NULL conditions require a full
scan of the table. One workaround is to make sure that a
unique index using one or more NULL
columns on such a table is always created in such a way
that it includes the ordered index; that is, avoid
employing USING HASH when creating the
index.
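Assuming a running NDB Cluster (the table and index names are placeholders), the following sketch creates a unique index backed only by a hash index, suppressing the implicit ordered index:
CREATE TABLE ndb_demo (
  c1 INT NOT NULL PRIMARY KEY,
  c2 INT NOT NULL
) ENGINE=NDB;
CREATE UNIQUE INDEX c2_hash ON ndb_demo (c2) USING HASH;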
Use of the index_type option
before the ON tbl_name
clause is deprecated; support for use of the option in this
position will be removed in a future MySQL release. If an
index_type option is given in
both the earlier and later positions, the final option
applies.
TYPE type_name
is recognized as a synonym for USING
type_name. However,
USING is the preferred form.
The following tables show index characteristics for the
storage engines that support the
index_type option.
Table 13.2 InnoDB Storage Engine Index Characteristics
| Index Class | Index Type | Stores NULL Values | Permits Multiple NULL Values | IS NULL Scan Type | IS NOT NULL Scan Type |
|---|---|---|---|---|---|
| Primary key | BTREE | No | No | N/A | N/A |
| Unique | BTREE | Yes | Yes | Index | Index |
| Key | BTREE | Yes | Yes | Index | Index |
| FULLTEXT | N/A | Yes | Yes | Table | Table |
| SPATIAL | N/A | No | No | N/A | N/A |
Table 13.3 MyISAM Storage Engine Index Characteristics
| Index Class | Index Type | Stores NULL Values | Permits Multiple NULL Values | IS NULL Scan Type | IS NOT NULL Scan Type |
|---|---|---|---|---|---|
| Primary key | BTREE | No | No | N/A | N/A |
| Unique | BTREE | Yes | Yes | Index | Index |
| Key | BTREE | Yes | Yes | Index | Index |
| FULLTEXT | N/A | Yes | Yes | Table | Table |
| SPATIAL | N/A | No | No | N/A | N/A |
Table 13.4 MEMORY Storage Engine Index Characteristics
| Index Class | Index Type | Stores NULL Values | Permits Multiple NULL Values | IS NULL Scan Type | IS NOT NULL Scan Type |
|---|---|---|---|---|---|
| Primary key | BTREE | No | No | N/A | N/A |
| Unique | BTREE | Yes | Yes | Index | Index |
| Key | BTREE | Yes | Yes | Index | Index |
| Primary key | HASH | No | No | N/A | N/A |
| Unique | HASH | Yes | Yes | Index | Index |
| Key | HASH | Yes | Yes | Index | Index |
Table 13.5 NDB Storage Engine Index Characteristics
| Index Class | Index Type | Stores NULL Values | Permits Multiple NULL Values | IS NULL Scan Type | IS NOT NULL Scan Type |
|---|---|---|---|---|---|
| Primary key | BTREE | No | No | Index | Index |
| Unique | BTREE | Yes | Yes | Index | Index |
| Key | BTREE | Yes | Yes | Index | Index |
| Primary key | HASH | No | No | Table (see note 1) | Table (see note 1) |
| Unique | HASH | Yes | Yes | Table (see note 1) | Table (see note 1) |
| Key | HASH | Yes | Yes | Table (see note 1) | Table (see note 1) |
Table note:
1. USING HASH prevents creation of an
implicit ordered index.
WITH PARSER
parser_name
This option can be used only with
FULLTEXT indexes. It associates a parser
plugin with the index if full-text indexing and searching
operations need special handling.
InnoDB and
MyISAM support full-text parser
plugins. See Full-Text Parser Plugins and
Section 29.2.4.4, “Writing Full-Text Parser Plugins” for more
information.
COMMENT 'string'
Index definitions can include an optional comment of up to 1024 characters.
The
MERGE_THRESHOLD
for index pages can be configured for individual indexes
using the index_option
COMMENT clause of the
CREATE INDEX statement. For
example:
CREATE TABLE t1 (id INT);
CREATE INDEX id_index ON t1 (id) COMMENT 'MERGE_THRESHOLD=40';
If the page-full percentage for an index page falls below
the MERGE_THRESHOLD value when a row is
deleted or when a row is shortened by an update operation,
InnoDB attempts to merge the
index page with a neighboring index page. The default
MERGE_THRESHOLD value is 50, which is the
previously hardcoded value.
MERGE_THRESHOLD can also be defined at
the index level and table level using
CREATE TABLE and
ALTER TABLE statements. For
more information, see
Section 15.8.11, “Configuring the Merge Threshold for Index Pages”.
VISIBLE, INVISIBLE
Specify index visibility. Indexes are visible by default. An invisible index is not used by the optimizer. Specification of index visibility applies to indexes other than primary keys (either explicit or implicit). For more information, see Section 8.3.12, “Invisible Indexes”.
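For example (a sketch; table, column, and index names are placeholders), the following index is created invisible and later made available to the optimizer again:
CREATE TABLE vis_demo (c1 INT, c2 INT);
CREATE INDEX c2_idx ON vis_demo (c2) INVISIBLE;
-- Later, make the index visible to the optimizer
ALTER TABLE vis_demo ALTER INDEX c2_idx VISIBLE;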
ALGORITHM and LOCK clauses
may be given to influence the table copying method and level of
concurrency for reading and writing the table while its indexes
are being modified. They have the same meaning as for the
ALTER TABLE statement. For more
information, see Section 13.1.9, “ALTER TABLE Statement”.
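For example, the following sketch (names are placeholders) requests an in-place index build that permits concurrent reads and writes; the server raises an error if those choices cannot be honored:
CREATE TABLE alg_demo (c1 INT, c2 VARCHAR(50));
CREATE INDEX c2_idx ON alg_demo (c2) ALGORITHM=INPLACE LOCK=NONE;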
NDB Cluster supports online operations using the same
ALGORITHM=INPLACE syntax used with the
standard MySQL Server. See
Section 22.5.11, “Online Operations with ALTER TABLE in NDB Cluster”, for more
information.
CREATE LOGFILE GROUP logfile_group
    ADD UNDOFILE 'undo_file'
    [INITIAL_SIZE [=] initial_size]
    [UNDO_BUFFER_SIZE [=] undo_buffer_size]
    [REDO_BUFFER_SIZE [=] redo_buffer_size]
    [NODEGROUP [=] nodegroup_id]
    [WAIT]
    [COMMENT [=] 'string']
    ENGINE [=] engine_name
This statement creates a new log file group named
logfile_group having a single
UNDO file named
'undo_file'. A
CREATE LOGFILE GROUP statement has
one and only one ADD UNDOFILE clause. For rules
covering the naming of log file groups, see
Section 9.2, “Schema Object Names”.
All NDB Cluster Disk Data objects share the same namespace. This means that each Disk Data object must be uniquely named (and not merely each Disk Data object of a given type). For example, you cannot have a tablespace and a log file group with the same name, or a tablespace and a data file with the same name.
There can be only one log file group per NDB Cluster instance at any given time.
The optional INITIAL_SIZE parameter sets the
UNDO file's initial size; if not specified, it
defaults to 128M (128 megabytes). The optional
UNDO_BUFFER_SIZE parameter sets the size used
by the UNDO buffer for the log file group. The
default value for UNDO_BUFFER_SIZE is
8M (eight megabytes); this value cannot exceed
the amount of system memory available. Both of these parameters
are specified in bytes. You may optionally follow either or both
of these with a one-letter abbreviation for an order of magnitude,
similar to those used in my.cnf. Generally,
this is one of the letters M (for megabytes)
or G (for gigabytes).
Memory used for UNDO_BUFFER_SIZE comes from the
global pool whose size is determined by the value of the
SharedGlobalMemory data
node configuration parameter. This includes any default value
implied for this option by the setting of the
InitialLogFileGroup data
node configuration parameter.
The maximum permitted for UNDO_BUFFER_SIZE is
629145600 (600 MB).
On 32-bit systems, the maximum supported value for
INITIAL_SIZE is 4294967296 (4 GB). (Bug #29186)
The minimum allowed value for INITIAL_SIZE is
1048576 (1 MB).
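Putting these options together, a typical statement looks like the following sketch; the group name, file name, and sizes are placeholders, and the ENGINE option is discussed next:
CREATE LOGFILE GROUP lg_1
    ADD UNDOFILE 'undo_1.log'
    INITIAL_SIZE = 256M
    UNDO_BUFFER_SIZE = 16M
    ENGINE = NDB;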
The ENGINE option determines the storage engine
to be used by this log file group, with
engine_name being the name of the
storage engine. In MySQL 8.0, this must be
NDB (or
NDBCLUSTER). If
ENGINE is not set, MySQL tries to use the
engine specified by the
default_storage_engine server
system variable (formerly
storage_engine). In any case, if
the engine is not specified as NDB or
NDBCLUSTER, the CREATE
LOGFILE GROUP statement appears to succeed but actually
fails to create the log file group, as shown here:
mysql> CREATE LOGFILE GROUP lg1
    ->     ADD UNDOFILE 'undo.dat' INITIAL_SIZE = 10M;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> SHOW WARNINGS;
+-------+------+------------------------------------------------------------------------------------------------+
| Level | Code | Message                                                                                          |
+-------+------+------------------------------------------------------------------------------------------------+
| Error | 1478 | Table storage engine 'InnoDB' does not support the create option 'TABLESPACE or LOGFILE GROUP'  |
+-------+------+------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

mysql> DROP LOGFILE GROUP lg1 ENGINE = NDB;
ERROR 1529 (HY000): Failed to drop LOGFILE GROUP

mysql> CREATE LOGFILE GROUP lg1
    ->     ADD UNDOFILE 'undo.dat' INITIAL_SIZE = 10M
    ->     ENGINE = NDB;
Query OK, 0 rows affected (2.97 sec)
The fact that the CREATE LOGFILE GROUP
statement does not actually return an error when a
non-NDB storage engine is named, but rather
appears to succeed, is a known issue which we hope to address in a
future release of NDB Cluster.
REDO_BUFFER_SIZE,
NODEGROUP, WAIT, and
COMMENT are parsed but ignored, and so have no
effect in MySQL 8.0. These options are intended for
future expansion.
When used with ENGINE [=] NDB, a log file group
and associated UNDO log file are created on
each Cluster data node. You can verify that the
UNDO files were created and obtain information
about them by querying the
INFORMATION_SCHEMA.FILES table. For
example:
mysql> SELECT LOGFILE_GROUP_NAME, LOGFILE_GROUP_NUMBER, EXTRA
    ->     FROM INFORMATION_SCHEMA.FILES
    ->     WHERE FILE_NAME = 'undo_10.dat';
+--------------------+----------------------+----------------+
| LOGFILE_GROUP_NAME | LOGFILE_GROUP_NUMBER | EXTRA          |
+--------------------+----------------------+----------------+
| lg_3               |                   11 | CLUSTER_NODE=3 |
| lg_3               |                   11 | CLUSTER_NODE=4 |
+--------------------+----------------------+----------------+
2 rows in set (0.06 sec)
CREATE LOGFILE GROUP is useful only
with Disk Data storage for NDB Cluster. See
Section 22.5.10, “NDB Cluster Disk Data Tables”.
CREATE
[DEFINER = user]
PROCEDURE sp_name ([proc_parameter[,...]])
[characteristic ...] routine_body
CREATE
[DEFINER = user]
FUNCTION sp_name ([func_parameter[,...]])
RETURNS type
[characteristic ...] routine_body
proc_parameter:
[ IN | OUT | INOUT ] param_name type
func_parameter:
param_name type
type:
Any valid MySQL data type
characteristic: {
COMMENT 'string'
| LANGUAGE SQL
| [NOT] DETERMINISTIC
| { CONTAINS SQL | NO SQL | READS SQL DATA | MODIFIES SQL DATA }
| SQL SECURITY { DEFINER | INVOKER }
}
routine_body:
Valid SQL routine statement
These statements create stored routines. By default, a routine is
associated with the default database. To associate the routine
explicitly with a given database, specify the name as
db_name.sp_name when you create it.
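For example, the following sketch (the schema and routine names are placeholders) creates the procedure in the test schema even if a different default schema is currently selected:
CREATE PROCEDURE test.say_hello()
  SELECT 'hello';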
The CREATE FUNCTION statement is
also used in MySQL to support UDFs (user-defined functions). See
Section 29.4, “Adding Functions to MySQL”. A UDF can be regarded as an
external stored function. Stored functions share their namespace
with UDFs. See Section 9.2.5, “Function Name Parsing and Resolution”, for the
rules describing how the server interprets references to different
kinds of functions.
To invoke a stored procedure, use the
CALL statement (see
Section 13.2.1, “CALL Statement”). To invoke a stored function, refer to it
in an expression. The function returns a value during expression
evaluation.
CREATE PROCEDURE and
CREATE FUNCTION require the
CREATE ROUTINE privilege. If the
DEFINER clause is present, the privileges
required depend on the user value, as
discussed in Section 24.6, “Stored Object Access Control”. If binary
logging is enabled, CREATE FUNCTION
might require the SUPER privilege,
as discussed in Section 24.7, “Stored Program Binary Logging”.
By default, MySQL automatically grants the
ALTER ROUTINE and
EXECUTE privileges to the routine
creator. This behavior can be changed by disabling the
automatic_sp_privileges system
variable. See Section 24.2.2, “Stored Routines and MySQL Privileges”.
The DEFINER and SQL SECURITY
clauses specify the security context to be used when checking
access privileges at routine execution time, as described later in
this section.
If the routine name is the same as the name of a built-in SQL function, a syntax error occurs unless you use a space between the name and the following parenthesis when defining the routine or invoking it later. For this reason, avoid using the names of existing SQL functions for your own stored routines.
The IGNORE_SPACE SQL mode
applies to built-in functions, not to stored routines. It is
always permissible to have spaces after a stored routine name,
regardless of whether
IGNORE_SPACE is enabled.
The parameter list enclosed within parentheses must always be
present. If there are no parameters, an empty parameter list of
() should be used. Parameter names are not case
sensitive.
Each parameter is an IN parameter by default.
To specify otherwise for a parameter, use the keyword
OUT or INOUT before the
parameter name.
Specifying a parameter as IN,
OUT, or INOUT is valid
only for a PROCEDURE. For a
FUNCTION, parameters are always regarded as
IN parameters.
An IN parameter passes a value into a
procedure. The procedure might modify the value, but the
modification is not visible to the caller when the procedure
returns. An OUT parameter passes a value from
the procedure back to the caller. Its initial value is
NULL within the procedure, and its value is
visible to the caller when the procedure returns. An
INOUT parameter is initialized by the caller,
can be modified by the procedure, and any change made by the
procedure is visible to the caller when the procedure returns.
For each OUT or INOUT
parameter, pass a user-defined variable in the
CALL statement that invokes the
procedure so that you can obtain its value when the procedure
returns. If you are calling the procedure from within another
stored procedure or function, you can also pass a routine
parameter or local routine variable as an OUT
or INOUT parameter. If you are calling the
procedure from within a trigger, you can also pass
NEW.col_name as an
OUT or INOUT parameter.
For information about the effect of unhandled conditions on procedure parameters, see Section 13.6.7.8, “Condition Handling and OUT or INOUT Parameters”.
Routine parameters cannot be referenced in statements prepared within the routine; see Section 24.8, “Restrictions on Stored Programs”.
The following example shows a simple stored procedure that, given
a country code, counts the number of cities for that country that
appear in the city table of the
world database. The country code is passed
using an IN parameter, and the city count is
returned using an OUT parameter:
mysql> delimiter //

mysql> CREATE PROCEDURE citycount (IN country CHAR(3), OUT cities INT)
       BEGIN
         SELECT COUNT(*) INTO cities FROM world.city
         WHERE CountryCode = country;
       END//
Query OK, 0 rows affected (0.01 sec)

mysql> delimiter ;

mysql> CALL citycount('JPN', @cities); -- cities in Japan
Query OK, 1 row affected (0.00 sec)

mysql> SELECT @cities;
+---------+
| @cities |
+---------+
|     248 |
+---------+
1 row in set (0.00 sec)

mysql> CALL citycount('FRA', @cities); -- cities in France
Query OK, 1 row affected (0.00 sec)

mysql> SELECT @cities;
+---------+
| @cities |
+---------+
|      40 |
+---------+
1 row in set (0.00 sec)
The example uses the mysql client
delimiter command to change the statement
delimiter from ; to // while
the procedure is being defined. This enables the
; delimiter used in the procedure body to be
passed through to the server rather than being interpreted by
mysql itself. See
Section 24.1, “Defining Stored Programs”.
The RETURNS clause may be specified only for a
FUNCTION, for which it is mandatory. It
indicates the return type of the function, and the function body
must contain a RETURN value statement. If the
RETURN statement returns a value of
a different type, the value is coerced to the proper type. For
example, if a function specifies an
ENUM or
SET value in the
RETURNS clause, but the
RETURN statement returns an
integer, the value returned from the function is the string for
the corresponding ENUM member or
set of SET members.
The following example function takes a parameter, performs an
operation using an SQL function, and returns the result. In this
case, it is unnecessary to use delimiter
because the function definition contains no internal
; statement delimiters:
mysql> CREATE FUNCTION hello (s CHAR(20))
mysql> RETURNS CHAR(50) DETERMINISTIC
       RETURN CONCAT('Hello, ',s,'!');
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT hello('world');
+----------------+
| hello('world') |
+----------------+
| Hello, world!  |
+----------------+
1 row in set (0.00 sec)
Parameter types and function return types can be declared to use
any valid data type. The COLLATE attribute can
be used if preceded by a CHARACTER SET
specification.
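For example, the following minimal sketch declares an explicit character set and collation for both a parameter and the return type (the function name and body are illustrative only):
CREATE FUNCTION normalize_code (s VARCHAR(16) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin)
RETURNS VARCHAR(16) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin
DETERMINISTIC
RETURN UPPER(TRIM(s));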
The routine_body consists of a valid
SQL routine statement. This can be a simple statement such as
SELECT or
INSERT, or a compound statement
written using BEGIN and END.
Compound statements can contain declarations, loops, and other
control structure statements. The syntax for these statements is
described in Section 13.6, “Compound Statement Syntax”. In
practice, stored functions tend to use compound statements, unless
the body consists of a single
RETURN statement.
MySQL permits routines to contain DDL statements, such as
CREATE and DROP. MySQL also
permits stored procedures (but not stored functions) to contain
SQL transaction statements such as
COMMIT. Stored functions may not
contain statements that perform explicit or implicit commit or
rollback. Support for these statements is not required by the SQL
standard, which states that each DBMS vendor may decide whether to
permit them.
Statements that return a result set can be used within a stored
procedure but not within a stored function. This prohibition
includes SELECT statements that do
not have an INTO var_list
clause and other
statements such as SHOW,
EXPLAIN, and
CHECK TABLE. For statements that
can be determined at function definition time to return a result
set, a Not allowed to return a result set from a
function error occurs
(ER_SP_NO_RETSET). For statements
that can be determined only at runtime to return a result set, a
PROCEDURE %s can't return a result set in the given
context error occurs
(ER_SP_BADSELECT).
USE statements within stored
routines are not permitted. When a routine is invoked, an implicit
USE db_name is
performed (and undone when the routine terminates). This causes the
routine to have the given default database while it executes.
References to objects in databases other than the routine default
database should be qualified with the appropriate database name.
For additional information about statements that are not permitted in stored routines, see Section 24.8, “Restrictions on Stored Programs”.
For information about invoking stored procedures from within programs written in a language that has a MySQL interface, see Section 13.2.1, “CALL Statement”.
MySQL stores the sql_mode system
variable setting in effect when a routine is created or altered,
and always executes the routine with this setting in force,
regardless of the current server SQL mode when the
routine begins executing.
The switch from the SQL mode of the invoker to that of the routine occurs after evaluation of arguments and assignment of the resulting values to routine parameters. If you define a routine in strict SQL mode but invoke it in nonstrict mode, assignment of arguments to routine parameters does not take place in strict mode. If you require that expressions passed to a routine be assigned in strict SQL mode, you should invoke the routine with strict mode in effect.
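As a rough sketch of this behavior (the procedure name is illustrative, and the exact modes reported depend on your server configuration):
SET sql_mode = 'STRICT_TRANS_TABLES';
CREATE PROCEDURE show_routine_mode() SELECT @@sql_mode;
SET sql_mode = '';
CALL show_routine_mode();  -- reports the mode saved at creation time, not the current session mode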
The COMMENT characteristic is a MySQL
extension, and may be used to describe the stored routine. This
information is displayed by the SHOW CREATE
PROCEDURE and SHOW CREATE
FUNCTION statements.
The LANGUAGE characteristic indicates the
language in which the routine is written. The server ignores this
characteristic; only SQL routines are supported.
A routine is considered “deterministic” if it always
produces the same result for the same input parameters, and
“not deterministic” otherwise. If neither
DETERMINISTIC nor NOT
DETERMINISTIC is given in the routine definition, the
default is NOT DETERMINISTIC. To declare that a
function is deterministic, you must specify
DETERMINISTIC explicitly.
Assessment of the nature of a routine is based on the
“honesty” of the creator: MySQL does not check that a
routine declared DETERMINISTIC is free of
statements that produce nondeterministic results. However,
misdeclaring a routine might affect results or affect performance.
Declaring a nondeterministic routine as
DETERMINISTIC might lead to unexpected results
by causing the optimizer to make incorrect execution plan choices.
Declaring a deterministic routine as
NONDETERMINISTIC might diminish performance by
causing available optimizations not to be used.
If binary logging is enabled, the DETERMINISTIC
characteristic affects which routine definitions MySQL accepts.
See Section 24.7, “Stored Program Binary Logging”.
A routine that contains the NOW()
function (or its synonyms) or
RAND() is nondeterministic, but it
might still be replication-safe. For
NOW(), the binary log includes the
timestamp and replicates correctly.
RAND() also replicates correctly as
long as it is called only a single time during the execution of a
routine. (You can consider the routine execution timestamp and
random number seed as implicit inputs that are identical on the
master and slave.)
Several characteristics provide information about the nature of data use by the routine. In MySQL, these characteristics are advisory only. The server does not use them to constrain what kinds of statements a routine will be permitted to execute.
CONTAINS SQL indicates that the routine
does not contain statements that read or write data. This is
the default if none of these characteristics is given
explicitly. Examples of such statements are SET @x =
1 or DO RELEASE_LOCK('abc'),
which execute but neither read nor write data.
NO SQL indicates that the routine contains
no SQL statements.
READS SQL DATA indicates that the routine
contains statements that read data (for example,
SELECT), but not statements
that write data.
MODIFIES SQL DATA indicates that the
routine contains statements that may write data (for example,
INSERT or
DELETE).
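The following sketch shows how several characteristics might appear together in one definition; the routine name and the app.log table are hypothetical:
CREATE PROCEDURE purge_old_log_rows()
    COMMENT 'Deletes log rows older than 30 days'
    SQL SECURITY INVOKER
    MODIFIES SQL DATA
    DELETE FROM app.log WHERE created_at < NOW() - INTERVAL 30 DAY;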
The SQL SECURITY characteristic can be
DEFINER or INVOKER to
specify the security context; that is, whether the routine
executes using the privileges of the account named in the routine
DEFINER clause or the user who invokes it. This
account must have permission to access the database with which the
routine is associated. The default value is
DEFINER. The user who invokes the routine must
have the EXECUTE privilege for it,
as must the DEFINER account if the routine
executes in definer security context.
The DEFINER clause specifies the MySQL account
to be used when checking access privileges at routine execution
time for routines that have the SQL SECURITY
DEFINER characteristic.
If the DEFINER clause is present, the
user value should be a MySQL account
specified as 'user_name'@'host_name',
CURRENT_USER, or
CURRENT_USER(). The permitted
user values depend on the privileges
you hold, as discussed in
Section 24.6, “Stored Object Access Control”. Also see that section
for additional information about stored routine security.
If the DEFINER clause is omitted, the default
definer is the user who executes the CREATE
PROCEDURE or CREATE
FUNCTION statement. This is the same as specifying
DEFINER = CURRENT_USER explicitly.
Within the body of a stored routine that is defined with the
SQL SECURITY DEFINER characteristic, the
CURRENT_USER function returns the
routine's DEFINER value. For information about
user auditing within stored routines, see
Section 6.2.22, “SQL-Based Account Activity Auditing”.
Consider the following procedure, which displays a count of the
number of MySQL accounts listed in the
mysql.user system table:
CREATE DEFINER = 'admin'@'localhost' PROCEDURE account_count()
BEGIN
  SELECT 'Number of accounts:', COUNT(*) FROM mysql.user;
END;
The procedure is assigned a DEFINER account of
'admin'@'localhost' no matter which user
defines it. It executes with the privileges of that account no
matter which user invokes it (because the default security
characteristic is DEFINER). The procedure
succeeds or fails depending on whether the invoker has the
EXECUTE privilege for it and
'admin'@'localhost' has the
SELECT privilege for the
mysql.user table.
Now suppose that the procedure is defined with the SQL
SECURITY INVOKER characteristic:
CREATE DEFINER = 'admin'@'localhost' PROCEDURE account_count()
SQL SECURITY INVOKER
BEGIN
  SELECT 'Number of accounts:', COUNT(*) FROM mysql.user;
END;
The procedure still has a DEFINER of
'admin'@'localhost', but in this case, it
executes with the privileges of the invoking user. Thus, the
procedure succeeds or fails depending on whether the invoker has
the EXECUTE privilege for it and
the SELECT privilege for the
mysql.user table.
The server handles the data type of a routine parameter, local
routine variable created with
DECLARE, or function return value
as follows:
Assignments are checked for data type mismatches and overflow. Conversion and overflow problems result in warnings, or errors in strict SQL mode.
Only scalar values can be assigned. For example, a statement
such as SET x = (SELECT 1, 2) is invalid.
For character data types, if CHARACTER SET
is included in the declaration, the specified character set
and its default collation are used. If the
COLLATE attribute is also present, that
collation is used rather than the default collation.
If CHARACTER SET and
COLLATE are not present, the database
character set and collation in effect at routine creation time
are used. To avoid having the server use the database
character set and collation, provide an explicit
CHARACTER SET and a
COLLATE attribute for character data
parameters.
If you alter the database default character set or collation, stored routines that are to use the new database defaults must be dropped and recreated.
The database character set and collation are given by the
value of the
character_set_database and
collation_database system
variables. For more information, see
Section 10.3.3, “Database Character Set and Collation”.
CREATE SERVER server_name
    FOREIGN DATA WRAPPER wrapper_name
    OPTIONS (option [, option] ...)

option: {
    HOST character-literal
  | DATABASE character-literal
  | USER character-literal
  | PASSWORD character-literal
  | SOCKET character-literal
  | OWNER character-literal
  | PORT numeric-literal
}
This statement creates the definition of a server for use with the
FEDERATED storage engine. The CREATE
SERVER statement creates a new row in the
servers table in the mysql
database. This statement requires the
SUPER privilege.
The server_name should be a unique reference to the server. Server
definitions are global within the scope of the server; it is not
possible to qualify the server definition to a specific database.
server_name has a maximum length of 64 characters (names longer than
64 characters are silently truncated), and is case-insensitive. You
may specify the name as a quoted string.
The wrapper_name is an identifier and may be quoted with single
quotation marks.
For each option you must specify either a character literal or
numeric literal. Character literals are UTF-8, support a maximum
length of 64 characters, and default to a blank (empty) string.
String literals are silently truncated to 64 characters. Numeric
literals must be a number between 0 and 9999; the default value is 0.
The OWNER option is currently not applied,
and has no effect on the ownership or operation of the server
connection that is created.
The CREATE SERVER statement creates an entry in
the mysql.servers table that can later be used
with the CREATE TABLE statement
when creating a FEDERATED table. The options
that you specify will be used to populate the columns in the
mysql.servers table. The table columns are
Server_name, Host,
Db, Username,
Password, Port and
Socket.
For example:
CREATE SERVER s
FOREIGN DATA WRAPPER mysql
OPTIONS (USER 'Remote', HOST '198.51.100.106', DATABASE 'test');
Be sure to specify all options necessary to establish a connection to the server. The user name, host name, and database name are mandatory. Other options might be required as well, such as password.
The data stored in the table can be used when creating a
connection to a FEDERATED table:
CREATE TABLE t (s1 INT) ENGINE=FEDERATED CONNECTION='s';
For more information, see Section 16.8, “The FEDERATED Storage Engine”.
CREATE SERVER causes an implicit commit. See
Section 13.3.3, “Statements That Cause an Implicit Commit”.
CREATE SERVER is not written to the binary log,
regardless of the logging format that is in use.
CREATE OR REPLACE SPATIAL REFERENCE SYSTEM
srid srs_attribute ...
CREATE SPATIAL REFERENCE SYSTEM
[IF NOT EXISTS]
srid srs_attribute ...
srs_attribute: {
NAME 'srs_name'
| DEFINITION 'definition'
| ORGANIZATION 'org_name' IDENTIFIED BY org_id
| DESCRIPTION 'description'
}
srid, org_id: 32-bit unsigned integer
This statement creates a
spatial reference
system (SRS) definition and stores it in the data
dictionary. The definition can be inspected using the
INFORMATION_SCHEMA
ST_SPATIAL_REFERENCE_SYSTEMS table.
This statement requires the SUPER
privilege.
If neither OR REPLACE nor IF NOT
EXISTS is specified, an error occurs if an SRS
definition with the SRID value already exists.
With CREATE OR REPLACE syntax, any existing SRS
definition with the same SRID value is replaced, unless the SRID
value is used by some column in an existing table. In that case,
an error occurs. For example:
mysql> CREATE OR REPLACE SPATIAL REFERENCE SYSTEM 4326 ...;
ERROR 3716 (SR005): Can't modify SRID 4326. There is at
least one column depending on it.
To identify which column or columns use the SRID, use this query:
SELECT * FROM INFORMATION_SCHEMA.ST_GEOMETRY_COLUMNS WHERE SRS_ID=4326;
With CREATE ... IF NOT EXISTS syntax, any
existing SRS definition with the same SRID value causes the new
definition to be ignored and a warning occurs.
SRID values must be in the range of 32-bit unsigned integers, with these restrictions:
SRID 0 is a valid SRID but cannot be used with
CREATE SPATIAL REFERENCE
SYSTEM.
If the value is in a reserved SRID range, a warning occurs. Reserved ranges are [0, 32767] (reserved by EPSG), [60000000, 69999999] (reserved by EPSG), and [2000000000, 2147483647] (reserved by MySQL). EPSG stands for the European Petroleum Survey Group.
Users should not create SRSs with SRIDs in the reserved ranges. Doing so runs the risk that the SRIDs will conflict with future SRS definitions distributed with MySQL, with the result that the new system-provided SRSs are not installed for MySQL upgrades or that the user-defined SRSs are overwritten.
Attributes for the statement must satisfy these conditions:
Attributes can be given in any order, but no attribute can be given more than once.
The NAME and DEFINITION
attributes are mandatory.
The NAME
srs_name attribute value must be
unique. The combination of the ORGANIZATION
org_name and
org_id attribute values must be
unique.
The NAME
srs_name attribute value and
ORGANIZATION
org_name attribute value cannot be
empty or begin or end with whitespace.
String values in attribute specifications cannot contain control characters, including newline.
The following table shows the maximum lengths for string attribute values.
Table 13.6 CREATE SPATIAL REFERENCE SYSTEM Attribute Lengths
| Attribute | Maximum Length (characters) |
|---|---|
| NAME | 80 |
| DEFINITION | 4096 |
| ORGANIZATION | 256 |
| DESCRIPTION | 2048 |
Here is an example CREATE SPATIAL REFERENCE
SYSTEM statement. The DEFINITION
value is reformatted across multiple lines for readability. (For
the statement to be legal, the value actually must be given on a
single line.)
CREATE SPATIAL REFERENCE SYSTEM 4120
NAME 'Greek'
ORGANIZATION 'EPSG' IDENTIFIED BY 4120
DEFINITION
 'GEOGCS["Greek",DATUM["Greek",SPHEROID["Bessel 1841",
  6377397.155,299.1528128,AUTHORITY["EPSG","7004"]],
  AUTHORITY["EPSG","6120"]],PRIMEM["Greenwich",0,
  AUTHORITY["EPSG","8901"]],UNIT["degree",0.017453292519943278,
  AUTHORITY["EPSG","9122"]],AXIS["Lat",NORTH],AXIS["Lon",EAST],
  AUTHORITY["EPSG","4120"]]';
The grammar for SRS definitions is based on the grammar defined in OpenGIS Implementation Specification: Coordinate Transformation Services, Revision 1.00, OGC 01-009, January 12, 2001, Section 7.2. This specification is available at http://www.opengeospatial.org/standards/ct.
MySQL incorporates these changes to the specification:
Only the <horz cs> production rule is
implemented (that is, geographic and projected SRSs).
There is an optional, nonstandard
<authority> clause for
<parameter>. This makes it possible
to recognize projection parameters by authority instead of
name.
The specification does not make AXIS
clauses mandatory in GEOGCS spatial
reference system definitions. However, if there are no
AXIS clauses, MySQL cannot determine
whether a definition has axes in latitude-longitude order or
longitude-latitude order. MySQL enforces the nonstandard
requirement that each GEOGCS definition
must include two AXIS clauses. One must be
NORTH or SOUTH, and the
other EAST or WEST. The
AXIS clause order determines whether the
definition has axes in latitude-longitude order or
longitude-latitude order.
SRS definitions may not contain newlines.
If an SRS definition specifies an authority code for the projection (which is recommended), an error occurs if the definition is missing mandatory parameters. In this case, the error message indicates what the problem is. The projection methods and mandatory parameters that MySQL supports are shown in Table 13.7, “Supported Spatial Reference System Projection Methods” and Table 13.8, “Spatial Reference System Projection Parameters”.
For additional information about writing SRS definitions for MySQL, see Geographic Spatial Reference Systems in MySQL 8.0 and Projected Spatial Reference Systems in MySQL 8.0
The following table shows the projection methods that MySQL supports. MySQL permits unknown projection methods but cannot check the definition for mandatory parameters and cannot convert spatial data to or from an unknown projection. For detailed explanations of how each projection works, including formulas, see EPSG Guidance Note 7-2.
Table 13.7 Supported Spatial Reference System Projection Methods
| EPSG Code | Projection Name | Mandatory Parameters (EPSG Codes) |
|---|---|---|
| 1024 | Popular Visualisation Pseudo Mercator | 8801, 8802, 8806, 8807 |
| 1027 | Lambert Azimuthal Equal Area (Spherical) | 8801, 8802, 8806, 8807 |
| 1028 | Equidistant Cylindrical | 8823, 8802, 8806, 8807 |
| 1029 | Equidistant Cylindrical (Spherical) | 8823, 8802, 8806, 8807 |
| 1041 | Krovak (North Orientated) | 8811, 8833, 1036, 8818, 8819, 8806, 8807 |
| 1042 | Krovak Modified | 8811, 8833, 1036, 8818, 8819, 8806, 8807, 8617, 8618, 1026, 1027, 1028, 1029, 1030, 1031, 1032, 1033, 1034, 1035 |
| 1043 | Krovak Modified (North Orientated) | 8811, 8833, 1036, 8818, 8819, 8806, 8807, 8617, 8618, 1026, 1027, 1028, 1029, 1030, 1031, 1032, 1033, 1034, 1035 |
| 1051 | Lambert Conic Conformal (2SP Michigan) | 8821, 8822, 8823, 8824, 8826, 8827, 1038 |
| 1052 | Colombia Urban | 8801, 8802, 8806, 8807, 1039 |
| 9801 | Lambert Conic Conformal (1SP) | 8801, 8802, 8805, 8806, 8807 |
| 9802 | Lambert Conic Conformal (2SP) | 8821, 8822, 8823, 8824, 8826, 8827 |
| 9803 | Lambert Conic Conformal (2SP Belgium) | 8821, 8822, 8823, 8824, 8826, 8827 |
| 9804 | Mercator (variant A) | 8801, 8802, 8805, 8806, 8807 |
| 9805 | Mercator (variant B) | 8823, 8802, 8806, 8807 |
| 9806 | Cassini-Soldner | 8801, 8802, 8806, 8807 |
| 9807 | Transverse Mercator | 8801, 8802, 8805, 8806, 8807 |
| 9808 | Transverse Mercator (South Orientated) | 8801, 8802, 8805, 8806, 8807 |
| 9809 | Oblique Stereographic | 8801, 8802, 8805, 8806, 8807 |
| 9810 | Polar Stereographic (variant A) | 8801, 8802, 8805, 8806, 8807 |
| 9811 | New Zealand Map Grid | 8801, 8802, 8806, 8807 |
| 9812 | Hotine Oblique Mercator (variant A) | 8811, 8812, 8813, 8814, 8815, 8806, 8807 |
| 9813 | Laborde Oblique Mercator | 8811, 8812, 8813, 8815, 8806, 8807 |
| 9815 | Hotine Oblique Mercator (variant B) | 8811, 8812, 8813, 8814, 8815, 8816, 8817 |
| 9816 | Tunisia Mining Grid | 8821, 8822, 8826, 8827 |
| 9817 | Lambert Conic Near-Conformal | 8801, 8802, 8805, 8806, 8807 |
| 9818 | American Polyconic | 8801, 8802, 8806, 8807 |
| 9819 | Krovak | 8811, 8833, 1036, 8818, 8819, 8806, 8807 |
| 9820 | Lambert Azimuthal Equal Area | 8801, 8802, 8806, 8807 |
| 9822 | Albers Equal Area | 8821, 8822, 8823, 8824, 8826, 8827 |
| 9824 | Transverse Mercator Zoned Grid System | 8801, 8830, 8831, 8805, 8806, 8807 |
| 9826 | Lambert Conic Conformal (West Orientated) | 8801, 8802, 8805, 8806, 8807 |
| 9828 | Bonne (South Orientated) | 8801, 8802, 8806, 8807 |
| 9829 | Polar Stereographic (variant B) | 8832, 8833, 8806, 8807 |
| 9830 | Polar Stereographic (variant C) | 8832, 8833, 8826, 8827 |
| 9831 | Guam Projection | 8801, 8802, 8806, 8807 |
| 9832 | Modified Azimuthal Equidistant | 8801, 8802, 8806, 8807 |
| 9833 | Hyperbolic Cassini-Soldner | 8801, 8802, 8806, 8807 |
| 9834 | Lambert Cylindrical Equal Area (Spherical) | 8823, 8802, 8806, 8807 |
| 9835 | Lambert Cylindrical Equal Area | 8823, 8802, 8806, 8807 |
The following table shows the projection parameters that MySQL recognizes. Recognition occurs primarily by authority code. If there is no authority code, MySQL falls back to case-insensitive string matching on the parameter name. For details about each parameter, look it up by code in the EPSG Online Registry.
Table 13.8 Spatial Reference System Projection Parameters
| EPSG Code | Fallback Name (Recognized by MySQL) | EPSG Name |
|---|---|---|
| 1026 | c1 | C1 |
| 1027 | c2 | C2 |
| 1028 | c3 | C3 |
| 1029 | c4 | C4 |
| 1030 | c5 | C5 |
| 1031 | c6 | C6 |
| 1032 | c7 | C7 |
| 1033 | c8 | C8 |
| 1034 | c9 | C9 |
| 1035 | c10 | C10 |
| 1036 | azimuth | Co-latitude of cone axis |
| 1038 | ellipsoid_scale_factor | Ellipsoid scaling factor |
| 1039 | projection_plane_height_at_origin | Projection plane origin height |
| 8617 | evaluation_point_ordinate_1 | Ordinate 1 of evaluation point |
| 8618 | evaluation_point_ordinate_2 | Ordinate 2 of evaluation point |
| 8801 | latitude_of_origin | Latitude of natural origin |
| 8802 | central_meridian | Longitude of natural origin |
| 8805 | scale_factor | Scale factor at natural origin |
| 8806 | false_easting | False easting |
| 8807 | false_northing | False northing |
| 8811 | latitude_of_center | Latitude of projection centre |
| 8812 | longitude_of_center | Longitude of projection centre |
| 8813 | azimuth | Azimuth of initial line |
| 8814 | rectified_grid_angle | Angle from Rectified to Skew Grid |
| 8815 | scale_factor | Scale factor on initial line |
| 8816 | false_easting | Easting at projection centre |
| 8817 | false_northing | Northing at projection centre |
| 8818 | pseudo_standard_parallel_1 | Latitude of pseudo standard parallel |
| 8819 | scale_factor | Scale factor on pseudo standard parallel |
| 8821 | latitude_of_origin | Latitude of false origin |
| 8822 | central_meridian | Longitude of false origin |
| 8823 | standard_parallel_1, standard_parallel1 | Latitude of 1st standard parallel |
| 8824 | standard_parallel_2, standard_parallel2 | Latitude of 2nd standard parallel |
| 8826 | false_easting | Easting at false origin |
| 8827 | false_northing | Northing at false origin |
| 8830 | initial_longitude | Initial longitude |
| 8831 | zone_width | Zone width |
| 8832 | standard_parallel | Latitude of standard parallel |
| 8833 | longitude_of_center | Longitude of origin |
CREATE [TEMPORARY] TABLE [IF NOT EXISTS] tbl_name
    (create_definition,...)
    [table_options]
    [partition_options]

CREATE [TEMPORARY] TABLE [IF NOT EXISTS] tbl_name
    [(create_definition,...)]
    [table_options]
    [partition_options]
    [IGNORE | REPLACE]
    [AS] query_expression

CREATE [TEMPORARY] TABLE [IF NOT EXISTS] tbl_name
    { LIKE old_tbl_name | (LIKE old_tbl_name) }

create_definition: {
    col_name column_definition
  | {INDEX | KEY} [index_name] [index_type] (key_part,...)
      [index_option] ...
  | {FULLTEXT | SPATIAL} [INDEX | KEY] [index_name] (key_part,...)
      [index_option] ...
  | [CONSTRAINT [symbol]] PRIMARY KEY
      [index_type] (key_part,...)
      [index_option] ...
  | [CONSTRAINT [symbol]] UNIQUE [INDEX | KEY]
      [index_name] [index_type] (key_part,...)
      [index_option] ...
  | [CONSTRAINT [symbol]] FOREIGN KEY
      [index_name] (col_name,...)
      reference_definition
  | check_constraint_definition
}

column_definition: {
    data_type [NOT NULL | NULL] [DEFAULT {literal | (expr)} ]
      [AUTO_INCREMENT] [UNIQUE [KEY]] [[PRIMARY] KEY]
      [COMMENT 'string']
      [COLLATE collation_name]
      [COLUMN_FORMAT {FIXED | DYNAMIC | DEFAULT}]
      [STORAGE {DISK | MEMORY}]
      [reference_definition]
      [check_constraint_definition]
  | data_type
      [COLLATE collation_name]
      [GENERATED ALWAYS] AS (expr)
      [VIRTUAL | STORED] [NOT NULL | NULL]
      [UNIQUE [KEY]] [[PRIMARY] KEY]
      [COMMENT 'string']
      [reference_definition]
      [check_constraint_definition]
}

data_type:
    (see Chapter 11, Data Types)

key_part: {col_name [(length)] | (expr)} [ASC | DESC]

index_type:
    USING {BTREE | HASH}

index_option: {
    KEY_BLOCK_SIZE [=] value
  | index_type
  | WITH PARSER parser_name
  | COMMENT 'string'
  | {VISIBLE | INVISIBLE}
}

check_constraint_definition:
    [CONSTRAINT [symbol]] CHECK (expr) [[NOT] ENFORCED]

reference_definition:
    REFERENCES tbl_name (key_part,...)
      [MATCH FULL | MATCH PARTIAL | MATCH SIMPLE]
      [ON DELETE reference_option]
      [ON UPDATE reference_option]

reference_option:
    RESTRICT | CASCADE | SET NULL | NO ACTION | SET DEFAULT

table_options:
    table_option [[,] table_option] ...

table_option: {
    AUTO_INCREMENT [=] value
  | AVG_ROW_LENGTH [=] value
  | [DEFAULT] CHARACTER SET [=] charset_name
  | CHECKSUM [=] {0 | 1}
  | [DEFAULT] COLLATE [=] collation_name
  | COMMENT [=] 'string'
  | COMPRESSION [=] {'ZLIB' | 'LZ4' | 'NONE'}
  | CONNECTION [=] 'connect_string'
  | {DATA | INDEX} DIRECTORY [=] 'absolute path to directory'
  | DELAY_KEY_WRITE [=] {0 | 1}
  | ENCRYPTION [=] {'Y' | 'N'}
  | ENGINE [=] engine_name
  | INSERT_METHOD [=] { NO | FIRST | LAST }
  | KEY_BLOCK_SIZE [=] value
  | MAX_ROWS [=] value
  | MIN_ROWS [=] value
  | PACK_KEYS [=] {0 | 1 | DEFAULT}
  | PASSWORD [=] 'string'
  | ROW_FORMAT [=] {DEFAULT | DYNAMIC | FIXED | COMPRESSED | REDUNDANT | COMPACT}
  | STATS_AUTO_RECALC [=] {DEFAULT | 0 | 1}
  | STATS_PERSISTENT [=] {DEFAULT | 0 | 1}
  | STATS_SAMPLE_PAGES [=] value
  | TABLESPACE tablespace_name [STORAGE {DISK | MEMORY}]
  | UNION [=] (tbl_name[,tbl_name]...)
}

partition_options:
    PARTITION BY
        { [LINEAR] HASH(expr)
        | [LINEAR] KEY [ALGORITHM={1 | 2}] (column_list)
        | RANGE{(expr) | COLUMNS(column_list)}
        | LIST{(expr) | COLUMNS(column_list)} }
    [PARTITIONS num]
    [SUBPARTITION BY
        { [LINEAR] HASH(expr)
        | [LINEAR] KEY [ALGORITHM={1 | 2}] (column_list) }
      [SUBPARTITIONS num]
    ]
    [(partition_definition [, partition_definition] ...)]

partition_definition:
    PARTITION partition_name
        [VALUES
            {LESS THAN {(expr | value_list) | MAXVALUE}
            |
            IN (value_list)}]
        [[STORAGE] ENGINE [=] engine_name]
        [COMMENT [=] 'string' ]
        [DATA DIRECTORY [=] 'data_dir']
        [INDEX DIRECTORY [=] 'index_dir']
        [MAX_ROWS [=] max_number_of_rows]
        [MIN_ROWS [=] min_number_of_rows]
        [TABLESPACE [=] tablespace_name]
        [(subpartition_definition [, subpartition_definition] ...)]

subpartition_definition:
    SUBPARTITION logical_name
        [[STORAGE] ENGINE [=] engine_name]
        [COMMENT [=] 'string' ]
        [DATA DIRECTORY [=] 'data_dir']
        [INDEX DIRECTORY [=] 'index_dir']
        [MAX_ROWS [=] max_number_of_rows]
        [MIN_ROWS [=] min_number_of_rows]
        [TABLESPACE [=] tablespace_name]

query_expression:
    SELECT ...   (Some valid select or union statement)
CREATE TABLE creates a table with
the given name. You must have the
CREATE privilege for the table.
By default, tables are created in the default database, using the
InnoDB storage engine. An error
occurs if the table exists, if there is no default database, or if
the database does not exist.
MySQL has no limit on the number of tables. The underlying file
system may have a limit on the number of files that represent
tables. Individual storage engines may impose engine-specific
constraints. InnoDB permits up to 4 billion
tables.
For information about the physical representation of a table, see Section 13.1.20.1, “Files Created by CREATE TABLE”.
There are several aspects to the CREATE
TABLE statement, described under the following topics in
this section:
tbl_name
The table name can be specified as
db_name.tbl_name to create the
table in a specific database. This works regardless of whether
there is a default database, assuming that the database
exists. If you use quoted identifiers, quote the database and
table names separately. For example, write
`mydb`.`mytbl`, not
`mydb.mytbl`.
Rules for permissible table names are given in Section 9.2, “Schema Object Names”.
IF NOT EXISTS
Prevents an error from occurring if the table exists. However,
there is no verification that the existing table has a
structure identical to that indicated by the
CREATE TABLE statement.
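For example (a minimal sketch; the second statement succeeds with a note even though its column list differs from the existing table, and the existing table is left unchanged):
CREATE TABLE t1 (id INT PRIMARY KEY);
CREATE TABLE IF NOT EXISTS t1 (id INT PRIMARY KEY, extra VARCHAR(10));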
You can use the TEMPORARY keyword when creating
a table. A TEMPORARY table is visible only
within the current session, and is dropped automatically when the
session is closed. For more information, see
Section 13.1.20.2, “CREATE TEMPORARY TABLE Statement”.
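A minimal sketch (the table name is illustrative):
CREATE TEMPORARY TABLE tmp_totals (id INT PRIMARY KEY, total DECIMAL(10,2));
-- Visible only to this session; dropped automatically when the session ends.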
LIKE
Use CREATE TABLE ... LIKE to create an
empty table based on the definition of another table,
including any column attributes and indexes defined in the
original table:
CREATE TABLE new_tbl LIKE orig_tbl;
For more information, see Section 13.1.20.3, “CREATE TABLE ... LIKE Statement”.
[AS]
query_expression
To create one table from another, add a
SELECT statement at the end of
the CREATE TABLE statement:
CREATE TABLE new_tbl AS SELECT * FROM orig_tbl;
For more information, see Section 13.1.20.4, “CREATE TABLE ... SELECT Statement”.
IGNORE | REPLACE
The IGNORE and REPLACE
options indicate how to handle rows that duplicate unique key
values when copying a table using a
SELECT statement.
For more information, see Section 13.1.20.4, “CREATE TABLE ... SELECT Statement”.
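As a hedged sketch (src and dst are hypothetical tables; REPLACE keeps the last of any rows from the SELECT that collide on dst's primary key, while IGNORE would keep the first):
CREATE TABLE dst (id INT PRIMARY KEY, val INT)
    REPLACE AS SELECT id, val FROM src;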
There is a hard limit of 4096 columns per table, but the effective maximum may be less for a given table and depends on the factors discussed in Section 8.4.7, “Limits on Table Column Count and Row Size”.
data_type
data_type represents the data type
in a column definition. For a full description of the syntax
available for specifying column data types, as well as
information about the properties of each type, see
Chapter 11, Data Types.
Some attributes do not apply to all data types.
AUTO_INCREMENT applies only to integer
and floating-point types. Prior to MySQL 8.0.13,
DEFAULT does not apply to the
BLOB,
TEXT,
GEOMETRY, and
JSON types.
Character data types (CHAR,
VARCHAR, the
TEXT types,
ENUM,
SET, and any synonyms) can
include CHARACTER SET to specify the
character set for the column. CHARSET
is a synonym for CHARACTER SET. A
collation for the character set can be specified with the
COLLATE attribute, along with any other
attributes. For details, see Chapter 10, Character Sets, Collations, Unicode.
Example:
CREATE TABLE t (c CHAR(20) CHARACTER SET utf8 COLLATE utf8_bin);
MySQL 8.0 interprets length specifications in
character column definitions in characters. Lengths for
BINARY and
VARBINARY are in bytes.
For CHAR,
VARCHAR,
BINARY, and
VARBINARY columns, indexes
can be created that use only the leading part of column
values, using col_name(length)
syntax to specify an index prefix length.
BLOB and
TEXT columns also can be
indexed, but a prefix length must be
given. Prefix lengths are given in characters for
nonbinary string types and in bytes for binary string
types. That is, index entries consist of the first
length characters of each
column value for CHAR,
VARCHAR, and
TEXT columns, and the first
length bytes of each column
value for BINARY,
VARBINARY, and
BLOB columns. Indexing only
a prefix of column values like this can make the index
file much smaller. For additional information about index
prefixes, see Section 13.1.15, “CREATE INDEX Statement”.
Only the InnoDB and
MyISAM storage engines support indexing
on BLOB and
TEXT columns. For example:
CREATE TABLE test (blob_col BLOB, INDEX(blob_col(10)));
If a specified index prefix exceeds the maximum column
data type size, CREATE
TABLE handles the index as follows:
For a nonunique index, either an error occurs (if strict SQL mode is enabled), or the index length is reduced to lie within the maximum column data type size and a warning is produced (if strict SQL mode is not enabled).
For a unique index, an error occurs regardless of SQL mode because reducing the index length might enable insertion of nonunique entries that do not meet the specified uniqueness requirement.
JSON columns cannot be
indexed. You can work around this restriction by creating
an index on a generated column that extracts a scalar
value from the JSON column. See
Indexing a Generated Column to Provide a JSON Column Index, for a
detailed example.
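A minimal sketch of this workaround, assuming a hypothetical orders table whose doc column stores a customer_id member:
CREATE TABLE orders (
    doc JSON,
    customer_id BIGINT GENERATED ALWAYS AS (CAST(doc->>'$.customer_id' AS UNSIGNED)) STORED,
    INDEX idx_customer_id (customer_id)
);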
NOT NULL | NULL
If neither NULL nor NOT
NULL is specified, the column is treated as though
NULL had been specified.
In MySQL 8.0, only the InnoDB,
MyISAM, and MEMORY
storage engines support indexes on columns that can have
NULL values. In other cases, you must
declare indexed columns as NOT NULL or an
error results.
DEFAULT
Specifies a default value for a column. For more information
about default value handling, including the case that a column
definition includes no explicit DEFAULT
value, see Section 11.6, “Data Type Default Values”.
If the NO_ZERO_DATE or
NO_ZERO_IN_DATE SQL mode is
enabled and a date-valued default is not correct according to
that mode, CREATE TABLE
produces a warning if strict SQL mode is not enabled and an
error if strict mode is enabled. For example, with
NO_ZERO_IN_DATE enabled,
c1 DATE DEFAULT '2010-00-00' produces a
warning.
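A short sketch of literal and expression defaults (the parenthesized expression default requires MySQL 8.0.13 or later; names are illustrative):
CREATE TABLE t_defaults (
    status  VARCHAR(10) DEFAULT 'new',
    created DATETIME    DEFAULT CURRENT_TIMESTAMP,
    uid     BINARY(16)  DEFAULT (UUID_TO_BIN(UUID()))
);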
AUTO_INCREMENT
An integer or floating-point column can have the additional
attribute AUTO_INCREMENT. When you insert a
value of NULL (recommended) or
0 into an indexed
AUTO_INCREMENT column, the column is set to
the next sequence value. Typically this is value+1, where
value is the largest value for the
column currently in the table.
AUTO_INCREMENT sequences begin with
1.
To retrieve an AUTO_INCREMENT value after
inserting a row, use the
LAST_INSERT_ID() SQL function
or the mysql_insert_id() C API
function. See Section 12.15, “Information Functions”, and
Section 28.7.6.38, “mysql_insert_id()”.
If the NO_AUTO_VALUE_ON_ZERO
SQL mode is enabled, you can store 0 in
AUTO_INCREMENT columns as
0 without generating a new sequence value.
See Section 5.1.11, “Server SQL Modes”.
There can be only one AUTO_INCREMENT column
per table, it must be indexed, and it cannot have a
DEFAULT value. An
AUTO_INCREMENT column works properly only
if it contains only positive values. Inserting a negative
number is regarded as inserting a very large positive number.
This is done to avoid precision problems when numbers
“wrap” over from positive to negative and also to
ensure that you do not accidentally get an
AUTO_INCREMENT column that contains
0.
For MyISAM tables, you can specify an
AUTO_INCREMENT secondary column in a
multiple-column key. See
Section 3.6.9, “Using AUTO_INCREMENT”.
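The pattern referenced above looks roughly like this (adapted from that section; the AUTO_INCREMENT column generates an independent sequence for each grp value):
CREATE TABLE animals (
    grp  ENUM('fish','mammal','bird') NOT NULL,
    id   MEDIUMINT NOT NULL AUTO_INCREMENT,
    name CHAR(30) NOT NULL,
    PRIMARY KEY (grp, id)
) ENGINE=MyISAM;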
To make MySQL compatible with some ODBC applications, you can
find the AUTO_INCREMENT value for the last
inserted row with the following query:
SELECT * FROM tbl_name WHERE auto_col IS NULL
This method requires that the
sql_auto_is_null variable is
not set to 0. See Section 5.1.8, “Server System Variables”.
For information about InnoDB and
AUTO_INCREMENT, see
Section 15.6.1.6, “AUTO_INCREMENT Handling in InnoDB”. For
information about AUTO_INCREMENT and MySQL
Replication, see
Section 17.5.1.1, “Replication and AUTO_INCREMENT”.
COMMENT
A comment for a column can be specified with the
COMMENT option, up to 1024 characters long.
The comment is displayed by the SHOW
CREATE TABLE and
SHOW FULL
COLUMNS statements.
COLUMN_FORMAT
In NDB Cluster, it is also possible to specify a data storage
format for individual columns of
NDB tables using
COLUMN_FORMAT. Permissible column formats
are FIXED, DYNAMIC, and
DEFAULT. FIXED is used
to specify fixed-width storage, DYNAMIC
permits the column to be variable-width, and
DEFAULT causes the column to use
fixed-width or variable-width storage as determined by the
column's data type (possibly overridden by a
ROW_FORMAT specifier).
For NDB tables, the default value
for COLUMN_FORMAT is
FIXED.
In NDB Cluster, the maximum possible offset for a column
defined with COLUMN_FORMAT=FIXED is 8188
bytes. For more information and possible workarounds, see
Section 22.1.7.5, “Limits Associated with Database Objects in NDB Cluster”.
COLUMN_FORMAT currently has no effect on
columns of tables using storage engines other than
NDB. MySQL 8.0
silently ignores COLUMN_FORMAT.
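A brief sketch for an NDB table (column names are illustrative, and the statement requires an NDB Cluster build of mysqld):
CREATE TABLE ndb_fmt_demo (
    c1 INT COLUMN_FORMAT FIXED,
    c2 VARCHAR(100) COLUMN_FORMAT DYNAMIC
) ENGINE=NDB;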
STORAGE
For NDB tables, it is possible to
specify whether the column is stored on disk or in memory by
using a STORAGE clause. STORAGE
DISK causes the column to be stored on disk, and
STORAGE MEMORY causes in-memory storage to
be used. The CREATE TABLE
statement used must still include a
TABLESPACE clause:
mysql> CREATE TABLE t1 (
    ->     c1 INT STORAGE DISK,
    ->     c2 INT STORAGE MEMORY
    -> ) ENGINE NDB;
ERROR 1005 (HY000): Can't create table 'c.t1' (errno: 140)

mysql> CREATE TABLE t1 (
    ->     c1 INT STORAGE DISK,
    ->     c2 INT STORAGE MEMORY
    -> ) TABLESPACE ts_1 ENGINE NDB;
Query OK, 0 rows affected (1.06 sec)
For NDB tables, STORAGE
DEFAULT is equivalent to STORAGE
MEMORY.
The STORAGE clause has no effect on tables
using storage engines other than
NDB. The
STORAGE keyword is supported only in the
build of mysqld that is supplied with NDB
Cluster; it is not recognized in any other version of MySQL,
where any attempt to use the STORAGE
keyword causes a syntax error.
GENERATED ALWAYS
Used to specify a generated column expression. For information about generated columns, see Section 13.1.20.8, “CREATE TABLE and Generated Columns”.
Stored generated
columns can be indexed. InnoDB
supports secondary indexes on
virtual
generated columns. See
Section 13.1.20.9, “Secondary Indexes and Generated Columns”.
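A minimal sketch with one stored and one virtual generated column (table and column names are illustrative); the stored column also carries a secondary index:
CREATE TABLE rectangles (
    w     DOUBLE,
    h     DOUBLE,
    area  DOUBLE GENERATED ALWAYS AS (w * h) STORED,
    ratio DOUBLE AS (w / h) VIRTUAL,
    INDEX idx_area (area)
);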
Several keywords apply to creation of indexes, foreign keys, and
CHECK constraints. For general background in
addition to the following descriptions, see
Section 13.1.15, “CREATE INDEX Statement”,
Section 13.1.20.5, “FOREIGN KEY Constraints”, and
Section 13.1.20.6, “CHECK Constraints”.
CONSTRAINT
symbol
The CONSTRAINT symbol clause may be
given to name a constraint. If the clause is not given, or a
symbol is not included following
the CONSTRAINT keyword, MySQL automatically
generates a constraint name, with the exception noted below.
The symbol value, if used, must be
unique per schema (database), per constraint type. A duplicate
symbol results in an error. See
also the discussion about length limits of generated
constraint identifiers at Section 9.2.1, “Identifier Length Limits”.
If the CONSTRAINT symbol clause is not
given in a foreign key definition, or a
symbol is not included following
the CONSTRAINT keyword, MySQL uses the
foreign key index name up to MySQL 8.0.15, and automatically
generates a constraint name thereafter.
The SQL standard specifies that all types of constraints (primary key, unique index, foreign key, check) belong to the same namespace. In MySQL, each constraint type has its own namespace per schema. Consequently, names for each type of constraint must be unique per schema, but constraints of different types can have the same name.
PRIMARY KEY
A unique index where all key columns must be defined as
NOT NULL. If they are not explicitly
declared as NOT NULL, MySQL declares them
so implicitly (and silently). A table can have only one
PRIMARY KEY. The name of a PRIMARY
KEY is always PRIMARY, which thus
cannot be used as the name for any other kind of index.
If you do not have a PRIMARY KEY and an
application asks for the PRIMARY KEY in
your tables, MySQL returns the first UNIQUE
index that has no NULL columns as the
PRIMARY KEY.
In InnoDB tables, keep the PRIMARY
KEY short to minimize storage overhead for secondary
indexes. Each secondary index entry contains a copy of the
primary key columns for the corresponding row. (See
Section 15.6.2.1, “Clustered and Secondary Indexes”.)
In the created table, a PRIMARY KEY is
placed first, followed by all UNIQUE
indexes, and then the nonunique indexes. This helps the MySQL
optimizer to prioritize which index to use and also more
quickly to detect duplicated UNIQUE keys.
A PRIMARY KEY can be a multiple-column
index. However, you cannot create a multiple-column index
using the PRIMARY KEY key attribute in a
column specification. Doing so only marks that single column
as primary. You must use a separate PRIMARY
KEY(key_part, ...) clause.
If a table has a PRIMARY KEY or
UNIQUE NOT NULL index that consists of a
single column that has an integer type, you can use
_rowid to refer to the indexed column in
SELECT statements, as described
in Unique Indexes.
In MySQL, the name of a PRIMARY KEY is
PRIMARY. For other indexes, if you do not
assign a name, the index is assigned the same name as the
first indexed column, with an optional suffix
(_2, _3,
...) to make it unique. You can see index
names for a table using SHOW INDEX FROM tbl_name. See
Section 13.7.7.22, “SHOW INDEX Statement”.
KEY | INDEX
KEY is normally a synonym for
INDEX. The key attribute PRIMARY
KEY can also be specified as just
KEY when given in a column definition. This
was implemented for compatibility with other database systems.
UNIQUE
A UNIQUE index creates a constraint such
that all values in the index must be distinct. An error occurs
if you try to add a new row with a key value that matches an
existing row. For all engines, a UNIQUE
index permits multiple NULL values for
columns that can contain NULL. If you
specify a prefix value for a column in a
UNIQUE index, the column values must be
unique within the prefix length.
If a table has a PRIMARY KEY or
UNIQUE NOT NULL index that consists of a
single column that has an integer type, you can use
_rowid to refer to the indexed column in
SELECT statements, as described
in Unique Indexes.
FULLTEXT
A FULLTEXT index is a special type of index
used for full-text searches. Only the
InnoDB and
MyISAM storage engines support
FULLTEXT indexes. They can be created only
from CHAR,
VARCHAR, and
TEXT columns. Indexing always
happens over the entire column; column prefix indexing is not
supported and any prefix length is ignored if specified. See
Section 12.9, “Full-Text Search Functions”, for details of operation. A
WITH PARSER clause can be specified as an
index_option value to associate a
parser plugin with the index if full-text indexing and
searching operations need special handling. This clause is
valid only for FULLTEXT indexes.
InnoDB and
MyISAM support full-text parser
plugins. See Full-Text Parser Plugins and
Section 29.2.4.4, “Writing Full-Text Parser Plugins” for more
information.
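For example, a sketch of a FULLTEXT index over two string columns (the table is illustrative):
CREATE TABLE articles (
    title VARCHAR(200),
    body  TEXT,
    FULLTEXT INDEX ft_title_body (title, body)
) ENGINE=InnoDB;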
SPATIAL
You can create SPATIAL indexes on spatial
data types. Spatial types are supported only for
InnoDB and MyISAM
tables, and indexed columns must be declared as NOT
NULL. See Section 11.4, “Spatial Data Types”.
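For example (a sketch; the SRID column attribute shown here is optional and requires MySQL 8.0.3 or later):
CREATE TABLE places (
    pt POINT NOT NULL SRID 4326,
    SPATIAL INDEX idx_pt (pt)
) ENGINE=InnoDB;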
FOREIGN KEY
MySQL supports foreign keys, which let you cross-reference
related data across tables, and foreign key constraints, which
help keep this spread-out data consistent. For definition and
option information, see
reference_definition,
and
reference_option.
Partitioned tables employing the
InnoDB storage engine do not
support foreign keys. See
Section 23.6, “Restrictions and Limitations on Partitioning”, for more
information.
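A minimal sketch of a parent/child pair with a named foreign key and an ON DELETE action (table names are illustrative):
CREATE TABLE parent (
    id INT NOT NULL PRIMARY KEY
) ENGINE=InnoDB;

CREATE TABLE child (
    id        INT,
    parent_id INT,
    INDEX par_ind (parent_id),
    CONSTRAINT fk_child_parent FOREIGN KEY (parent_id)
        REFERENCES parent (id)
        ON DELETE CASCADE
) ENGINE=InnoDB;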
CHECK
The CHECK clause enables the creation of
constraints to be checked for data values in table rows. See
Section 13.1.20.6, “CHECK Constraints”.
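A short sketch (CHECK constraints are enforced as of MySQL 8.0.16; earlier 8.0 releases parse and ignore them, and the table name here is illustrative):
CREATE TABLE intervals (
    lo INT,
    hi INT,
    CONSTRAINT lo_nonnegative CHECK (lo >= 0),
    CHECK (hi > lo)
);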
key_part
A key_part specification can
end with ASC or DESC
to specify whether index values are stored in ascending or
descending order. The default is ascending if no order
specifier is given.
Prefixes, defined by the length
attribute, can be up to 767 bytes long for
InnoDB tables that use the
REDUNDANT
or
COMPACT
row format. The prefix length limit is 3072 bytes for
InnoDB tables that use the
DYNAMIC
or
COMPRESSED
row format. For MyISAM tables, the
prefix length limit is 1000 bytes.
Prefix limits are measured in bytes.
However, prefix lengths for index
specifications in CREATE
TABLE, ALTER
TABLE, and CREATE
INDEX statements are interpreted as number of
characters for nonbinary string types
(CHAR,
VARCHAR,
TEXT) and number of bytes
for binary string types
(BINARY,
VARBINARY,
BLOB). Take this into
account when specifying a prefix length for a nonbinary
string column that uses a multibyte character set.
Beginning with MySQL 8.0.17, the
expr for a
key_part specification can take
the form (CAST json_path AS type ARRAY) to create
a multi-valued index on a
JSON column.
Multi-Valued Indexes, provides
detailed information regarding creation of, usage of, and
restrictions and limitations on multi-valued indexes.
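A sketch of such a multi-valued index, following the CAST ... ARRAY form shown above (the table and JSON path are illustrative; requires MySQL 8.0.17 or later):
CREATE TABLE customers (
    custinfo JSON,
    INDEX zips ( (CAST(custinfo->'$.zipcode' AS UNSIGNED ARRAY)) )
);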
index_type
Some storage engines permit you to specify an index type when
creating an index. The syntax for the
index_type specifier is
USING type_name.
Example:
CREATE TABLE lookup (id INT, INDEX USING BTREE (id)) ENGINE = MEMORY;
The preferred position for USING is after
the index column list. It can be given before the column list,
but support for use of the option in that position is
deprecated and will be removed in a future MySQL release.
index_option
index_option values specify
additional options for an index.
KEY_BLOCK_SIZE
For MyISAM tables,
KEY_BLOCK_SIZE optionally specifies the
size in bytes to use for index key blocks. The value is
treated as a hint; a different size could be used if
necessary. A KEY_BLOCK_SIZE value
specified for an individual index definition overrides the
table-level KEY_BLOCK_SIZE value.
For information about the table-level
KEY_BLOCK_SIZE attribute, see
Table Options.
WITH PARSER
The WITH PARSER option can only be used
with FULLTEXT indexes. It associates a
parser plugin with the index if full-text indexing and
searching operations need special handling.
InnoDB and
MyISAM support full-text
parser plugins. If you have a
MyISAM table with an
associated full-text parser plugin, you can convert the
table to InnoDB using ALTER
TABLE.
COMMENT
In MySQL 8.0, index definitions can include an optional comment of up to 1024 characters.
You can set the InnoDB
MERGE_THRESHOLD value for an individual
index using the index_option COMMENT clause. See
Section 15.8.11, “Configuring the Merge Threshold for Index Pages”.
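For instance, a sketch that sets MERGE_THRESHOLD for a single index through its COMMENT (the table name is illustrative):
CREATE TABLE t_merge (
    id INT,
    KEY id_index (id) COMMENT 'MERGE_THRESHOLD=40'
) ENGINE=InnoDB;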
For more information about permissible
index_option values, see
Section 13.1.15, “CREATE INDEX Statement”. For more information about
indexes, see Section 8.3.1, “How MySQL Uses Indexes”.
For reference_definition syntax
details and examples, see
Section 13.1.20.5, “FOREIGN KEY Constraints”.
InnoDB and
NDB tables support checking of
foreign key constraints. The columns of the referenced table
must always be explicitly named. Both ON
DELETE and ON UPDATE actions on
foreign keys are supported. For more detailed information and
examples, see Section 13.1.20.5, “FOREIGN KEY Constraints”.
For other storage engines, MySQL Server parses and ignores the
FOREIGN KEY and
REFERENCES syntax in
CREATE TABLE statements. See
Section 1.8.2.3, “FOREIGN KEY Constraint Differences”.
For users familiar with the ANSI/ISO SQL Standard, please
note that no storage engine, including
InnoDB, recognizes or enforces the
MATCH clause used in referential
integrity constraint definitions. Use of an explicit
MATCH clause will not have the specified
effect, and also causes ON DELETE and
ON UPDATE clauses to be ignored. For
these reasons, specifying MATCH should be
avoided.
The MATCH clause in the SQL standard
controls how NULL values in a composite
(multiple-column) foreign key are handled when comparing to
a primary key. InnoDB essentially
implements the semantics defined by MATCH
SIMPLE, which permit a foreign key to be all or
partially NULL. In that case, the (child
table) row containing such a foreign key is permitted to be
inserted, and does not match any row in the referenced
(parent) table. It is possible to implement other semantics
using triggers.
Additionally, MySQL requires that the referenced columns be
indexed for performance. However, InnoDB
does not enforce any requirement that the referenced columns
be declared UNIQUE or NOT
NULL. The handling of foreign key references to
nonunique keys or keys that contain NULL
values is not well defined for operations such as
UPDATE or DELETE
CASCADE. You are advised to use foreign keys that
reference only keys that are both UNIQUE
(or PRIMARY) and NOT
NULL.
MySQL parses but ignores “inline
REFERENCES specifications” (as
defined in the SQL standard) where the references are
defined as part of the column specification. MySQL accepts
REFERENCES clauses only when specified as
part of a separate FOREIGN KEY
specification.
For information about the RESTRICT,
CASCADE, SET NULL,
NO ACTION, and SET
DEFAULT options, see
Section 13.1.20.5, “FOREIGN KEY Constraints”.
Table options are used to optimize the behavior of the table. In
most cases, you do not have to specify any of them. These options
apply to all storage engines unless otherwise indicated. Options
that do not apply to a given storage engine may be accepted and
remembered as part of the table definition. Such options then
apply if you later use ALTER TABLE
to convert the table to use a different storage engine.
ENGINE
Specifies the storage engine for the table, using one of the
names shown in the following table. The engine name can be
unquoted or quoted. The quoted name
'DEFAULT' is recognized but ignored.
| Storage Engine | Description |
|---|---|
| InnoDB | Transaction-safe tables with row locking and foreign keys. The default storage engine for new tables. See Chapter 15, The InnoDB Storage Engine, and in particular Section 15.1, “Introduction to InnoDB” if you have MySQL experience but are new to InnoDB. |
| MyISAM | The binary portable storage engine that is primarily used for read-only or read-mostly workloads. See Section 16.2, “The MyISAM Storage Engine”. |
| MEMORY | The data for this storage engine is stored only in memory. See Section 16.3, “The MEMORY Storage Engine”. |
| CSV | Tables that store rows in comma-separated values format. See Section 16.4, “The CSV Storage Engine”. |
| ARCHIVE | The archiving storage engine. See Section 16.5, “The ARCHIVE Storage Engine”. |
| EXAMPLE | An example engine. See Section 16.9, “The EXAMPLE Storage Engine”. |
| FEDERATED | Storage engine that accesses remote tables. See Section 16.8, “The FEDERATED Storage Engine”. |
| HEAP | This is a synonym for MEMORY. |
| MERGE | A collection of MyISAM tables used as one table. Also known as MRG_MyISAM. See Section 16.7, “The MERGE Storage Engine”. |
| NDB | Clustered, fault-tolerant, memory-based tables, supporting transactions and foreign keys. Also known as NDBCLUSTER. See Chapter 22, MySQL NDB Cluster 8.0. |
By default, if a storage engine is specified that is not
available, the statement fails with an error. You can override
this behavior by removing
NO_ENGINE_SUBSTITUTION from
the server SQL mode (see Section 5.1.11, “Server SQL Modes”) so that
MySQL allows substitution of the specified engine with the
default storage engine instead. Normally in such cases, this
is InnoDB, which is the default value for
the default_storage_engine
system variable. When
NO_ENGINE_SUBSTITUTION is disabled, a
warning occurs if the storage engine specification is not
honored.
AUTO_INCREMENT
The initial AUTO_INCREMENT value for the
table. In MySQL 8.0, this works for
MyISAM, MEMORY,
InnoDB, and ARCHIVE
tables. To set the first auto-increment value for engines that
do not support the AUTO_INCREMENT table
option, insert a “dummy” row with a value one
less than the desired value after creating the table, and then
delete the dummy row.
For engines that support the AUTO_INCREMENT
table option in CREATE TABLE
statements, you can also use ALTER TABLE
tbl_name AUTO_INCREMENT = N to reset the
AUTO_INCREMENT value. The value cannot be
set lower than the maximum value currently in the column.
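For example (the table and column names here are placeholders), the initial value can be set at creation time and later raised with ALTER TABLE:
CREATE TABLE t_ai_demo (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY
) ENGINE=InnoDB AUTO_INCREMENT=100;

ALTER TABLE t_ai_demo AUTO_INCREMENT = 500;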
AVG_ROW_LENGTH
An approximation of the average row length for your table. You need to set this only for large tables with variable-size rows.
When you create a MyISAM table, MySQL uses
the product of the MAX_ROWS and
AVG_ROW_LENGTH options to decide how big
the resulting table is. If you don't specify either option,
the maximum size for MyISAM data and index
files is 256TB by default. (If your operating system does not
support files that large, table sizes are constrained by the
file size limit.) If you want to keep down the pointer sizes
to make the index smaller and faster and you don't really need
big files, you can decrease the default pointer size by
setting the
myisam_data_pointer_size
system variable. (See
Section 5.1.8, “Server System Variables”.) If you want all
your tables to be able to grow above the default limit and are
willing to have your tables slightly slower and larger than
necessary, you can increase the default pointer size by
setting this variable. Setting the value to 7 permits table
sizes up to 65,536TB.
[DEFAULT] CHARACTER SET
Specifies a default character set for the table.
CHARSET is a synonym for CHARACTER
SET. If the character set name is
DEFAULT, the database character set is
used.
CHECKSUM
Set this to 1 if you want MySQL to maintain a live checksum
for all rows (that is, a checksum that MySQL updates
automatically as the table changes). This makes the table a
little slower to update, but also makes it easier to find
corrupted tables. The CHECKSUM
TABLE statement reports the checksum.
(MyISAM only.)
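A minimal illustration (placeholder names), creating a MyISAM table with a live checksum and then reporting it:
CREATE TABLE t_checksum_demo (
  id  INT,
  val VARCHAR(20)
) ENGINE=MyISAM CHECKSUM=1;

CHECKSUM TABLE t_checksum_demo;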
[DEFAULT] COLLATE
Specifies a default collation for the table.
COMMENT
A comment for the table, up to 2048 characters long.
You can set the InnoDB
MERGE_THRESHOLD value for a table using the
table_option COMMENT clause. See
Section 15.8.11, “Configuring the Merge Threshold for Index Pages”.
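For example, a hypothetical table (placeholder name) that lowers the index page merge threshold to 45 through the table comment might be created as follows:
CREATE TABLE t_merge_demo (id INT) COMMENT='MERGE_THRESHOLD=45';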
Setting NDB_TABLE options.
The table comment in a CREATE TABLE that
creates an NDB table or an
ALTER TABLE statement which
alters one can also be used to specify one to four of the
NDB_TABLE options
NOLOGGING,
READ_BACKUP,
PARTITION_BALANCE, or
FULLY_REPLICATED as a set of name-value
pairs, separated by commas if need be, immediately following
the string NDB_TABLE= that begins the
quoted comment text. An example statement using this syntax
is shown here:
CREATE TABLE t1 (
c1 INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
c2 VARCHAR(100),
c3 VARCHAR(100) )
ENGINE=NDB COMMENT="NDB_TABLE=READ_BACKUP=0,PARTITION_BALANCE=FOR_RP_BY_NODE";
Spaces are not permitted within the quoted string. The string is case-insensitive.
The comment is displayed as part of the output of
SHOW CREATE TABLE. The text of
the comment is also available as the TABLE_COMMENT column of
the MySQL Information Schema
TABLES table.
This comment syntax is also supported with
ALTER TABLE statements for
NDB tables. Keep in mind that a table
comment used with ALTER TABLE replaces any
existing comment which the table might have had previously.
Setting the MERGE_THRESHOLD option in table
comments is not supported for NDB
tables (it is ignored).
For complete syntax information and examples, see Section 13.1.20.10, “Setting NDB_TABLE Options”.
COMPRESSION
The compression algorithm used for page level compression for
InnoDB tables. Supported values include
Zlib, LZ4, and
None. The COMPRESSION
attribute was introduced with the transparent page compression
feature. Page compression is only supported with
InnoDB tables that reside in
file-per-table
tablespaces, and is only available on Linux and Windows
platforms that support sparse files and hole punching. For
more information, see
Section 15.9.2, “InnoDB Page Compression”.
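As an illustration (placeholder table name), page compression might be enabled with the Zlib algorithm as follows, assuming the table resides in a file-per-table tablespace:
CREATE TABLE t_compressed_demo (
  id  INT PRIMARY KEY,
  doc TEXT
) COMPRESSION="zlib";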
CONNECTION
The connection string for a FEDERATED
table.
Older versions of MySQL used a COMMENT
option for the connection string.
DATA DIRECTORY, INDEX
DIRECTORY
For InnoDB, the DATA DIRECTORY='directory'
clause permits creating tables outside of the data directory.
The innodb_file_per_table
variable must be enabled to use the DATA
DIRECTORY clause. The full directory path must be
specified. For more information, see
Section 15.6.1.2, “Creating Tables Externally”.
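For example (the path shown is only a placeholder), an InnoDB table might be created outside the data directory as follows, assuming innodb_file_per_table is enabled:
CREATE TABLE t_external_demo (id INT PRIMARY KEY)
  DATA DIRECTORY = '/external/directory';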
When creating MyISAM tables, you can use
the DATA DIRECTORY='directory'
clause, the INDEX DIRECTORY='directory'
clause, or both. They specify where to put a
MyISAM table's data file and index file,
respectively. Unlike InnoDB tables, MySQL
does not create subdirectories that correspond to the database
name when creating a MyISAM table with a
DATA DIRECTORY or INDEX
DIRECTORY option. Files are created in the directory
that is specified.
You must have the FILE
privilege to use the DATA DIRECTORY or
INDEX DIRECTORY table option.
Table-level DATA DIRECTORY and
INDEX DIRECTORY options are ignored for
partitioned tables. (Bug #32091)
These options work only when you are not using the
--skip-symbolic-links
option. Your operating system must also have a working,
thread-safe realpath() call. See
Section 8.12.2.2, “Using Symbolic Links for MyISAM Tables on Unix”, for more complete
information.
If a MyISAM table is created with no
DATA DIRECTORY option, the
.MYD file is created in the database
directory. By default, if MyISAM finds an
existing .MYD file in this case, it
overwrites it. The same applies to .MYI
files for tables created with no INDEX
DIRECTORY option. To suppress this behavior, start
the server with the
--keep_files_on_create option,
in which case MyISAM does not overwrite
existing files and returns an error instead.
If a MyISAM table is created with a
DATA DIRECTORY or INDEX
DIRECTORY option and an existing
.MYD or .MYI file is
found, MyISAM always returns an error. It will not overwrite a
file in the specified directory.
You cannot use path names that contain the MySQL data
directory with DATA DIRECTORY or
INDEX DIRECTORY. This includes
partitioned tables and individual table partitions. (See Bug
#32167.)
DELAY_KEY_WRITE
Set this to 1 if you want to delay key updates for the table
until the table is closed. See the description of the
delay_key_write system
variable in Section 5.1.8, “Server System Variables”.
(MyISAM only.)
ENCRYPTION
The ENCRYPTION clause enables or disables
page-level data encryption for an InnoDB
table. A keyring plugin must be installed and configured
before encryption can be enabled. Prior to MySQL 8.0.16, the
ENCRYPTION clause can only be specified
when creating a table in a file-per-table tablespace. As of
MySQL 8.0.16, the ENCRYPTION clause can
also be specified when creating a table in a general
tablespace.
As of MySQL 8.0.16, a table inherits the default schema
encryption if an ENCRYPTION clause is not
specified. If the
table_encryption_privilege_check
variable is enabled, the
TABLE_ENCRYPTION_ADMIN
privilege is required to create a table with an
ENCRYPTION clause setting that differs from
the default schema encryption. When creating a table in a
general tablespace, table and tablespace encryption must
match.
As of MySQL 8.0.16, specifying an
ENCRYPTION clause with a value other than
'N' or '' is not
permitted when using a storage engine that does not support
encryption. Previously, the clause was accepted.
For more information, see Section 15.13, “InnoDB Data-at-Rest Encryption”.
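A minimal sketch (placeholder table name), assuming a keyring plugin is installed and configured:
CREATE TABLE t_encrypted_demo (
  id     INT PRIMARY KEY,
  secret VARCHAR(100)
) ENCRYPTION='Y';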
INSERT_METHOD
If you want to insert data into a MERGE
table, you must specify with INSERT_METHOD
the table into which the row should be inserted.
INSERT_METHOD is an option useful for
MERGE tables only. Use a value of
FIRST or LAST to have
inserts go to the first or last table, or a value of
NO to prevent inserts. See
Section 16.7, “The MERGE Storage Engine”.
KEY_BLOCK_SIZE
For MyISAM tables,
KEY_BLOCK_SIZE optionally specifies the
size in bytes to use for index key blocks. The value is
treated as a hint; a different size could be used if
necessary. A KEY_BLOCK_SIZE value specified
for an individual index definition overrides the table-level
KEY_BLOCK_SIZE value.
For InnoDB tables,
KEY_BLOCK_SIZE specifies the
page size in kilobytes to use
for compressed
InnoDB tables. The
KEY_BLOCK_SIZE value is treated as a hint;
a different size could be used by InnoDB if
necessary. KEY_BLOCK_SIZE can only be less
than or equal to the
innodb_page_size value. A
value of 0 represents the default compressed page size, which
is half of the
innodb_page_size value.
Depending on
innodb_page_size, possible
KEY_BLOCK_SIZE values include 0, 1, 2, 4,
8, and 16. See Section 15.9.1, “InnoDB Table Compression” for
more information.
Oracle recommends enabling
innodb_strict_mode when
specifying KEY_BLOCK_SIZE for
InnoDB tables. When
innodb_strict_mode is
enabled, specifying an invalid
KEY_BLOCK_SIZE value returns an error. If
innodb_strict_mode is
disabled, an invalid KEY_BLOCK_SIZE value
results in a warning, and the
KEY_BLOCK_SIZE option is ignored.
The Create_options column in response to
SHOW TABLE STATUS reports the
actual KEY_BLOCK_SIZE used by the table, as
does SHOW CREATE TABLE.
InnoDB only supports
KEY_BLOCK_SIZE at the table level.
KEY_BLOCK_SIZE is not supported with 32KB
and 64KB innodb_page_size
values. InnoDB table compression does not
support these page sizes.
InnoDB does not support the
KEY_BLOCK_SIZE option when creating
temporary tables.
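For illustration (placeholder names), a compressed InnoDB table might specify an 8KB compressed page size as follows, preferably with innodb_strict_mode enabled:
CREATE TABLE t_kbs_demo (
  id  INT PRIMARY KEY,
  val VARCHAR(200)
) ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;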
MAX_ROWS
The maximum number of rows you plan to store in the table. This is not a hard limit, but rather a hint to the storage engine that the table must be able to store at least this many rows.
The use of MAX_ROWS with
NDB tables to control the number of table
partitions is deprecated. It remains supported in later
versions for backward compatibility, but is subject to
removal in a future release. Use PARTITION_BALANCE instead;
see
Setting NDB_TABLE options.
The NDB storage engine treats
this value as a maximum. If you plan to create very large NDB
Cluster tables (containing millions of rows), you should use
this option to ensure that NDB
allocates a sufficient number of index slots in the hash table
used for storing hashes of the table's primary keys, by
setting MAX_ROWS = 2 * rows, where
rows is the number of rows that you
expect to insert into the table.
The maximum MAX_ROWS value is 4294967295;
larger values are truncated to this limit.
MIN_ROWS
The minimum number of rows you plan to store in the table. The
MEMORY storage engine uses this
option as a hint about memory use.
PACK_KEYS
Takes effect only with MyISAM tables. Set
this option to 1 if you want to have smaller indexes. This
usually makes updates slower and reads faster. Setting the
option to 0 disables all packing of keys. Setting it to
DEFAULT tells the storage engine to pack
only long CHAR,
VARCHAR,
BINARY, or
VARBINARY columns.
If you do not use PACK_KEYS, the default is
to pack strings, but not numbers. If you use
PACK_KEYS=1, numbers are packed as well.
When packing binary number keys, MySQL uses prefix compression:
Every key needs one extra byte to indicate how many bytes of the previous key are the same for the next key.
The pointer to the row is stored in high-byte-first order directly after the key, to improve compression.
This means that if you have many equal keys on two consecutive
rows, all following “same” keys usually only take
two bytes (including the pointer to the row). Compare this to
the ordinary case where the following keys takes
storage_size_for_key + pointer_size (where
the pointer size is usually 4). Conversely, you get a
significant benefit from prefix compression only if you have
many numbers that are the same. If all keys are totally
different, you use one byte more per key, if the key is not a
key that can have NULL values. (In this
case, the packed key length is stored in the same byte that is
used to mark if a key is NULL.)
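As an illustration (placeholder names), packed keys might be requested for a MyISAM table as follows:
CREATE TABLE t_pack_demo (
  code CHAR(20),
  KEY (code)
) ENGINE=MyISAM PACK_KEYS=1;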
PASSWORD
This option is unused.
ROW_FORMAT
Defines the physical format in which the rows are stored.
When creating a table with
strict mode disabled,
the storage engine's default row format is used if the
specified row format is not supported. The actual row format
of the table is reported in the Row_format
column in response to SHOW TABLE
STATUS. The Create_options column
shows the row format that was specified in the
CREATE TABLE statement, as does
SHOW CREATE TABLE.
Row format choices differ depending on the storage engine used for the table.
For InnoDB tables:
The default row format is defined by
innodb_default_row_format,
which has a default setting of DYNAMIC.
The default row format is used when the
ROW_FORMAT option is not defined or
when ROW_FORMAT=DEFAULT is used.
If the ROW_FORMAT option is not
defined, or if ROW_FORMAT=DEFAULT is
used, operations that rebuild a table also silently change
the row format of the table to the default defined by
innodb_default_row_format.
For more information, see
Defining the Row Format of a Table.
For more efficient InnoDB storage of
data types, especially BLOB
types, use the DYNAMIC row format. See
DYNAMIC Row Format for
requirements associated with the
DYNAMIC row format.
To enable compression for InnoDB
tables, specify ROW_FORMAT=COMPRESSED.
The ROW_FORMAT=COMPRESSED option is not
supported when creating temporary tables. See
Section 15.9, “InnoDB Table and Page Compression” for requirements
associated with the COMPRESSED row
format.
The row format used in older versions of MySQL can still
be requested by specifying the
REDUNDANT row format.
When you specify a non-default
ROW_FORMAT clause, consider also
enabling the
innodb_strict_mode
configuration option.
ROW_FORMAT=FIXED is not supported. If
ROW_FORMAT=FIXED is specified while
innodb_strict_mode is
disabled, InnoDB issues a warning and
assumes ROW_FORMAT=DYNAMIC. If
ROW_FORMAT=FIXED is specified while
innodb_strict_mode is
enabled, which is the default, InnoDB
returns an error.
For additional information about InnoDB
row formats, see Section 15.10, “InnoDB Row Formats”.
For MyISAM tables, the option value can be
FIXED or DYNAMIC for
static or variable-length row format.
myisampack sets the type to
COMPRESSED. See
Section 16.2.3, “MyISAM Table Storage Formats”.
For NDB tables, the default
ROW_FORMAT is DYNAMIC.
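By way of illustration (placeholder names), the row format can be stated explicitly when creating the table:
CREATE TABLE t_rowfmt_demo (
  id    INT PRIMARY KEY,
  notes TEXT
) ROW_FORMAT=DYNAMIC;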
STATS_AUTO_RECALC
Specifies whether to automatically recalculate
persistent
statistics for an InnoDB table. The
value DEFAULT causes the persistent
statistics setting for the table to be determined by the
innodb_stats_auto_recalc
configuration option. The value 1 causes
statistics to be recalculated when 10% of the data in the
table has changed. The value 0 prevents
automatic recalculation for this table; with this setting,
issue an ANALYZE TABLE
statement to recalculate the statistics after making
substantial changes to the table. For more information about
the persistent statistics feature, see
Section 15.8.10.1, “Configuring Persistent Optimizer Statistics Parameters”.
STATS_PERSISTENT
Specifies whether to enable
persistent
statistics for an InnoDB table. The
value DEFAULT causes the persistent
statistics setting for the table to be determined by the
innodb_stats_persistent
configuration option. The value 1 enables
persistent statistics for the table, while the value
0 turns off this feature. After enabling
persistent statistics through a CREATE
TABLE or ALTER TABLE statement,
issue an ANALYZE TABLE
statement to calculate the statistics, after loading
representative data into the table. For more information about
the persistent statistics feature, see
Section 15.8.10.1, “Configuring Persistent Optimizer Statistics Parameters”.
STATS_SAMPLE_PAGES
The number of index pages to sample when estimating
cardinality and other statistics for an indexed column, such
as those calculated by ANALYZE
TABLE. For more information, see
Section 15.8.10.1, “Configuring Persistent Optimizer Statistics Parameters”.
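A minimal sketch combining these statistics options (placeholder names; the values shown are arbitrary):
CREATE TABLE t_stats_demo (
  id  INT PRIMARY KEY,
  val INT
) STATS_PERSISTENT=1, STATS_AUTO_RECALC=1, STATS_SAMPLE_PAGES=25;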
TABLESPACE
The TABLESPACE clause can be used to create
a table in an existing general tablespace, a file-per-table
tablespace, or the system tablespace.
CREATE TABLE tbl_name ... TABLESPACE [=] tablespace_name
The general tablespace that you specify must exist prior to
using the TABLESPACE clause. For
information about general tablespaces, see
Section 15.6.3.3, “General Tablespaces”.
The tablespace_name is a case-sensitive identifier. It may be
quoted or unquoted. The forward slash character (“/”) is not
permitted. Names beginning with “innodb_” are reserved for
special use.
To create a table in the system tablespace, specify
innodb_system as the tablespace name.
CREATE TABLE tbl_name ... TABLESPACE [=] innodb_system
Using TABLESPACE [=] innodb_system, you can
place a table of any uncompressed row format in the system
tablespace regardless of the
innodb_file_per_table
setting. For example, you can add a table with
ROW_FORMAT=DYNAMIC to the system tablespace
using TABLESPACE [=] innodb_system.
To create a table in a file-per-table tablespace, specify
innodb_file_per_table as the tablespace
name.
CREATE TABLE tbl_name ... TABLESPACE [=] innodb_file_per_table
If innodb_file_per_table is
enabled, you need not specify
TABLESPACE=innodb_file_per_table to
create an InnoDB file-per-table
tablespace. InnoDB tables are created in
file-per-table tablespaces by default when
innodb_file_per_table is
enabled.
The DATA DIRECTORY clause is permitted with
CREATE TABLE ...
TABLESPACE=innodb_file_per_table but is otherwise
not supported for use in combination with the
TABLESPACE clause.
Support for TABLESPACE =
innodb_file_per_table and TABLESPACE =
innodb_temporary clauses with
CREATE
TEMPORARY TABLE is deprecated as of MySQL 8.0.13
and will be removed in a future version of MySQL.
The STORAGE table option is employed only
with NDB tables.
STORAGE determines the type of storage used
(disk or memory), and can be either DISK or
MEMORY.
TABLESPACE ... STORAGE DISK assigns a table
to an NDB Cluster Disk Data tablespace. The tablespace must
already have been created using CREATE
TABLESPACE. See
Section 22.5.10, “NDB Cluster Disk Data Tables”, for more
information.
A STORAGE clause cannot be used in a
CREATE TABLE statement
without a TABLESPACE clause.
UNION [=] (tbl_name[, tbl_name]...)
Used to access a collection of identical
MyISAM tables as one. This works only with
MERGE tables. See
Section 16.7, “The MERGE Storage Engine”.
You must have SELECT,
UPDATE, and
DELETE privileges for the
tables you map to a MERGE table.
Formerly, all tables used had to be in the same database as
the MERGE table itself. This restriction
no longer applies.
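For illustration (placeholder names), the underlying MyISAM tables and a MERGE table mapping to them might be created as follows:
CREATE TABLE log1 (msg VARCHAR(100)) ENGINE=MyISAM;
CREATE TABLE log2 (msg VARCHAR(100)) ENGINE=MyISAM;
CREATE TABLE all_logs (msg VARCHAR(100))
  ENGINE=MERGE UNION=(log1,log2) INSERT_METHOD=LAST;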
partition_options can be used to
control partitioning of the table created with
CREATE TABLE.
Not all options shown in the syntax for
partition_options at the beginning of
this section are available for all partitioning types. Please see
the listings for the following individual types for information
specific to each type, and see Chapter 23, Partitioning, for
more complete information about the workings of and uses for
partitioning in MySQL, as well as additional examples of table
creation and other statements relating to MySQL partitioning.
Partitions can be modified, merged, added to tables, and dropped from tables. For basic information about the MySQL statements to accomplish these tasks, see Section 13.1.9, “ALTER TABLE Statement”. For more detailed descriptions and examples, see Section 23.3, “Partition Management”.
PARTITION BY
If used, a partition_options clause
begins with PARTITION BY. This clause
contains the function that is used to determine the partition;
the function returns an integer value ranging from 1 to
num, where
num is the number of partitions.
(The maximum number of user-defined partitions which a table
may contain is 1024; the number of
subpartitions—discussed later in this section—is
included in this maximum.)
The expression (expr) used in a
PARTITION BY clause cannot refer to any
columns not in the table being created; such references are
specifically not permitted and cause the statement to fail
with an error. (Bug #29444)
HASH(expr)
Hashes one or more columns to create a key for placing and
locating rows. expr is an
expression using one or more table columns. This can be any
valid MySQL expression (including MySQL functions) that yields
a single integer value. For example, these are both valid
CREATE TABLE statements using
PARTITION BY HASH:
CREATE TABLE t1 (col1 INT, col2 CHAR(5))
PARTITION BY HASH(col1);
CREATE TABLE t1 (col1 INT, col2 CHAR(5), col3 DATETIME)
PARTITION BY HASH ( YEAR(col3) );
You may not use either VALUES LESS THAN or
VALUES IN clauses with PARTITION
BY HASH.
PARTITION BY HASH uses the remainder of
expr divided by the number of
partitions (that is, the modulus). For examples and additional
information, see Section 23.2.4, “HASH Partitioning”.
The LINEAR keyword entails a somewhat
different algorithm. In this case, the number of the partition
in which a row is stored is calculated as the result of one or
more logical AND operations. For
discussion and examples of linear hashing, see
Section 23.2.4.1, “LINEAR HASH Partitioning”.
KEY(column_list)
This is similar to HASH, except that MySQL
supplies the hashing function so as to guarantee an even data
distribution. The column_list
argument is simply a list of 1 or more table columns (maximum:
16). This example shows a simple table partitioned by key,
with 4 partitions:
CREATE TABLE tk (col1 INT, col2 CHAR(5), col3 DATE)
PARTITION BY KEY(col3)
PARTITIONS 4;
For tables that are partitioned by key, you can employ linear
partitioning by using the LINEAR keyword.
This has the same effect as with tables that are partitioned
by HASH. That is, the partition number is
found using the
&
operator rather than the modulus (see
Section 23.2.4.1, “LINEAR HASH Partitioning”, and
Section 23.2.5, “KEY Partitioning”, for details). This example
uses linear partitioning by key to distribute data between 5
partitions:
CREATE TABLE tk (col1 INT, col2 CHAR(5), col3 DATE)
PARTITION BY LINEAR KEY(col3)
PARTITIONS 5;
The ALGORITHM={1 | 2} option is supported
with [SUB]PARTITION BY [LINEAR] KEY.
ALGORITHM=1 causes the server to use the
same key-hashing functions as MySQL 5.1;
ALGORITHM=2 means that the server employs
the key-hashing functions implemented and used by default for
new KEY partitioned tables in MySQL 5.5 and
later. (Partitioned tables created with the key-hashing
functions employed in MySQL 5.5 and later cannot be used by a
MySQL 5.1 server.) Not specifying the option has the same
effect as using ALGORITHM=2. This option is
intended for use chiefly when upgrading or downgrading
[LINEAR] KEY partitioned tables between
MySQL 5.1 and later MySQL versions, or for creating tables
partitioned by KEY or LINEAR
KEY on a MySQL 5.5 or later server which can be used
on a MySQL 5.1 server. For more information, see
Section 13.1.9.1, “ALTER TABLE Partition Operations”.
mysqldump in MySQL 5.7 (and later) writes this option encased in versioned comments, like this:
CREATE TABLE t1 (a INT)
/*!50100 PARTITION BY KEY */ /*!50611 ALGORITHM = 1 */ /*!50100 ()
PARTITIONS 3 */
This causes MySQL 5.6.10 and earlier servers to ignore the
option, which would otherwise cause a syntax error in those
versions. If you plan to load a dump made on a MySQL 5.7
server where you use tables that are partitioned or
subpartitioned by KEY into a MySQL 5.6
server previous to version 5.6.11, be sure to consult
Changes in MySQL 5.6,
before proceeding. (The information found there also applies
if you are loading a dump containing KEY
partitioned or subpartitioned tables made from a MySQL
5.7—actually 5.6.11 or later—server into a MySQL
5.5.30 or earlier server.)
Also in MySQL 5.6.11 and later, ALGORITHM=1
is shown when necessary in the output of
SHOW CREATE TABLE using
versioned comments in the same manner as
mysqldump. ALGORITHM=2
is always omitted from SHOW CREATE TABLE
output, even if this option was specified when creating the
original table.
You may not use either VALUES LESS THAN or
VALUES IN clauses with PARTITION
BY KEY.
RANGE(expr)
In this case, expr shows a range of
values using a set of VALUES LESS THAN
operators. When using range partitioning, you must define at
least one partition using VALUES LESS THAN.
You cannot use VALUES IN with range
partitioning.
For tables partitioned by RANGE,
VALUES LESS THAN must be used with either
an integer literal value or an expression that evaluates to
a single integer value. In MySQL 8.0, you can
overcome this limitation in a table that is defined using
PARTITION BY RANGE COLUMNS, as described
later in this section.
Suppose that you have a table that you wish to partition on a column containing year values, according to the following scheme.
| Partition Number: | Years Range: |
|---|---|
| 0 | 1990 and earlier |
| 1 | 1991 to 1994 |
| 2 | 1995 to 1998 |
| 3 | 1999 to 2002 |
| 4 | 2003 to 2005 |
| 5 | 2006 and later |
A table implementing such a partitioning scheme can be
realized by the CREATE TABLE
statement shown here:
CREATE TABLE t1 (
year_col INT,
some_data INT
)
PARTITION BY RANGE (year_col) (
PARTITION p0 VALUES LESS THAN (1991),
PARTITION p1 VALUES LESS THAN (1995),
PARTITION p2 VALUES LESS THAN (1999),
PARTITION p3 VALUES LESS THAN (2002),
PARTITION p4 VALUES LESS THAN (2006),
PARTITION p5 VALUES LESS THAN MAXVALUE
);
PARTITION ... VALUES LESS THAN ...
statements work in a consecutive fashion. VALUES LESS
THAN MAXVALUE works to specify
“leftover” values that are greater than the
maximum value otherwise specified.
VALUES LESS THAN clauses work sequentially
in a manner similar to that of the case
portions of a switch ... case block (as
found in many programming languages such as C, Java, and PHP).
That is, the clauses must be arranged in such a way that the
upper limit specified in each successive VALUES LESS
THAN is greater than that of the previous one, with
the one referencing MAXVALUE coming last of
all in the list.
RANGE COLUMNS(column_list)
This variant on RANGE facilitates partition
pruning for queries using range conditions on multiple columns
(that is, having conditions such as WHERE a = 1 AND b
< 10 or WHERE a = 1 AND b = 10 AND c
< 10). It enables you to specify value ranges in
multiple columns by using a list of columns in the
COLUMNS clause and a set of column values
in each PARTITION ... VALUES LESS THAN
(value_list) partition definition clause. (In the
simplest case, this set consists of a single column.) The
maximum number of columns that can be referenced in the
column_list and value_list is 16.
The column_list used in the
COLUMNS clause may contain only names of
columns; each column in the list must be one of the following
MySQL data types: the integer types; the string types; and
time or date column types. Columns using
BLOB, TEXT,
SET, ENUM,
BIT, or spatial data types are not
permitted; columns that use floating-point number types are
also not permitted. You also may not use functions or
arithmetic expressions in the COLUMNS
clause.
The VALUES LESS THAN clause used in a
partition definition must specify a literal value for each
column that appears in the COLUMNS()
clause; that is, the list of values used for each
VALUES LESS THAN clause must contain the
same number of values as there are columns listed in the
COLUMNS clause. An attempt to use more or
fewer values in a VALUES LESS THAN clause
than there are in the COLUMNS clause causes
the statement to fail with the error Inconsistency
in usage of column lists for partitioning.... You
cannot use NULL for any value appearing in
VALUES LESS THAN. It is possible to use
MAXVALUE more than once for a given column
other than the first, as shown in this example:
CREATE TABLE rc (
a INT NOT NULL,
b INT NOT NULL
)
PARTITION BY RANGE COLUMNS(a,b) (
PARTITION p0 VALUES LESS THAN (10,5),
PARTITION p1 VALUES LESS THAN (20,10),
PARTITION p2 VALUES LESS THAN (50,MAXVALUE),
PARTITION p3 VALUES LESS THAN (65,MAXVALUE),
PARTITION p4 VALUES LESS THAN (MAXVALUE,MAXVALUE)
);
Each value used in a VALUES LESS THAN value
list must match the type of the corresponding column exactly;
no conversion is made. For example, you cannot use the string
'1' for a value that matches a column that
uses an integer type (you must use the numeral
1 instead), nor can you use the numeral
1 for a value that matches a column that
uses a string type (in such a case, you must use a quoted
string: '1').
For more information, see Section 23.2.1, “RANGE Partitioning”, and Section 23.4, “Partition Pruning”.
LIST(expr)
This is useful when assigning partitions based on a table
column with a restricted set of possible values, such as a
state or country code. In such a case, all rows pertaining to
a certain state or country can be assigned to a single
partition, or a partition can be reserved for a certain set of
states or countries. It is similar to
RANGE, except that only VALUES
IN may be used to specify permissible values for
each partition.
VALUES IN is used with a list of values to
be matched. For instance, you could create a partitioning
scheme such as the following:
CREATE TABLE client_firms (
id INT,
name VARCHAR(35)
)
PARTITION BY LIST (id) (
PARTITION r0 VALUES IN (1, 5, 9, 13, 17, 21),
PARTITION r1 VALUES IN (2, 6, 10, 14, 18, 22),
PARTITION r2 VALUES IN (3, 7, 11, 15, 19, 23),
PARTITION r3 VALUES IN (4, 8, 12, 16, 20, 24)
);
When using list partitioning, you must define at least one
partition using VALUES IN. You cannot use
VALUES LESS THAN with PARTITION BY
LIST.
For tables partitioned by LIST, the value
list used with VALUES IN must consist of
integer values only. In MySQL 8.0, you can
overcome this limitation using partitioning by LIST
COLUMNS, which is described later in this section.
LIST COLUMNS(column_list)
This variant on LIST facilitates partition
pruning for queries using comparison conditions on multiple
columns (that is, having conditions such as WHERE a =
5 AND b = 5 or WHERE a = 1 AND b = 10 AND c
= 5). It enables you to specify values in multiple
columns by using a list of columns in the
COLUMNS clause and a set of column values
in each PARTITION ... VALUES IN
(value_list) partition definition clause.
The rules governing data types for the column list used in
LIST COLUMNS(column_list) and the
value list used in VALUES IN(value_list)
are the same as those for the column list used in
RANGE COLUMNS(column_list) and the
value list used in VALUES LESS THAN(value_list),
respectively, except that in the VALUES IN
clause, MAXVALUE is not permitted, and you
may use NULL.
There is one important difference between the list of values
used for VALUES IN with PARTITION
BY LIST COLUMNS as opposed to when it is used with
PARTITION BY LIST. When used with
PARTITION BY LIST COLUMNS, each element in
the VALUES IN clause must be a
set of column values; the number of
values in each set must be the same as the number of columns
used in the COLUMNS clause, and the data
types of these values must match those of the columns (and
occur in the same order). In the simplest case, the set
consists of a single column. The maximum number of columns
that can be used in the column_list
and in the elements making up the
value_list is 16.
The table defined by the following CREATE
TABLE statement provides an example of a table using
LIST COLUMNS partitioning:
CREATE TABLE lc (
a INT NULL,
b INT NULL
)
PARTITION BY LIST COLUMNS(a,b) (
PARTITION p0 VALUES IN( (0,0), (NULL,NULL) ),
PARTITION p1 VALUES IN( (0,1), (0,2), (0,3), (1,1), (1,2) ),
PARTITION p2 VALUES IN( (1,0), (2,0), (2,1), (3,0), (3,1) ),
PARTITION p3 VALUES IN( (1,3), (2,2), (2,3), (3,2), (3,3) )
);
PARTITIONS num
The number of partitions may optionally be specified with a
PARTITIONS num clause, where
num is the number of
partitions. If both this clause and any
partitions. If both this clause and any
PARTITION clauses are used,
num must be equal to the total
number of any partitions that are declared using
PARTITION clauses.
Whether or not you use a PARTITIONS
clause in creating a table that is partitioned by
RANGE or LIST, you
must still include at least one PARTITION
VALUES clause in the table definition (see below).
SUBPARTITION BY
A partition may optionally be divided into a number of
subpartitions. This can be indicated by using the optional
SUBPARTITION BY clause. Subpartitioning may
be done by HASH or KEY.
Either of these may be LINEAR. These work
in the same way as previously described for the equivalent
partitioning types. (It is not possible to subpartition by
LIST or RANGE.)
The number of subpartitions can be indicated using the
SUBPARTITIONS keyword followed by an
integer value.
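For illustration (placeholder names), a RANGE-partitioned table might be subpartitioned by HASH as follows:
CREATE TABLE ts (id INT, purchased DATE)
    PARTITION BY RANGE ( YEAR(purchased) )
    SUBPARTITION BY HASH ( TO_DAYS(purchased) )
    SUBPARTITIONS 2 (
        PARTITION p0 VALUES LESS THAN (1990),
        PARTITION p1 VALUES LESS THAN (2000),
        PARTITION p2 VALUES LESS THAN MAXVALUE
    );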
Rigorous checking of the value used in
PARTITIONS or
SUBPARTITIONS clauses is applied and this
value must adhere to the following rules:
The value must be a positive, nonzero integer.
No leading zeros are permitted.
The value must be an integer literal, and cannot be an
expression. For example, PARTITIONS
0.2E+01 is not permitted, even though
0.2E+01 evaluates to
2. (Bug #15890)
partition_definition
Each partition may be individually defined using a
partition_definition clause. The
individual parts making up this clause are as follows:
PARTITION
partition_name
Specifies a logical name for the partition.
VALUES
For range partitioning, each partition must include a
VALUES LESS THAN clause; for list
partitioning, you must specify a VALUES
IN clause for each partition. This is used to
determine which rows are to be stored in this partition.
See the discussions of partitioning types in
Chapter 23, Partitioning, for syntax examples.
[STORAGE] ENGINE
MySQL accepts a [STORAGE] ENGINE option
for both PARTITION and
SUBPARTITION. Currently, the only way
in which this option can be used is to set all partitions
or all subpartitions to the same storage engine, and an
attempt to set different storage engines for partitions or
subpartitions in the same table will give rise to the
error ERROR 1469 (HY000): The mix of handlers
in the partitions is not permitted in this version of
MySQL.
COMMENT
An optional COMMENT clause may be used
to specify a string that describes the partition. Example:
COMMENT = 'Data for the years previous to 1999'
The maximum length for a partition comment is 1024 characters.
DATA DIRECTORY and INDEX
DIRECTORY
DATA DIRECTORY and INDEX
DIRECTORY may be used to indicate the directory
where, respectively, the data and indexes for this
partition are to be stored. Both the data_dir
and the index_dir must be absolute system path names.
You must have the FILE
privilege to use the DATA DIRECTORY or
INDEX DIRECTORY partition option.
Example:
CREATE TABLE th (id INT, name VARCHAR(30), adate DATE)
PARTITION BY LIST(YEAR(adate))
(
PARTITION p1999 VALUES IN (1995, 1999, 2003)
DATA DIRECTORY = '/var/appdata/95/data'
INDEX DIRECTORY = '/var/appdata/95/idx',
PARTITION p2000 VALUES IN (1996, 2000, 2004)
DATA DIRECTORY = '/var/appdata/96/data'
INDEX DIRECTORY = '/var/appdata/96/idx',
PARTITION p2001 VALUES IN (1997, 2001, 2005)
DATA DIRECTORY = '/var/appdata/97/data'
INDEX DIRECTORY = '/var/appdata/97/idx',
PARTITION p2002 VALUES IN (1998, 2002, 2006)
DATA DIRECTORY = '/var/appdata/98/data'
INDEX DIRECTORY = '/var/appdata/98/idx'
);
DATA DIRECTORY and INDEX
DIRECTORY behave in the same way as in the
CREATE TABLE
statement's table_option
clause as used for MyISAM tables.
One data directory and one index directory may be specified per partition. If left unspecified, the data and indexes are stored by default in the table's database directory.
The DATA DIRECTORY and INDEX
DIRECTORY options are ignored for creating
partitioned tables if
NO_DIR_IN_CREATE is in
effect.
MAX_ROWS and
MIN_ROWS
May be used to specify, respectively, the maximum and
minimum number of rows to be stored in the partition. The
values for max_number_of_rows
and min_number_of_rows must be
positive integers. As with the table-level options with
the same names, these act only as
“suggestions” to the server and are not hard
limits.
TABLESPACE
May be used to designate an InnoDB
file-per-table tablespace for the partition by specifying
TABLESPACE `innodb_file_per_table`. All
partitions must belong to the same storage engine.
Placing InnoDB table partitions in
shared InnoDB tablespaces is not
supported. Shared tablespaces include the
InnoDB system tablespace and general
tablespaces.
subpartition_definition
The partition definition may optionally contain one or more
subpartition_definition clauses.
Each of these consists at a minimum of the
SUBPARTITION name, where
name is an identifier for the
subpartition. Except for the replacement of the
PARTITION keyword with
SUBPARTITION, the syntax for a subpartition
definition is identical to that for a partition definition.
Subpartitioning must be done by HASH or
KEY, and can be done only on
RANGE or LIST
partitions. See Section 23.2.6, “Subpartitioning”.
Partitioning by Generated Columns
Partitioning by generated columns is permitted. For example:
CREATE TABLE t1 ( s1 INT, s2 INT AS (EXP(s1)) STORED ) PARTITION BY LIST (s2) ( PARTITION p1 VALUES IN (1) );
Partitioning sees a generated column as a regular column, which
enables workarounds for limitations on functions that are not
permitted for partitioning (see
Section 23.6.3, “Partitioning Limitations Relating to Functions”). The
preceding example demonstrates this technique:
EXP() cannot be used directly in
the PARTITION BY clause, but a generated column
defined using EXP() is permitted.
For an InnoDB table created in a
file-per-table tablespace or general tablespace, table data and
associated indexes are stored in a
.ibd file in the database
directory. When an InnoDB table is created in
the system tablespace, table data and indexes are stored in the
ibdata* files that
represent the system tablespace. The
innodb_file_per_table option
controls whether tables are created in file-per-table
tablespaces or the system tablespace, by default. The
TABLESPACE option can be used to place a
table in a file-per-table tablespace, general tablespace, or the
system tablespace, regardless of the
innodb_file_per_table setting.
For MyISAM tables, the storage engine creates
data and index files. Thus, for each MyISAM
table tbl_name, there are two disk
files.
| File | Purpose |
|---|---|
| tbl_name.MYD | Data file |
| tbl_name.MYI | Index file |
Chapter 16, Alternative Storage Engines, describes what files each storage engine creates to represent tables. If a table name contains special characters, the names for the table files contain encoded versions of those characters as described in Section 9.2.4, “Mapping of Identifiers to File Names”.
You can use the TEMPORARY keyword when
creating a table. A TEMPORARY table is
visible only within the current session, and is dropped
automatically when the session is closed. This means that two
different sessions can use the same temporary table name without
conflicting with each other or with an existing
non-TEMPORARY table of the same name. (The
existing table is hidden until the temporary table is dropped.)
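A minimal illustration (placeholder names); the temporary table exists only for the session that creates it and is removed automatically when that session ends:
CREATE TEMPORARY TABLE tmp_results (
  id    INT,
  score DECIMAL(5,2)
);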
InnoDB does not support compressed temporary
tables. When innodb_strict_mode
is enabled (the default),
CREATE TEMPORARY
TABLE returns an error if
ROW_FORMAT=COMPRESSED or
KEY_BLOCK_SIZE is specified. If
innodb_strict_mode is disabled,
warnings are issued and the temporary table is created using a
non-compressed row format. The
innodb_file_per_table option
does not affect the creation of InnoDB
temporary tables.
CREATE TABLE causes an implicit
commit, except when used with the TEMPORARY
keyword. See Section 13.3.3, “Statements That Cause an Implicit Commit”.
TEMPORARY tables have a very loose
relationship with databases (schemas). Dropping a database does
not automatically drop any TEMPORARY tables
created within that database.
To create a temporary table, you must have the
CREATE TEMPORARY TABLES
privilege. After a session has created a temporary table, the
server performs no further privilege checks on the table. The
creating session can perform any operation on the table, such as
DROP TABLE,
INSERT,
UPDATE, or
SELECT.
One implication of this behavior is that a session can
manipulate its temporary tables even if the current user has no
privilege to create them. Suppose that the current user does not
have the CREATE TEMPORARY TABLES
privilege but is able to execute a definer-context stored
procedure that executes with the privileges of a user who does
have CREATE TEMPORARY TABLES and
that creates a temporary table. While the procedure executes,
the session uses the privileges of the defining user. After the
procedure returns, the effective privileges revert to those of
the current user, which can still see the temporary table and
perform any operation on it.
You cannot use CREATE TEMPORARY TABLE ... LIKE
to create an empty table based on the definition of a table that
resides in the mysql tablespace,
InnoDB system tablespace
(innodb_system), or a general tablespace. The
tablespace definition for such a table includes a
TABLESPACE attribute that defines the
tablespace where the table resides, and the aforementioned
tablespaces do not support temporary tables. To create a
temporary table based on the definition of such a table, use
this syntax instead:
CREATE TEMPORARY TABLE new_tbl SELECT * FROM orig_tbl LIMIT 0;
Support for TABLESPACE =
innodb_file_per_table and TABLESPACE =
innodb_temporary clauses with
CREATE TEMPORARY
TABLE is deprecated as of MySQL 8.0.13 and will be
removed in a future version of MySQL.
Use CREATE TABLE ... LIKE to create an empty
table based on the definition of another table, including any
column attributes and indexes defined in the original table:
CREATE TABLE new_tbl LIKE orig_tbl;
The copy is created using the same version of the table storage
format as the original table. The
SELECT privilege is required on
the original table.
LIKE works only for base tables, not for
views.
You cannot execute CREATE TABLE or
CREATE TABLE ... LIKE while a
LOCK TABLES statement is in
effect.
CREATE TABLE ...
LIKE makes the same checks as
CREATE TABLE. This means that
if the current SQL mode is different from the mode in effect
when the original table was created, the table definition
might be considered invalid for the new mode and the statement
will fail.
For CREATE TABLE ... LIKE, the destination
table preserves generated column information from the original
table.
For CREATE TABLE ... LIKE, the destination
table preserves expression default values from the original
table.
For CREATE TABLE ... LIKE, the destination
table preserves CHECK constraints from the
original table, except that all the constraint names are
generated.
CREATE TABLE ... LIKE does not preserve any
DATA DIRECTORY or INDEX
DIRECTORY table options that were specified for the
original table, or any foreign key definitions.
If the original table is a TEMPORARY table,
CREATE TABLE ... LIKE does not preserve
TEMPORARY. To create a
TEMPORARY destination table, use
CREATE TEMPORARY TABLE ... LIKE.
Tables created in the mysql tablespace, the
InnoDB system tablespace
(innodb_system), or general tablespaces
include a TABLESPACE attribute in the table
definition, which defines the tablespace where the table
resides. Due to a temporary regression, CREATE TABLE
... LIKE preserves the TABLESPACE
attribute and creates the table in the defined tablespace
regardless of the
innodb_file_per_table setting.
To avoid the TABLESPACE attribute when
creating an empty table based on the definition of such a table,
use this syntax instead:
CREATE TABLE new_tbl SELECT * FROM orig_tbl LIMIT 0;
You can create one table from another by adding a
SELECT statement at the end of
the CREATE TABLE statement:
CREATE TABLE new_tbl [AS] SELECT * FROM orig_tbl;
MySQL creates new columns for all elements in the
SELECT. For example:
mysql> CREATE TABLE test (a INT NOT NULL AUTO_INCREMENT,
    ->        PRIMARY KEY (a), KEY(b))
    ->        ENGINE=MyISAM SELECT b,c FROM test2;
This creates a MyISAM table with
three columns, a, b, and
c. The ENGINE option is
part of the CREATE TABLE
statement, and should not be used following the
SELECT; this would result in a
syntax error. The same is true for other
CREATE TABLE options such as
CHARSET.
Notice that the columns from the
SELECT statement are appended to
the right side of the table, not overlapped onto it. Take the
following example:
mysql> SELECT * FROM foo;
+---+
| n |
+---+
| 1 |
+---+

mysql> CREATE TABLE bar (m INT) SELECT n FROM foo;
Query OK, 1 row affected (0.02 sec)
Records: 1  Duplicates: 0  Warnings: 0

mysql> SELECT * FROM bar;
+------+---+
| m    | n |
+------+---+
| NULL | 1 |
+------+---+
1 row in set (0.00 sec)
For each row in table foo, a row is inserted
in bar with the values from
foo and default values for the new columns.
In a table resulting from
CREATE TABLE ...
SELECT, columns named only in the
CREATE TABLE part come first.
Columns named in both parts or only in the
SELECT part come after that. The
data type of SELECT columns can
be overridden by also specifying the column in the
CREATE TABLE part.
If any errors occur while copying the data to the table, it is automatically dropped and not created.
You can precede the SELECT by
IGNORE or REPLACE to
indicate how to handle rows that duplicate unique key values.
With IGNORE, rows that duplicate an existing
row on a unique key value are discarded. With
REPLACE, new rows replace rows that have the
same unique key value. If neither IGNORE nor
REPLACE is specified, duplicate unique key
values result in an error. For more information, see
Comparison of the IGNORE Keyword and Strict SQL Mode.
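As a sketch (placeholder table and column names), duplicate-key handling might be specified like this:
CREATE TABLE dst (id INT PRIMARY KEY, val INT)
  IGNORE SELECT id, val FROM src;

CREATE TABLE dst2 (id INT PRIMARY KEY, val INT)
  REPLACE SELECT id, val FROM src;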
In MySQL 8.0.19 and later, you can also use a
VALUES statement in the
SELECT part of CREATE
TABLE ... SELECT; the VALUES
portion of the statement must include a table alias using an
AS clause. To name the columns coming from
VALUES, supply column aliases with the table
alias; otherwise, the default column names
column_0, column_1,
column_2, ..., are used.
Otherwise, naming of columns in the table thus created follows the same rules as described previously in this section. Examples:
mysql> CREATE TABLE tv1
    ->     SELECT * FROM (VALUES ROW(1,3,5), ROW(2,4,6)) AS v;

mysql> TABLE tv1;
+----------+----------+----------+
| column_0 | column_1 | column_2 |
+----------+----------+----------+
|        1 |        3 |        5 |
|        2 |        4 |        6 |
+----------+----------+----------+

mysql> CREATE TABLE tv2
    ->     SELECT * FROM (VALUES ROW(1,3,5), ROW(2,4,6)) AS v(x,y,z);

mysql> TABLE tv2;
+---+---+---+
| x | y | z |
+---+---+---+
| 1 | 3 | 5 |
| 2 | 4 | 6 |
+---+---+---+

mysql> CREATE TABLE tv3 (a INT, b INT, c INT)
    ->     SELECT * FROM (VALUES ROW(1,3,5), ROW(2,4,6)) AS v;

mysql> TABLE tv3;
+------+------+------+----------+----------+----------+
| a    | b    | c    | column_0 | column_1 | column_2 |
+------+------+------+----------+----------+----------+
| NULL | NULL | NULL |        1 |        3 |        5 |
| NULL | NULL | NULL |        2 |        4 |        6 |
+------+------+------+----------+----------+----------+

mysql> CREATE TABLE tv4 (a INT, b INT, c INT)
    ->     SELECT * FROM (VALUES ROW(1,3,5), ROW(2,4,6)) AS v(x,y,z);

mysql> TABLE tv4;
+------+------+------+---+---+---+
| a    | b    | c    | x | y | z |
+------+------+------+---+---+---+
| NULL | NULL | NULL | 1 | 3 | 5 |
| NULL | NULL | NULL | 2 | 4 | 6 |
+------+------+------+---+---+---+

mysql> CREATE TABLE tv5 (a INT, b INT, c INT)
    ->     SELECT * FROM (VALUES ROW(1,3,5), ROW(2,4,6)) AS v(a,b,c);

mysql> TABLE tv5;
+------+------+------+
| a    | b    | c    |
+------+------+------+
|    1 |    3 |    5 |
|    2 |    4 |    6 |
+------+------+------+
When selecting all columns and using the default column names,
you can omit SELECT *, so the statement just
used to create table tv1 can also be written
as shown here:
mysql> CREATE TABLE tv1 VALUES ROW(1,3,5), ROW(2,4,6);

mysql> TABLE tv1;
+----------+----------+----------+
| column_0 | column_1 | column_2 |
+----------+----------+----------+
|        1 |        3 |        5 |
|        2 |        4 |        6 |
+----------+----------+----------+
When using VALUES as the source
of the SELECT, all columns are
always selected into the new table, and individual columns
cannot be selected as they can be when selecting from a named
table; each of the following statements produces an error
(ER_OPERAND_COLUMNS):
CREATE TABLE tvx
SELECT (x,z) FROM (VALUES ROW(1,3,5), ROW(2,4,6)) AS v(x,y,z);
CREATE TABLE tvx (a INT, c INT)
SELECT (x,z) FROM (VALUES ROW(1,3,5), ROW(2,4,6)) AS v(x,y,z);
Similarly, you can use a TABLE
statement in place of the SELECT.
This follows the same rules as with
VALUES; all columns of the source
table and their names in the source table are always inserted
into the new table. Examples:
mysql> TABLE t1;
+----+----+
| a  | b  |
+----+----+
|  1 |  2 |
|  6 |  7 |
| 10 | -4 |
| 14 |  6 |
+----+----+

mysql> CREATE TABLE tt1 TABLE t1;

mysql> TABLE tt1;
+----+----+
| a  | b  |
+----+----+
|  1 |  2 |
|  6 |  7 |
| 10 | -4 |
| 14 |  6 |
+----+----+

mysql> CREATE TABLE tt2 (x INT) TABLE t1;

mysql> TABLE tt2;
+------+----+----+
| x    | a  | b  |
+------+----+----+
| NULL |  1 |  2 |
| NULL |  6 |  7 |
| NULL | 10 | -4 |
| NULL | 14 |  6 |
+------+----+----+
Because the ordering of the rows in the underlying
SELECT statements cannot always
be determined, CREATE TABLE ... IGNORE SELECT
and CREATE TABLE ... REPLACE SELECT
statements are flagged as unsafe for statement-based
replication. Such statements produce a warning in the error log
when using statement-based mode and are written to the binary
log using the row-based format when using
MIXED mode. See also
Section 17.2.1.1, “Advantages and Disadvantages of Statement-Based and Row-Based
Replication”.
CREATE TABLE ...
SELECT does not automatically create any indexes for
you. This is done intentionally to make the statement as
flexible as possible. If you want to have indexes in the created
table, you should specify these before the
SELECT statement:
mysql> CREATE TABLE bar (UNIQUE (n)) SELECT n FROM foo;
For CREATE TABLE ... SELECT, the destination
table does not preserve information about whether columns in the
selected-from table are generated columns. The
SELECT part of the statement
cannot assign values to generated columns in the destination
table.
For CREATE TABLE ... SELECT, the destination
table does preserve expression default values from the original
table.
Some conversion of data types might occur. For example, the
AUTO_INCREMENT attribute is not preserved,
and VARCHAR columns can become
CHAR columns. Retained
attributes are NULL (or NOT
NULL) and, for those columns that have them,
CHARACTER SET, COLLATION,
COMMENT, and the DEFAULT
clause.
When creating a table with
CREATE
TABLE ... SELECT, make sure to alias any function
calls or expressions in the query. If you do not, the
CREATE statement might fail or result in
undesirable column names.
CREATE TABLE artists_and_works
  SELECT artist.name, COUNT(work.artist_id) AS number_of_works
  FROM artist LEFT JOIN work ON artist.id = work.artist_id
  GROUP BY artist.id;
You can also explicitly specify the data type for a column in the created table:
CREATE TABLE foo (a TINYINT NOT NULL) SELECT b+1 AS a FROM bar;
For CREATE TABLE
... SELECT, if IF NOT EXISTS is
given and the target table exists, nothing is inserted into the
destination table, and the statement is not logged.
To ensure that the binary log can be used to re-create the
original tables, MySQL does not permit concurrent inserts during
CREATE TABLE ...
SELECT.
You cannot use FOR UPDATE as part of the
SELECT in a statement such as
CREATE TABLE new_table SELECT ... FROM old_table ....
If you attempt to do so, the statement fails.
MySQL supports foreign keys, which permit cross-referencing related data across tables, and foreign key constraints, which help keep the related data consistent.
A foreign key relationship involves a parent table that holds the initial column values, and a child table with column values that reference the parent column values. A foreign key constraint is defined on the child table.
The essential syntax for defining a foreign key constraint in
a CREATE TABLE or
ALTER TABLE statement includes
the following:
[CONSTRAINT [symbol]] FOREIGN KEY
    [index_name] (col_name, ...)
    REFERENCES tbl_name (col_name,...)
    [ON DELETE reference_option]
    [ON UPDATE reference_option]

reference_option:
    RESTRICT | CASCADE | SET NULL | NO ACTION | SET DEFAULT
Foreign key constraint usage is described under the following topics in this section:
Foreign key constraint naming is governed by the following rules:
The CONSTRAINT
symbol value is used, if
defined.
If the CONSTRAINT
symbol clause is not defined,
or a symbol is not included following the
CONSTRAINT keyword, a constraint name
is generated automatically.
Prior to MySQL 8.0.16, if the
CONSTRAINT
symbol clause was not defined,
or a symbol was not included following the
CONSTRAINT keyword, both
InnoDB and
NDB storage engines would use
the FOREIGN_KEY index_name if
defined. In MySQL 8.0.16 and higher, the
FOREIGN_KEY index_name is ignored.
The CONSTRAINT symbol value, if
defined, must be unique in the database. A duplicate
symbol results in an error
similar to: ERROR 1005 (HY000): Can't create
table 'test.fk1' (errno: 121).
NDB Cluster stores foreign key names using the same lettercase
with which they are created. Prior to version 8.0.20, when
processing SELECT and other
SQL statements, NDB compared
the names of foreign keys in such statements with the
names as stored in a case-sensitive fashion when
lower_case_table_names
was equal to 0. In NDB 8.0.20 and later, this value no
longer has any effect on how such comparisons are made,
and they are always done without regard to lettercase.
(Bug #30512043)
Table and column identifiers in a FOREIGN KEY ...
REFERENCES clause can be quoted within backticks
(`). Alternatively, double quotation marks
(") can be used if the
ANSI_QUOTES SQL mode is
enabled. The
lower_case_table_names system
variable setting is also taken into account.
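For illustration (placeholder table, column, and constraint names), an explicitly named constraint might be defined as follows; the name fk_child_parent must be unique within the database:
CREATE TABLE parent_t (id INT PRIMARY KEY) ENGINE=InnoDB;
CREATE TABLE child_t (
  id        INT PRIMARY KEY,
  parent_id INT,
  CONSTRAINT fk_child_parent FOREIGN KEY (parent_id)
    REFERENCES parent_t (id)
) ENGINE=InnoDB;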
Foreign key constraints are subject to the following conditions and restrictions:
Parent and child tables must use the same storage engine, and they cannot be defined as temporary tables.
Creating a foreign key constraint requires the
REFERENCES privilege on the
parent table.
Corresponding columns in the foreign key and the referenced key must have similar data types. The size and sign of integer types must be the same. The length of string types need not be the same. For nonbinary (character) string columns, the character set and collation must be the same.
MySQL supports foreign key references between one column and another within a table. (A column cannot have a foreign key reference to itself.) In these cases, a “child table record” refers to a dependent record within the same table.
MySQL requires indexes on foreign keys and referenced keys
so that foreign key checks can be fast and not require a
table scan. In the referencing table, there must be an
index where the foreign key columns are listed as the
first columns in the same order. Such
an index is created on the referencing table automatically
if it does not exist. This index might be silently dropped
later if you create another index that can be used to
enforce the foreign key constraint.
index_name, if given, is used
as described previously.
InnoDB permits a foreign key to
reference any index column or group of columns. However,
in the referenced table, there must be an index where the
referenced columns are the first
columns in the same order. Hidden columns that
InnoDB adds to an index are also
considered (see Section 15.6.2.1, “Clustered and Secondary Indexes”).
NDB requires an explicit unique key (or
primary key) on any column referenced as a foreign key.
InnoDB does not, which is an extension
of standard SQL.
Index prefixes on foreign key columns are not supported.
Consequently, BLOB and
TEXT columns cannot be
included in a foreign key because indexes on those columns
must always include a prefix length.
InnoDB does not currently
support foreign keys for tables with user-defined
partitioning. This includes both parent and child tables.
This restriction does not apply for
NDB tables that are
partitioned by KEY or LINEAR
KEY (the only user partitioning types supported
by the NDB storage engine); these may
have foreign key references or be the targets of such
references.
A table in a foreign key relationship cannot be altered to use another storage engine. To change the storage engine, you must drop any foreign key constraints first.
A foreign key constraint cannot reference a virtual generated column.
For information about how the MySQL implementation of foreign key constraints differs from the SQL standard, see Section 1.8.2.3, “FOREIGN KEY Constraint Differences”.
When an UPDATE or
DELETE operation affects a key
value in the parent table that has matching rows in the child
table, the result depends on the referential
action specified by ON UPDATE
and ON DELETE subclauses of the
FOREIGN KEY clause. Referential actions
include:
CASCADE: Delete or update the row from
the parent table and automatically delete or update the
matching rows in the child table. Both ON DELETE
CASCADE and ON UPDATE CASCADE
are supported. Between two tables, do not define several
ON UPDATE CASCADE clauses that act on
the same column in the parent table or in the child table.
If a FOREIGN KEY clause is defined on
both tables in a foreign key relationship, making both
tables a parent and child, an ON UPDATE
CASCADE or ON DELETE CASCADE
subclause defined for one FOREIGN KEY
clause must be defined for the other in order for
cascading operations to succeed. If an ON UPDATE
CASCADE or ON DELETE CASCADE
subclause is only defined for one FOREIGN
KEY clause, cascading operations fail with an
error.
Cascaded foreign key actions do not activate triggers.
SET NULL: Delete or update the row from
the parent table and set the foreign key column or columns
in the child table to NULL. Both
ON DELETE SET NULL and ON
UPDATE SET NULL clauses are supported.
If you specify a SET NULL action,
make sure that you have not declared the columns
in the child table as NOT
NULL.
RESTRICT: Rejects the delete or update
operation for the parent table. Specifying
RESTRICT (or NO
ACTION) is the same as omitting the ON
DELETE or ON UPDATE clause.
NO ACTION: A keyword from standard SQL.
In MySQL, equivalent to RESTRICT. The
MySQL Server rejects the delete or update operation for
the parent table if there is a related foreign key value
in the referenced table. Some database systems have
deferred checks, and NO ACTION is a
deferred check. In MySQL, foreign key constraints are
checked immediately, so NO ACTION is
the same as RESTRICT.
SET DEFAULT: This action is recognized
by the MySQL parser, but both
InnoDB and
NDB reject table definitions
containing ON DELETE SET DEFAULT or
ON UPDATE SET DEFAULT clauses.
For storage engines that support foreign keys, MySQL rejects
any INSERT or
UPDATE operation that attempts
to create a foreign key value in a child table if there is no
matching candidate key value in the parent table.
For an ON DELETE or ON
UPDATE that is not specified, the default action is
always NO ACTION.
As the default, an ON DELETE NO ACTION or
ON UPDATE NO ACTION clause that is
specified explicitly does not appear in
SHOW CREATE TABLE output or in
tables dumped with mysqldump.
RESTRICT, which is an equivalent
non-default keyword, appears in SHOW
CREATE TABLE output and in tables dumped with
mysqldump.
For NDB tables, ON
UPDATE CASCADE is not supported where the reference
is to the parent table's primary key.
As of NDB 8.0.16: For NDB tables,
ON DELETE CASCADE is not supported where
the child table contains one or more columns of any of the
TEXT or
BLOB types. (Bug #89511, Bug
#27484882)
InnoDB performs cascading operations using
a depth-first search algorithm on the records of the index
that corresponds to the foreign key constraint.
A foreign key constraint on a stored generated column cannot
use CASCADE, SET NULL,
or SET DEFAULT as ON
UPDATE referential actions, nor can it use
SET NULL or SET DEFAULT
as ON DELETE referential actions.
A foreign key constraint on the base column of a stored
generated column cannot use CASCADE,
SET NULL, or SET DEFAULT
as ON UPDATE or ON
DELETE referential actions.
This simple example relates parent and
child tables through a single-column
foreign key:
CREATE TABLE parent (
id INT NOT NULL,
PRIMARY KEY (id)
) ENGINE=INNODB;
CREATE TABLE child (
id INT,
parent_id INT,
INDEX par_ind (parent_id),
FOREIGN KEY (parent_id)
REFERENCES parent(id)
ON DELETE CASCADE
) ENGINE=INNODB;
This is a more complex example in which a
product_order table has foreign keys for
two other tables. One foreign key references a two-column
index in the product table. The other
references a single-column index in the
customer table:
CREATE TABLE product (
category INT NOT NULL, id INT NOT NULL,
price DECIMAL,
PRIMARY KEY(category, id)
) ENGINE=INNODB;
CREATE TABLE customer (
id INT NOT NULL,
PRIMARY KEY (id)
) ENGINE=INNODB;
CREATE TABLE product_order (
no INT NOT NULL AUTO_INCREMENT,
product_category INT NOT NULL,
product_id INT NOT NULL,
customer_id INT NOT NULL,
PRIMARY KEY(no),
INDEX (product_category, product_id),
INDEX (customer_id),
FOREIGN KEY (product_category, product_id)
REFERENCES product(category, id)
ON UPDATE CASCADE ON DELETE RESTRICT,
FOREIGN KEY (customer_id)
REFERENCES customer(id)
) ENGINE=INNODB;
You can add a foreign key constraint to an existing table
using the following ALTER TABLE
syntax:
ALTER TABLE tbl_name
    ADD [CONSTRAINT [symbol]] FOREIGN KEY
    [index_name] (col_name, ...)
    REFERENCES tbl_name (col_name, ...)
    [ON DELETE reference_option]
    [ON UPDATE reference_option]
The foreign key can be self referential (referring to the same
table). When you add a foreign key constraint to a table using
ALTER TABLE, remember
to first create an index on the column(s) referenced by the
foreign key.
You can drop a foreign key constraint using the following
ALTER TABLE syntax:
ALTER TABLE tbl_name DROP FOREIGN KEY fk_symbol;
If the FOREIGN KEY clause defined a
CONSTRAINT name when you created the
constraint, you can refer to that name to drop the foreign key
constraint. Otherwise, a constraint name was generated
internally, and you must use that value. To determine the
foreign key constraint name, use SHOW
CREATE TABLE:
mysql> SHOW CREATE TABLE child\G
*************************** 1. row ***************************
       Table: child
Create Table: CREATE TABLE `child` (
  `id` int DEFAULT NULL,
  `parent_id` int DEFAULT NULL,
  KEY `par_ind` (`parent_id`),
  CONSTRAINT `child_ibfk_1` FOREIGN KEY (`parent_id`)
  REFERENCES `parent` (`id`) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci

mysql> ALTER TABLE child DROP FOREIGN KEY `child_ibfk_1`;
Adding and dropping a foreign key in the same
ALTER TABLE statement is
supported for
ALTER TABLE ...
ALGORITHM=INPLACE. It is not supported for
ALTER TABLE ...
ALGORITHM=COPY.
Foreign key checking is controlled by the
foreign_key_checks variable,
which is enabled by default. Typically, you leave this
variable enabled during normal operation to enforce
referential integrity. The
foreign_key_checks variable
has the same effect on NDB tables
as it does for InnoDB tables.
The foreign_key_checks
variable is dynamic and supports both global and session
scopes. For information about using system variables, see
Section 5.1.9, “Using System Variables”.
Disabling foreign key checking is useful when:
Dropping a table that is referenced by a foreign key
constraint. A referenced table can only be dropped after
foreign_key_checks is
disabled. When you drop a table, constraints defined on
the table are also dropped.
Reloading tables in different order than required by their
foreign key relationships. For example,
mysqldump produces correct definitions
of tables in the dump file, including foreign key
constraints for child tables. To make it easier to reload
dump files for tables with foreign key relationships,
mysqldump automatically includes a
statement in the dump output that disables
foreign_key_checks. This
enables you to import the tables in any order in case the
dump file contains tables that are not correctly ordered
for foreign keys. Disabling
foreign_key_checks also
speeds up the import operation by avoiding foreign key
checks.
Executing LOAD DATA
operations, to avoid foreign key checking.
Performing an ALTER TABLE
operation on a table that has a foreign key relationship.
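As a minimal sketch of the first two use cases above (reusing the
parent and child tables from the earlier example), checking can be
disabled for the current session, a referenced table dropped and
recreated, and checking then re-enabled:

SET foreign_key_checks = 0;
# Permitted while checks are disabled, even though child references parent:
DROP TABLE parent;
# The recreated definition must still conform to the constraints that
# reference it (correct column names, types, and indexed referenced keys):
CREATE TABLE parent (
    id INT NOT NULL,
    PRIMARY KEY (id)
) ENGINE=INNODB;
SET foreign_key_checks = 1;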
When foreign_key_checks is
disabled, foreign key constraints are ignored, with the
following exceptions:
Recreating a table that was previously dropped returns an error if the table definition does not conform to the foreign key constraints that reference the table. The table must have the correct column names and types. It must also have indexes on the referenced keys. If these requirements are not satisfied, MySQL returns Error 1005 that refers to errno: 150 in the error message, which means that a foreign key constraint was not correctly formed.
Altering a table returns an error (errno: 150) if a foreign key definition is incorrectly formed for the altered table.
Dropping an index required by a foreign key constraint. The foreign key constraint must be removed before dropping the index.
Creating a foreign key constraint where a column references a nonmatching column type.
Disabling foreign_key_checks
has these additional implications:
It is permitted to drop a database that contains tables with foreign keys that are referenced by tables outside the database.
It is permitted to drop a table with foreign keys referenced by other tables.
Enabling
foreign_key_checks does
not trigger a scan of table data, which means that rows
added to a table while
foreign_key_checks is
disabled are not checked for consistency when
foreign_key_checks is
re-enabled.
MySQL extends metadata locks, as necessary, to tables that are related by a foreign key constraint. Extending metadata locks prevents conflicting DML and DDL operations from executing concurrently on related tables. This feature also enables updates to foreign key metadata when a parent table is modified. In earlier MySQL releases, foreign key metadata, which is owned by the child table, could not be updated safely.
If a table is locked explicitly with LOCK
TABLES, any tables related by a foreign key
constraint are opened and locked implicitly. For foreign key
checks, a shared read-only lock
(LOCK TABLES
READ) is taken on related tables. For cascading
updates, a shared-nothing write lock
(LOCK TABLES
WRITE) is taken on related tables that are involved
in the operation.
To view a foreign key definition, use
SHOW CREATE TABLE:
mysql> SHOW CREATE TABLE child\G
*************************** 1. row ***************************
Table: child
Create Table: CREATE TABLE `child` (
`id` int DEFAULT NULL,
`parent_id` int DEFAULT NULL,
KEY `par_ind` (`parent_id`),
CONSTRAINT `child_ibfk_1` FOREIGN KEY (`parent_id`)
REFERENCES `parent` (`id`) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
You can obtain information about foreign keys from the
INFORMATION_SCHEMA.KEY_COLUMN_USAGE
table. An example of a query against this table is shown here:
mysql> SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, CONSTRAINT_NAME
       FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE
       WHERE REFERENCED_TABLE_SCHEMA IS NOT NULL;
+--------------+------------+-------------+-----------------+
| TABLE_SCHEMA | TABLE_NAME | COLUMN_NAME | CONSTRAINT_NAME |
+--------------+------------+-------------+-----------------+
| test         | child      | parent_id   | child_ibfk_1    |
+--------------+------------+-------------+-----------------+
You can obtain information specific to
InnoDB foreign keys from the
INNODB_FOREIGN and
INNODB_FOREIGN_COLS tables.
Example queries are shown here:
mysql> SELECT * FROM INFORMATION_SCHEMA.INNODB_FOREIGN \G
*************************** 1. row ***************************
      ID: test/child_ibfk_1
FOR_NAME: test/child
REF_NAME: test/parent
  N_COLS: 1
    TYPE: 1

mysql> SELECT * FROM INFORMATION_SCHEMA.INNODB_FOREIGN_COLS \G
*************************** 1. row ***************************
          ID: test/child_ibfk_1
FOR_COL_NAME: parent_id
REF_COL_NAME: id
         POS: 0
In the event of a foreign key error involving
InnoDB tables (usually Error 150 in the
MySQL Server), information about the latest foreign key error
can be obtained by checking
SHOW ENGINE
INNODB STATUS output.
mysql> SHOW ENGINE INNODB STATUS\G
...
------------------------
LATEST FOREIGN KEY ERROR
------------------------
2018-04-12 14:57:24 0x7f97a9c91700 Transaction:
TRANSACTION 7717, ACTIVE 0 sec inserting
mysql tables in use 1, locked 1
4 lock struct(s), heap size 1136, 3 row lock(s), undo log entries 3
MySQL thread id 8, OS thread handle 140289365317376, query id 14 localhost root update
INSERT INTO child VALUES (NULL, 1), (NULL, 2), (NULL, 3), (NULL, 4), (NULL, 5), (NULL, 6)
Foreign key constraint fails for table `test`.`child`:
,
  CONSTRAINT `child_ibfk_1` FOREIGN KEY (`parent_id`) REFERENCES `parent` (`id`)
  ON DELETE CASCADE ON UPDATE CASCADE
Trying to add in child table, in index par_ind tuple:
DATA TUPLE: 2 fields;
 0: len 4; hex 80000003; asc     ;;
 1: len 4; hex 80000003; asc     ;;
But in parent table `test`.`parent`, in index PRIMARY,
the closest match we can find is record:
PHYSICAL RECORD: n_fields 3; compact format; info bits 0
 0: len 4; hex 80000004; asc     ;;
 1: len 6; hex 000000001e19; asc       ;;
 2: len 7; hex 81000001110137; asc       7;;
...
If a user has table-level privileges for all parent tables,
ER_NO_REFERENCED_ROW_2 and
ER_ROW_IS_REFERENCED_2 error
messages for foreign key operations expose information about
parent tables. If a user does not have table-level
privileges for all parent tables, more generic error
messages are displayed instead
(ER_NO_REFERENCED_ROW and
ER_ROW_IS_REFERENCED).
An exception is that, for stored programs defined to execute
with DEFINER privileges, the user against
which privileges are assessed is the user in the program
DEFINER clause, not the invoking user. If
that user has table-level parent table privileges, parent
table information is still displayed. In this case, it is
the responsibility of the stored program creator to hide the
information by including appropriate condition handlers.
Prior to MySQL 8.0.16, CREATE
TABLE permits only the following limited version of
table CHECK constraint syntax, which is
parsed and ignored:
CHECK (expr)
As of MySQL 8.0.16, CREATE TABLE
permits the core features of table and column
CHECK constraints, for all storage engines.
CREATE TABLE permits the
following CHECK constraint syntax, for both
table constraints and column constraints:
[CONSTRAINT [symbol]] CHECK (expr) [[NOT] ENFORCED]
The optional symbol specifies a name
for the constraint. If omitted, MySQL generates a name from the
table name, a literal _chk_, and an ordinal
number (1, 2, 3, ...). Constraint names have a maximum length of
64 characters. They are case sensitive, but not accent
sensitive.
expr specifies the constraint
condition as a boolean expression that must evaluate to
TRUE or UNKNOWN (for
NULL values) for each row of the table. If
the condition evaluates to FALSE, it fails
and a constraint violation occurs. The effect of a violation
depends on the statement being executed, as described later in
this section.
The optional enforcement clause indicates whether the constraint is enforced:
If omitted or specified as ENFORCED, the
constraint is created and enforced.
If specified as NOT ENFORCED, the
constraint is created but not enforced.
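For illustration, here is a minimal sketch of a constraint that is
created but not enforced (the table and constraint names are
illustrative only):

CREATE TABLE t_not_enforced (
    c1 INT,
    CONSTRAINT c1_positive CHECK (c1 > 0) NOT ENFORCED
);
# The row is accepted because the constraint is not enforced:
INSERT INTO t_not_enforced VALUES (-5);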
A CHECK constraint is specified as either a
table constraint or column constraint:
A table constraint does not appear within a column definition and can refer to any table column or columns. Forward references are permitted to columns appearing later in the table definition.
A column constraint appears within a column definition and can refer only to that column.
Consider this table definition:
CREATE TABLE t1 (
  CHECK (c1 <> c2),
  c1 INT CHECK (c1 > 10),
  c2 INT CONSTRAINT c2_positive CHECK (c2 > 0),
  c3 INT CHECK (c3 < 100),
  CONSTRAINT c1_nonzero CHECK (c1 <> 0),
  CHECK (c1 > c3)
);
The definition includes table constraints and column constraints, in named and unnamed formats:
The first constraint is a table constraint: It occurs outside any column definition, so it can (and does) refer to multiple table columns. This constraint contains forward references to columns not defined yet. No constraint name is specified, so MySQL generates a name.
The next three constraints are column constraints: Each occurs within a column definition, and thus can refer only to the column being defined. One of the constraints is named explicitly. MySQL generates a name for each of the other two.
The last two constraints are table constraints. One of them is named explicitly. MySQL generates a name for the other one.
As mentioned, MySQL generates a name for any
CHECK constraint specified without one. To
see the names generated for the preceding table definition, use
SHOW CREATE TABLE:
mysql> SHOW CREATE TABLE t1\G
*************************** 1. row ***************************
Table: t1
Create Table: CREATE TABLE `t1` (
`c1` int(11) DEFAULT NULL,
`c2` int(11) DEFAULT NULL,
`c3` int(11) DEFAULT NULL,
CONSTRAINT `c1_nonzero` CHECK ((`c1` <> 0)),
CONSTRAINT `c2_positive` CHECK ((`c2` > 0)),
CONSTRAINT `t1_chk_1` CHECK ((`c1` <> `c2`)),
CONSTRAINT `t1_chk_2` CHECK ((`c1` > 10)),
CONSTRAINT `t1_chk_3` CHECK ((`c3` < 100)),
CONSTRAINT `t1_chk_4` CHECK ((`c1` > `c3`))
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
The SQL standard specifies that all types of constraints
(primary key, unique index, foreign key, check) belong to the
same namespace. In MySQL, each constraint type has its own
namespace per schema (database). Consequently,
CHECK constraint names must be unique per
schema; no two tables in the same schema can share a
CHECK constraint name. (Exception: A
TEMPORARY table hides a
non-TEMPORARY table of the same name, so it
can have the same CHECK constraint names as
well.)
Beginning generated constraint names with the table name helps ensure schema uniqueness because table names also must be unique within the schema.
CHECK condition expressions must adhere to
the following rules. An error occurs if an expression contains
disallowed constructs.
Nongenerated and generated columns are permitted, except
columns with the AUTO_INCREMENT attribute
and columns in other tables.
Literals, deterministic built-in functions, and operators
are permitted. A function is deterministic if, given the
same data in tables, multiple invocations produce the same
result, independently of the connected user. Examples of
functions that are nondeterministic and fail this
definition: CONNECTION_ID(),
CURRENT_USER(),
NOW().
Stored functions and user-defined functions are not permitted.
Stored procedure and function parameters are not permitted.
Variables (system variables, user-defined variables, and stored program local variables) are not permitted.
Subqueries are not permitted.
Foreign key referential actions (ON UPDATE,
ON DELETE) are prohibited on columns used in
CHECK constraints. Likewise,
CHECK constraints are prohibited on columns
used in foreign key referential actions.
CHECK constraints are evaluated for
INSERT,
UPDATE,
REPLACE,
LOAD DATA, and
LOAD XML statements and an error
occurs if a constraint evaluates to FALSE. If
an error occurs, handling of changes already applied differs for
transactional and nontransactional storage engines, and also
depends on whether strict SQL mode is in effect, as described in
Strict SQL Mode.
CHECK constraints are evaluated for
INSERT IGNORE,
UPDATE IGNORE,
LOAD DATA ...
IGNORE, and
LOAD XML ...
IGNORE statements and a warning occurs if a constraint
evaluates to FALSE. The insert or update for
any offending row is skipped.
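For example, using the t1 table defined earlier in this section, the
following INSERT violates only the first (unnamed) table constraint,
c1 <> c2, and is rejected, while the IGNORE form converts the error
to a warning and skips the row. The error output shown is indicative
of what the server reports:

INSERT INTO t1 (c1, c2, c3) VALUES (20, 20, 5);
ERROR 3819 (HY000): Check constraint 't1_chk_1' is violated.

INSERT IGNORE INTO t1 (c1, c2, c3) VALUES (20, 20, 5);
Query OK, 0 rows affected, 1 warning (0.00 sec)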
If the constraint expression evaluates to a data type that differs from the declared column type, implicit coercion to the declared type occurs according to the usual MySQL type-conversion rules. See Section 12.2, “Type Conversion in Expression Evaluation”. If type conversion fails or results in a loss of precision, an error occurs.
Constraint expression evaluation uses the SQL mode in effect at evaluation time. If any component of the expression depends on the SQL mode, different results may occur for different uses of the table unless the SQL mode is the same during all uses.
In some cases, MySQL silently changes column specifications from
those given in a CREATE TABLE or
ALTER TABLE statement. These
might be changes to a data type, to attributes associated with a
data type, or to an index specification.
All changes are subject to the internal row-size limit of 65,535 bytes, which may cause some attempts at data type changes to fail. See Section 8.4.7, “Limits on Table Column Count and Row Size”.
Columns that are part of a PRIMARY KEY
are made NOT NULL even if not declared
that way.
Trailing spaces are automatically deleted from
ENUM and
SET member values when the
table is created.
MySQL maps certain data types used by other SQL database vendors to MySQL types. See Section 11.9, “Using Data Types from Other Database Engines”.
If you include a USING clause to specify
an index type that is not permitted for a given storage
engine, but there is another index type available that the
engine can use without affecting query results, the engine
uses the available type.
If strict SQL mode is not enabled, a
VARCHAR column with a length
specification greater than 65535 is converted to
TEXT, and a
VARBINARY column with a
length specification greater than 65535 is converted to
BLOB. Otherwise, an error
occurs in either of these cases.
Specifying the CHARACTER SET binary
attribute for a character data type causes the column to be
created as the corresponding binary data type:
CHAR becomes
BINARY,
VARCHAR becomes
VARBINARY, and
TEXT becomes
BLOB. For the
ENUM and
SET data types, this does not
occur; they are created as declared. Suppose that you
specify a table using this definition:
CREATE TABLE t
(
c1 VARCHAR(10) CHARACTER SET binary,
c2 TEXT CHARACTER SET binary,
c3 ENUM('a','b','c') CHARACTER SET binary
);
The resulting table has this definition:
CREATE TABLE t
(
c1 VARBINARY(10),
c2 BLOB,
c3 ENUM('a','b','c') CHARACTER SET binary
);
To see whether MySQL used a data type other than the one you
specified, issue a DESCRIBE or
SHOW CREATE TABLE statement after
creating or altering the table.
Certain other data type changes can occur if you compress a table using myisampack. See Section 16.2.3.3, “Compressed Table Characteristics”.
CREATE TABLE supports the
specification of generated columns. Values of a generated column
are computed from an expression included in the column
definition.
Generated columns are also supported by the
NDB storage engine.
The following simple example shows a table that stores the
lengths of the sides of right triangles in the
sidea and sideb columns,
and computes the length of the hypotenuse in
sidec (the square root of the sums of the
squares of the other sides):
CREATE TABLE triangle (
  sidea DOUBLE,
  sideb DOUBLE,
  sidec DOUBLE AS (SQRT(sidea * sidea + sideb * sideb))
);
INSERT INTO triangle (sidea, sideb) VALUES(1,1),(3,4),(6,8);
Selecting from the table yields this result:
mysql> SELECT * FROM triangle;
+-------+-------+--------------------+
| sidea | sideb | sidec |
+-------+-------+--------------------+
| 1 | 1 | 1.4142135623730951 |
| 3 | 4 | 5 |
| 6 | 8 | 10 |
+-------+-------+--------------------+
Any application that uses the triangle table
has access to the hypotenuse values without having to specify
the expression that calculates them.
Generated column definitions have this syntax:
col_name data_type [GENERATED ALWAYS] AS (expr)
  [VIRTUAL | STORED] [NOT NULL | NULL]
  [UNIQUE [KEY]] [[PRIMARY] KEY]
  [COMMENT 'string']
AS (expr) indicates that the column is generated
and defines the expression used to compute column values.
AS may be preceded by GENERATED ALWAYS to make
the generated nature of the column more explicit. Constructs
that are permitted or prohibited in the expression are discussed
later.
The VIRTUAL or STORED
keyword indicates how column values are stored, which has
implications for column use:
VIRTUAL: Column values are not stored,
but are evaluated when rows are read, immediately after any
BEFORE triggers. A virtual column takes
no storage.
InnoDB supports secondary indexes on
virtual columns. See
Section 13.1.20.9, “Secondary Indexes and Generated Columns”.
STORED: Column values are evaluated and
stored when rows are inserted or updated. A stored column
does require storage space and can be indexed.
The default is VIRTUAL if neither keyword is
specified.
It is permitted to mix VIRTUAL and
STORED columns within a table.
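As a brief sketch of mixing the two kinds (the table and column names
here are illustrative only), the tax column below is computed when
rows are read, whereas total is computed and stored when rows are
written:

CREATE TABLE prices (
    amount   DECIMAL(10,2),
    tax_rate DECIMAL(4,2),
    tax      DECIMAL(10,2) AS (amount * tax_rate) VIRTUAL,
    total    DECIMAL(10,2) AS (amount * (1 + tax_rate)) STORED
);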
Other attributes may be given to indicate whether the column is
indexed or can be NULL, or provide a comment.
Generated column expressions must adhere to the following rules. An error occurs if an expression contains disallowed constructs.
Literals, deterministic built-in functions, and operators
are permitted. A function is deterministic if, given the
same data in tables, multiple invocations produce the same
result, independently of the connected user. Examples of
functions that are nondeterministic and fail this
definition: CONNECTION_ID(),
CURRENT_USER(),
NOW().
Stored functions and user-defined functions are not permitted.
Stored procedure and function parameters are not permitted.
Variables (system variables, user-defined variables, and stored program local variables) are not permitted.
Subqueries are not permitted.
A generated column definition can refer to other generated columns, but only those occurring earlier in the table definition. A generated column definition can refer to any base (nongenerated) column in the table whether its definition occurs earlier or later.
The AUTO_INCREMENT attribute cannot be
used in a generated column definition.
An AUTO_INCREMENT column cannot be used
as a base column in a generated column definition.
If expression evaluation causes truncation or provides
incorrect input to a function, the
CREATE TABLE statement
terminates with an error and the DDL operation is rejected.
If the expression evaluates to a data type that differs from the declared column type, implicit coercion to the declared type occurs according to the usual MySQL type-conversion rules. See Section 12.2, “Type Conversion in Expression Evaluation”.
Expression evaluation uses the SQL mode in effect at evaluation time. If any component of the expression depends on the SQL mode, different results may occur for different uses of the table unless the SQL mode is the same during all uses.
For CREATE
TABLE ... LIKE, the destination table preserves
generated column information from the original table.
For CREATE
TABLE ... SELECT, the destination table does not
preserve information about whether columns in the selected-from
table are generated columns. The
SELECT part of the statement
cannot assign values to generated columns in the destination
table.
Partitioning by generated columns is permitted. See Table Partitioning.
A foreign key constraint on a stored generated column cannot use
CASCADE, SET NULL, or
SET DEFAULT as ON UPDATE
referential actions, nor can it use SET NULL
or SET DEFAULT as ON
DELETE referential actions.
A foreign key constraint on the base column of a stored
generated column cannot use CASCADE,
SET NULL, or SET DEFAULT
as ON UPDATE or ON DELETE
referential actions.
A foreign key constraint cannot reference a virtual generated column.
Triggers cannot use NEW.col_name or
OLD.col_name to refer to generated columns.
For INSERT,
REPLACE, and
UPDATE, if a generated column is
inserted into, replaced, or updated explicitly, the only
permitted value is DEFAULT.
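Using the triangle table shown earlier, for example, the first
statement below is permitted and the second is rejected, even though
13 happens to equal the computed value; the error shown is indicative:

INSERT INTO triangle (sidea, sideb, sidec) VALUES (5, 12, DEFAULT);
INSERT INTO triangle (sidea, sideb, sidec) VALUES (5, 12, 13);
ERROR 3105 (HY000): The value specified for generated column 'sidec' in table 'triangle' is not allowed.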
A generated column in a view is considered updatable because it
is possible to assign to it. However, if such a column is
updated explicitly, the only permitted value is
DEFAULT.
Generated columns have several use cases, such as these:
Virtual generated columns can be used as a way to simplify and unify queries. A complicated condition can be defined as a generated column and referred to from multiple queries on the table to ensure that all of them use exactly the same condition.
Stored generated columns can be used as a materialized cache for complicated conditions that are costly to calculate on the fly.
Generated columns can simulate functional indexes: Use a
generated column to define a functional expression and index
it. This can be useful for working with columns of types
that cannot be indexed directly, such as
JSON columns; see
Indexing a Generated Column to Provide a JSON Column Index, for a detailed
example.
For stored generated columns, the disadvantage of this approach is that values are stored twice; once as the value of the generated column and once in the index.
If a generated column is indexed, the optimizer recognizes query expressions that match the column definition and uses indexes from the column as appropriate during query execution, even if a query does not refer to the column directly by name. For details, see Section 8.3.11, “Optimizer Use of Generated Column Indexes”.
Example:
Suppose that a table t1 contains
first_name and last_name
columns and that applications frequently construct the full name
using an expression like this:
SELECT CONCAT(first_name,' ',last_name) AS full_name FROM t1;
One way to avoid writing out the expression is to create a view
v1 on t1, which simplifies
applications by enabling them to select
full_name directly without using an
expression:
CREATE VIEW v1 AS SELECT *, CONCAT(first_name,' ',last_name) AS full_name FROM t1;
SELECT full_name FROM v1;
A generated column also enables applications to select
full_name directly without the need to define
a view:
CREATE TABLE t1 (
  first_name VARCHAR(10),
  last_name VARCHAR(10),
  full_name VARCHAR(255) AS (CONCAT(first_name,' ',last_name))
);
SELECT full_name FROM t1;
InnoDB supports secondary indexes on virtual
generated columns. Other index types are not supported. A
secondary index defined on a virtual column is sometimes
referred to as a “virtual index”.
A secondary index may be created on one or more virtual columns
or on a combination of virtual columns and regular columns or
stored generated columns. Secondary indexes that include virtual
columns may be defined as UNIQUE.
When a secondary index is created on a virtual generated column, generated column values are materialized in the records of the index. If the index is a covering index (one that includes all the columns retrieved by a query), generated column values are retrieved from materialized values in the index structure instead of computed “on the fly”.
There are additional write costs to consider when using a
secondary index on a virtual column due to computation performed
when materializing virtual column values in secondary index
records during INSERT and
UPDATE operations. Even with
additional write costs, secondary indexes on virtual columns may
be preferable to generated stored columns,
which are materialized in the clustered index, resulting in
larger tables that require more disk space and memory. If a
secondary index is not defined on a virtual column, there are
additional costs for reads, as virtual column values must be
computed each time the column's row is examined.
Values of an indexed virtual column are MVCC-logged to avoid
unnecessary recomputation of generated column values during
rollback or during a purge operation. The data length of logged
values is limited by the index key limit of 767 bytes for
COMPACT and REDUNDANT row
formats, and 3072 bytes for DYNAMIC and
COMPRESSED row formats.
Adding or dropping a secondary index on a virtual column is an in-place operation.
As noted elsewhere, JSON
columns cannot be indexed directly. To create an index that
references such a column indirectly, you can define a
generated column that extracts the information that should be
indexed, then create an index on the generated column, as
shown in this example:
mysql> CREATE TABLE jemp (
    ->     c JSON,
    ->     g INT GENERATED ALWAYS AS (c->"$.id"),
    ->     INDEX i (g)
    -> );
Query OK, 0 rows affected (0.28 sec)

mysql> INSERT INTO jemp (c) VALUES
    ->   ('{"id": "1", "name": "Fred"}'), ('{"id": "2", "name": "Wilma"}'),
    ->   ('{"id": "3", "name": "Barney"}'), ('{"id": "4", "name": "Betty"}');
Query OK, 4 rows affected (0.04 sec)
Records: 4  Duplicates: 0  Warnings: 0

mysql> SELECT c->>"$.name" AS name
    ->     FROM jemp WHERE g > 2;
+--------+
| name   |
+--------+
| Barney |
| Betty  |
+--------+
2 rows in set (0.00 sec)

mysql> EXPLAIN SELECT c->>"$.name" AS name
    ->     FROM jemp WHERE g > 2\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: jemp
   partitions: NULL
         type: range
possible_keys: i
          key: i
      key_len: 5
          ref: NULL
         rows: 2
     filtered: 100.00
        Extra: Using where
1 row in set, 1 warning (0.00 sec)

mysql> SHOW WARNINGS\G
*************************** 1. row ***************************
  Level: Note
   Code: 1003
Message: /* select#1 */ select json_unquote(json_extract(`test`.`jemp`.`c`,'$.name')) AS `name` from `test`.`jemp` where (`test`.`jemp`.`g` > 2)
1 row in set (0.00 sec)
(We have wrapped the output from the last statement in this example to fit the viewing area.)
When you use EXPLAIN on a
SELECT or other SQL statement
containing one or more expressions that use the
-> or ->>
operator, these expressions are translated into their
equivalents using JSON_EXTRACT() and (if
needed) JSON_UNQUOTE() instead, as shown
here in the output from SHOW
WARNINGS immediately following this
EXPLAIN statement:
mysql> EXPLAIN SELECT c->>"$.name"
    ->     FROM jemp WHERE g > 2 ORDER BY c->"$.name"\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: jemp
   partitions: NULL
         type: range
possible_keys: i
          key: i
      key_len: 5
          ref: NULL
         rows: 2
     filtered: 100.00
        Extra: Using where; Using filesort
1 row in set, 1 warning (0.00 sec)

mysql> SHOW WARNINGS\G
*************************** 1. row ***************************
  Level: Note
   Code: 1003
Message: /* select#1 */ select json_unquote(json_extract(`test`.`jemp`.`c`,'$.name')) AS `c->>"$.name"` from `test`.`jemp` where (`test`.`jemp`.`g` > 2) order by json_extract(`test`.`jemp`.`c`,'$.name')
1 row in set (0.00 sec)
See the descriptions of the
->
and
->>
operators, as well as those of the
JSON_EXTRACT() and
JSON_UNQUOTE() functions, for
additional information and examples.
This technique also can be used to provide indexes that
indirectly reference columns of other types that cannot be
indexed directly, such as GEOMETRY columns.
In MySQL 8.0.21 and later, it is also possible to create an
index on a JSON column using
the JSON_VALUE() function with
an expression that can be used to optimize queries employing
the expression. See the description of that function for more
information and examples.
It is also possible to use indirect indexing of JSON columns in MySQL NDB Cluster, subject to the following conditions:
NDB handles a
JSON column value
internally as a BLOB. This
means that any NDB table having one or
more JSON columns must have a primary key, else it cannot
be recorded in the binary log.
The NDB storage engine does
not support indexing of virtual columns. Since the default
for generated columns is VIRTUAL, you
must specify explicitly the generated column to which to
apply the indirect index as STORED.
The CREATE TABLE statement
used to create the table jempn shown here
is a version of the jemp table shown
previously, with modifications making it compatible with
NDB:
CREATE TABLE jempn (
  a BIGINT(20) NOT NULL AUTO_INCREMENT PRIMARY KEY,
  c JSON DEFAULT NULL,
  g INT GENERATED ALWAYS AS (c->"$.name") STORED,
  INDEX i (g)
) ENGINE=NDB;
We can populate this table using the following
INSERT statement:
INSERT INTO jempn (a, c) VALUES
(NULL, '{"id": "1", "name": "Fred"}'),
(NULL, '{"id": "2", "name": "Wilma"}'),
(NULL, '{"id": "3", "name": "Barney"}'),
(NULL, '{"id": "4", "name": "Betty"}');
Now NDB can use index i,
as shown here:
mysql> EXPLAIN SELECT c->>"$.name" AS name
    ->     FROM jempn WHERE g > 2\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: jempn
   partitions: p0,p1
         type: range
possible_keys: i
          key: i
      key_len: 5
          ref: NULL
         rows: 3
     filtered: 100.00
        Extra: Using where with pushed condition (`test`.`jempn`.`g` > 2)
1 row in set, 1 warning (0.00 sec)

mysql> SHOW WARNINGS\G
*************************** 1. row ***************************
  Level: Note
   Code: 1003
Message: /* select#1 */ select json_unquote(json_extract(`test`.`jempn`.`c`,'$.name')) AS `name` from `test`.`jempn` where (`test`.`jempn`.`g` > 2)
1 row in set (0.00 sec)
You should keep in mind that a stored generated column uses
DataMemory, and that
an index on such a column uses
IndexMemory.
In MySQL NDB Cluster, the table comment in a CREATE
TABLE or ALTER TABLE
statement can also be used to specify an
NDB_TABLE option, which consists of one or
more name-value pairs, separated by commas if need be, following
the string NDB_TABLE=. The complete syntax for
names and values is shown here:
COMMENT="NDB_TABLE=ndb_table_option[,ndb_table_option[,...]]"ndb_table_option: { NOLOGGING={1 | 0} | READ_BACKUP={1 | 0} | PARTITION_BALANCE={FOR_RP_BY_NODE | FOR_RA_BY_NODE | FOR_RP_BY_LDM | FOR_RA_BY_LDM | FOR_RA_BY_LDM_X_2 | FOR_RA_BY_LDM_X_3 | FOR_RA_BY_LDM_X_4} | FULLY_REPLICATED={1 | 0} }
Spaces are not permitted within the quoted string. The string is case-insensitive.
The four NDB table options that can be set as
part of a comment in this way are described in more detail in
the next few paragraphs.
NOLOGGING: Using 1 corresponds to having
ndb_table_no_logging enabled,
but has no actual effect. Provided as a placeholder, mostly for
completeness of ALTER TABLE
statements.
READ_BACKUP: Setting this option to 1 has the
same effect as though
ndb_read_backup were enabled;
enables reading from any replica. Doing so greatly improves the
performance of reads from the table at a relatively small cost
to write performance. Beginning with NDB 8.0.19, 1 is the
default for READ_BACKUP, and the default for
ndb_read_backup is
ON (previously, read from any replica was
disabled by default).
You can set READ_BACKUP for an existing table
online, using an ALTER TABLE statement
similar to one of those shown here:
ALTER TABLE ... ALGORITHM=INPLACE, COMMENT="NDB_TABLE=READ_BACKUP=1";
ALTER TABLE ... ALGORITHM=INPLACE, COMMENT="NDB_TABLE=READ_BACKUP=0";
For more information about the ALGORITHM
option for ALTER TABLE, see
Section 22.5.11, “Online Operations with ALTER TABLE in NDB Cluster”.
PARTITION_BALANCE: Provides additional
control over assignment and placement of partitions. The
following schemes are supported:
FOR_RP_BY_NODE: One partition per node.
Only one LDM on each node stores a primary partition. Each partition is stored in the same LDM (same ID) on all nodes.
FOR_RA_BY_NODE: One partition per node
group.
Each node stores a single partition, which can be either a primary replica or a backup replica. Each partition is stored in the same LDM on all nodes.
FOR_RP_BY_LDM: One partition for each LDM
on each node; the default.
This is the setting used if READ_BACKUP
is set to 1.
FOR_RA_BY_LDM: One partition per LDM in
each node group.
These partitions can be primary or backup partitions.
FOR_RA_BY_LDM_X_2: Two partitions per LDM
in each node group.
These partitions can be primary or backup partitions.
FOR_RA_BY_LDM_X_3: Three partitions per
LDM in each node group.
These partitions can be primary or backup partitions.
FOR_RA_BY_LDM_X_4: Four partitions per
LDM in each node group.
These partitions can be primary or backup partitions.
PARTITION_BALANCE is the preferred interface
for setting the number of partitions per table. Using
MAX_ROWS to force the number of partitions is
deprecated but continues to be supported for backward
compatibility; it is subject to removal in a future release of
MySQL NDB Cluster. (Bug #81759, Bug #23544301)
FULLY_REPLICATED controls whether the table
is fully replicated, that is, whether each data node has a
complete copy of the table. To enable full replication of the
table, use FULLY_REPLICATED=1.
This setting can also be controlled using the
ndb_fully_replicated system variable. Setting
it to ON enables the option by default for
all new NDB tables; the default is
OFF. The
ndb_data_node_neighbour system
variable is also used for fully replicated tables, to ensure
that when a fully replicated table is accessed, we access the
data node which is local to this MySQL Server.
An example of a CREATE TABLE statement using
such a comment when creating an NDB table is
shown here:
mysql> CREATE TABLE t1 (
    ->     c1 INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    ->     c2 VARCHAR(100),
    ->     c3 VARCHAR(100) )
    -> ENGINE=NDB
    -> COMMENT="NDB_TABLE=READ_BACKUP=0,PARTITION_BALANCE=FOR_RP_BY_NODE";
The comment is displayed as part of the output of
SHOW CREATE TABLE. The text of
the comment is also available from querying the MySQL
Information Schema TABLES table, as
in this example:
mysql> SELECT TABLE_NAME, TABLE_SCHEMA, TABLE_COMMENT
    -> FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME="t1"\G
*************************** 1. row ***************************
   TABLE_NAME: t1
 TABLE_SCHEMA: test
TABLE_COMMENT: NDB_TABLE=READ_BACKUP=0,PARTITION_BALANCE=FOR_RP_BY_NODE
1 row in set (0.01 sec)
This comment syntax is also supported with
ALTER TABLE statements for
NDB tables, as shown here:
mysql> ALTER TABLE t1 COMMENT="NDB_TABLE=PARTITION_BALANCE=FOR_RA_BY_NODE";
Query OK, 0 rows affected (0.40 sec)
Records: 0 Duplicates: 0 Warnings: 0
Beginning with NDB 8.0.21, the TABLE_COMMENT
column displays the comment that is required to re-create the
table as it is following the ALTER TABLE
statement, like this:
mysql> SELECT TABLE_NAME, TABLE_SCHEMA, TABLE_COMMENT
    -> FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME="t1"\G
*************************** 1. row ***************************
   TABLE_NAME: t1
 TABLE_SCHEMA: test
TABLE_COMMENT: NDB_TABLE=READ_BACKUP=0,PARTITION_BALANCE=FOR_RP_BY_NODE
1 row in set (0.01 sec)
Keep in mind that a table comment used with ALTER
TABLE replaces any existing comment which the table
might have.
mysql> ALTER TABLE t1 COMMENT="NDB_TABLE=PARTITION_BALANCE=FOR_RA_BY_NODE";
Query OK, 0 rows affected (0.40 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> SELECT TABLE_NAME, TABLE_SCHEMA, TABLE_COMMENT
    -> FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME="t1";
+------------+--------------+--------------------------------------------+
| TABLE_NAME | TABLE_SCHEMA | TABLE_COMMENT                              |
+------------+--------------+--------------------------------------------+
| t1         | c            | NDB_TABLE=PARTITION_BALANCE=FOR_RA_BY_NODE |
| t1         | d            |                                            |
+------------+--------------+--------------------------------------------+
2 rows in set (0.01 sec)
Prior to NDB 8.0.21, the table comment used with ALTER
TABLE replaced any existing comment which the table
might have had. This meant that (for example) the
READ_BACKUP value was not carried over to the
new comment set by the ALTER TABLE statement,
and that any unspecified values reverted to their defaults.
(BUG#30428829) There was thus no longer any way using SQL to
retrieve the value previously set for the comment. To keep
comment values from reverting to their defaults, it was necessary
to preserve any such values from the existing comment string and
include them in the comment passed to ALTER
TABLE.
You can also see the value of the
PARTITION_BALANCE option in the output of
ndb_desc. ndb_desc also
shows whether the READ_BACKUP and
FULLY_REPLICATED options are set for the
table. See the description of this program for more information.
CREATE [UNDO] TABLESPACE tablespace_name

  InnoDB and NDB:
    [ADD DATAFILE 'file_name']

  InnoDB only:
    [FILE_BLOCK_SIZE = value]
    [ENCRYPTION [=] {'Y' | 'N'}]

  NDB only:
    USE LOGFILE GROUP logfile_group
    [EXTENT_SIZE [=] extent_size]
    [INITIAL_SIZE [=] initial_size]
    [AUTOEXTEND_SIZE [=] autoextend_size]
    [MAX_SIZE [=] max_size]
    [NODEGROUP [=] nodegroup_id]
    [WAIT]
    [COMMENT [=] 'string']

  InnoDB and NDB:
    [ENGINE [=] engine_name]
This statement is used to create a tablespace. The precise syntax
and semantics depend on the storage engine used. In standard MySQL
releases, this is always an InnoDB
tablespace. MySQL NDB Cluster also supports tablespaces using the
NDB storage engine.
CREATE TABLESPACE syntax is used to
create general tablespaces or undo tablespaces. The
UNDO keyword, introduced in MySQL 8.0.14, must
be specified to create an undo tablespace.
A general tablespace is a shared tablespace. It can hold multiple tables, and supports all table row formats. General tablespaces can be created in a location relative to or independent of the data directory.
After creating an InnoDB general tablespace,
use CREATE TABLE tbl_name ... TABLESPACE [=] tablespace_name
or ALTER TABLE tbl_name TABLESPACE [=] tablespace_name
to add tables to the tablespace. For more information, see
Section 15.6.3.3, “General Tablespaces”.
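A minimal sketch of this sequence follows; the tablespace and table
names are illustrative, and t2 is assumed to be an existing table:

CREATE TABLESPACE ts1 ADD DATAFILE 'ts1.ibd' ENGINE=INNODB;
CREATE TABLE t1 (c1 INT PRIMARY KEY) TABLESPACE ts1;
ALTER TABLE t2 TABLESPACE ts1;   # move an existing table into ts1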
Undo tablespaces contain undo logs. Undo tablespaces can be created in a chosen location by specifying a fully qualified data file path. For more information, see Section 15.6.3.4, “Undo Tablespaces”.
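For example, a minimal sketch (the tablespace name is illustrative;
undo tablespace data files use the .ibu extension, and when no path
is given the file is created in the default undo directory):

CREATE UNDO TABLESPACE undo_003 ADD DATAFILE 'undo_003.ibu';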
This statement is used to create a tablespace, which can contain
one or more data files, providing storage space for NDB Cluster
Disk Data tables (see Section 22.5.10, “NDB Cluster Disk Data Tables”).
One data file is created and added to the tablespace using this
statement. Additional data files may be added to the tablespace by
using the ALTER TABLESPACE
statement (see Section 13.1.10, “ALTER TABLESPACE Statement”).
All NDB Cluster Disk Data objects share the same namespace. This means that each Disk Data object must be uniquely named (and not merely each Disk Data object of a given type). For example, you cannot have a tablespace and a log file group with the same name, or a tablespace and a data file with the same name.
A log file group of one or more UNDO log files
must be assigned to the tablespace to be created with the
USE LOGFILE GROUP clause.
logfile_group must be an existing log
file group created with CREATE LOGFILE
GROUP (see Section 13.1.16, “CREATE LOGFILE GROUP Statement”).
Multiple tablespaces may use the same log file group for
UNDO logging.
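A typical sequence, sketched here with illustrative names, first
creates the log file group and then the tablespace that uses it:

CREATE LOGFILE GROUP lg_1
    ADD UNDOFILE 'undo_1.log'
    ENGINE NDB;

CREATE TABLESPACE ts_1
    ADD DATAFILE 'data_1.dat'
    USE LOGFILE GROUP lg_1
    INITIAL_SIZE 128M
    ENGINE NDB;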
When setting EXTENT_SIZE or
INITIAL_SIZE, you may optionally follow the
number with a one-letter abbreviation for an order of magnitude,
similar to those used in my.cnf. Generally,
this is one of the letters M (for megabytes) or
G (for gigabytes).
INITIAL_SIZE and EXTENT_SIZE
are subject to rounding as follows:
EXTENT_SIZE is rounded up to the nearest
whole multiple of 32K.
INITIAL_SIZE is rounded
down to the nearest whole multiple of
32K; this result is rounded up to the nearest whole multiple
of EXTENT_SIZE (after any rounding).
NDB reserves 4% of a tablespace for
data node restart operations. This reserved space cannot be used
for data storage.
The rounding just described is done explicitly, and a warning is
issued by the MySQL Server when any such rounding is performed.
The rounded values are also used by the NDB kernel for calculating
INFORMATION_SCHEMA.FILES column
values and other purposes. However, to avoid an unexpected result,
we suggest that you always use whole multiples of 32K in
specifying these options.
When CREATE TABLESPACE is used with
ENGINE [=] NDB, a tablespace and associated
data file are created on each Cluster data node. You can verify
that the data files were created and obtain information about them
by querying the
INFORMATION_SCHEMA.FILES table. (See
the example later in this section.)
(See Section 25.14, “The INFORMATION_SCHEMA FILES Table”.)
ADD DATAFILE: Defines the name of a
tablespace data file. This option is always required when
creating an NDB tablespace; for
InnoDB in MySQL 8.0.14 and later, it is
required only when creating an undo tablespace. The
file_name, including any specified path, must be
quoted with single or double quotation marks. File names (not
counting the file extension) and directory names must be at least
one byte in length. Zero length file names and directory names are
not supported.
Because there are considerable differences in how
InnoDB and NDB treat
data files, the two storage engines are covered separately in
the discussion that follows.
InnoDB data files.
An InnoDB tablespace supports only a
single data file, whose name must include a
.ibd extension.
To place an InnoDB general tablespace data
file in a location outside of the data directory, include a
fully qualified path or a path relative to the data directory.
Only a fully qualified path is permitted for undo tablespaces.
If you do not specify a path, a general tablespace is created
in the data directory. An undo tablespace created without
specifying a path is created in the directory defined by the
innodb_undo_directory
variable. If the
innodb_undo_directory
variable is undefined, undo tablespaces are created in the
data directory.
To avoid conflicts with implicitly created file-per-table
tablespaces, creating an InnoDB general
tablespace in a subdirectory under the data directory is not
supported. When creating a general tablespace or undo
tablespace outside of the data directory, the directory must
exist and must be known to InnoDB prior to
creating the tablespace. To make a directory known to
InnoDB, add it to the
innodb_directories value or
to one of the variables whose values are appended to the
innodb_directories value.
innodb_directories is a
read-only variable. Configuring it requires restarting the
server.
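A sketch of this configuration, using an illustrative directory path:
add the directory to the server configuration (a restart is required
because the variable is read-only), then create the tablespace there:

# my.cnf
[mysqld]
innodb_directories="/data/innodb_ext"

CREATE TABLESPACE ts_ext ADD DATAFILE '/data/innodb_ext/ts_ext.ibd' ENGINE=INNODB;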
If the ADD DATAFILE clause is not specified
when creating an InnoDB tablespace, a
tablespace data file with a unique file name is created
implicitly. The unique file name is a 128 bit UUID formatted
into five groups of hexadecimal numbers separated by dashes
(aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee).
A file extension is added if required by the storage engine.
An .ibd file extension is added for
InnoDB general tablespace data files. In a
replication environment, the data file name created on the
master is not the same as the data file name created on the
slave.
As of MySQL 8.0.17, the ADD DATAFILE clause
does not permit circular directory references when creating an
InnoDB tablespace. For example, the
circular directory reference (/../) in the
following statement is not permitted:
CREATE TABLESPACE ts1 ADD DATAFILE 'any_directory/../ts1.ibd';
An exception to this restriction exists on Linux, where a
circular directory reference is permitted if the preceding
directory is a symbolic link. For example, the data file path
in the example above is permitted if
any_directory is a symbolic link.
(It is still permitted for data file paths to begin with
'../'.)
NDB data files.
An NDB tablespace supports multiple data
files which can have any legal file names; more data files
can be added to an NDB Cluster tablespace following its
creation by using an ALTER
TABLESPACE statement.
An NDB tablespace data file is created by
default in the data node file system directory—that is,
the directory named ndb_nodeid_fs/TS
under the data node's data directory
(DataDir), where
nodeid is the data node's
NodeId. To place the
data file in a location other than the default, include an
absolute directory path or a path relative to the default
location. If the directory specified does not exist,
NDB attempts to create it; the system user
account under which the data node process is running must have
the appropriate permissions to do so.
When determining the path used for a data file,
NDB does not expand the
~ (tilde) character.
When multiple data nodes are run on the same physical host, the following considerations apply:
You cannot specify an absolute path when creating a data file.
It is not possible to create tablespace data files outside the data node file system directory, unless each data node has a separate data directory.
If each data node has its own data directory, data files can be created anywhere within this directory.
If each data node has its own data directory, it may also be possible to create a data file outside the node's data directory using a relative path, as long as this path resolves to a unique location on the host file system for each data node running on that host.
FILE_BLOCK_SIZE: This option—which is
specific to InnoDB general tablespaces, and
is ignored by NDB—defines the block
size for the tablespace data file. Values can be specified in
bytes or kilobytes. For example, an 8 kilobyte file block size
can be specified as 8192 or 8K. If you do not specify this
option, FILE_BLOCK_SIZE defaults to the
innodb_page_size value.
FILE_BLOCK_SIZE is required when you intend
to use the tablespace for storing compressed
InnoDB tables
(ROW_FORMAT=COMPRESSED). In this case, you
must define the tablespace FILE_BLOCK_SIZE
when creating the tablespace.
If FILE_BLOCK_SIZE is equal to the
innodb_page_size value, the
tablespace can contain only tables having an uncompressed row
format (COMPACT,
REDUNDANT, and DYNAMIC).
Tables with a COMPRESSED row format have a
different physical page size than uncompressed tables.
Therefore, compressed tables cannot coexist in the same
tablespace as uncompressed tables.
For a general tablespace to contain compressed tables,
FILE_BLOCK_SIZE must be specified, and the
FILE_BLOCK_SIZE value must be a valid
compressed page size in relation to the
innodb_page_size value. Also,
the physical page size of the compressed table
(KEY_BLOCK_SIZE) must be equal to
FILE_BLOCK_SIZE/1024. For example, if
innodb_page_size=16K, and
FILE_BLOCK_SIZE=8K, the
KEY_BLOCK_SIZE of the table must be 8. For
more information, see Section 15.6.3.3, “General Tablespaces”.
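For example, assuming the default innodb_page_size of 16K, the
following sketch creates an 8K general tablespace and a compressed
table whose KEY_BLOCK_SIZE matches FILE_BLOCK_SIZE/1024 (the
tablespace and table names are illustrative):

CREATE TABLESPACE ts2 ADD DATAFILE 'ts2.ibd' FILE_BLOCK_SIZE = 8192 ENGINE=INNODB;
CREATE TABLE t2 (c1 INT PRIMARY KEY) TABLESPACE ts2 ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;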
USE LOGFILE GROUP: Required for
NDB, this is the name of a log file group
previously created using CREATE LOGFILE
GROUP. Not supported for InnoDB,
where it fails with an error.
EXTENT_SIZE: This option is specific to
NDB, and is not supported by InnoDB, where it fails with an
error. EXTENT_SIZE sets the size, in bytes,
of the extents used by any files belonging to the tablespace.
The default value is 1M. The minimum size is 32K, and the
theoretical maximum is 2G, although the practical maximum size
depends on a number of factors. In most cases, changing the
extent size does not have any measurable effect on
performance, and the default value is recommended for all but
the most unusual situations.
An extent is a unit of
disk space allocation. One extent is filled with as much data
as that extent can contain before another extent is used. In
theory, up to 65,535 (64K) extents may be used per data file;
however, the recommended maximum is 32,768 (32K). The
recommended maximum size for a single data file is
32G—that is, 32K extents × 1 MB per extent. In
addition, once an extent is allocated to a given partition, it
cannot be used to store data from a different partition; an
extent cannot store data from more than one partition. This
means, for example, that a tablespace having a single data file
whose INITIAL_SIZE (described in the
following item) is 256 MB and whose
EXTENT_SIZE is 128M has just two extents,
and so can be used to store data from at most two different
disk data table partitions.
You can see how many extents remain free in a given data file
by querying the
INFORMATION_SCHEMA.FILES table,
and so derive an estimate for how much space remains free in
the file. For further discussion and examples, see
Section 25.14, “The INFORMATION_SCHEMA FILES Table”.
INITIAL_SIZE: This option is specific to
NDB, and is not supported by
InnoDB, where it fails with an error.
The INITIAL_SIZE parameter sets the total
size in bytes of the data file that was specified using
ADD DATAFILE. Once this file has been
created, its size cannot be changed; however, you can add more
data files to the tablespace using
ALTER
TABLESPACE ... ADD DATAFILE.
INITIAL_SIZE is optional; its default value
is 134217728 (128 MB).
On 32-bit systems, the maximum supported value for
INITIAL_SIZE is 4294967296 (4 GB).
AUTOEXTEND_SIZE: Currently ignored by
MySQL; reserved for possible future use. Has no effect in any
release of MySQL 8.0 or MySQL NDB Cluster 8.0, regardless of
the storage engine used.
MAX_SIZE: Currently ignored by MySQL;
reserved for possible future use. Has no effect in any release
of MySQL 8.0 or MySQL NDB Cluster 8.0, regardless of the
storage engine used.
NODEGROUP: Currently ignored by MySQL;
reserved for possible future use. Has no effect in any release
of MySQL 8.0 or MySQL NDB Cluster 8.0, regardless of the
storage engine used.
WAIT: Currently ignored by MySQL; reserved
for possible future use. Has no effect in any release of MySQL
8.0 or MySQL NDB Cluster 8.0, regardless of the storage engine
used.
COMMENT: Currently ignored by MySQL;
reserved for possible future use. Has no effect in any release
of MySQL 8.0 or MySQL NDB Cluster 8.0, regardless of the
storage engine used.
The ENCRYPTION clause enables or disables
page-level data encryption for an InnoDB
general tablespace. Encryption support for general tablespaces
was introduced in MySQL 8.0.13.
As of MySQL 8.0.16, if the ENCRYPTION
clause is not specified, the
default_table_encryption
setting controls whether encryption is enabled. The
ENCRYPTION clause overrides the
default_table_encryption
setting. However, if the
table_encryption_privilege_check
variable is enabled, the
TABLE_ENCRYPTION_ADMIN
privilege is required to use an ENCRYPTION
clause setting that differs from the
default_table_encryption
setting.
A keyring plugin must be installed and configured before an encryption-enabled tablespace can be created.
When a general tablespace is encrypted, all tables residing in the tablespace are encrypted. Likewise, a table created in an encrypted tablespace is encrypted.
For more information, see Section 15.13, “InnoDB Data-at-Rest Encryption”.
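A minimal sketch, assuming a keyring plugin is already installed and configured (the tablespace and file names are illustrative):
CREATE TABLESPACE `ts_enc`
    ADD DATAFILE 'ts_enc.ibd'
    ENCRYPTION = 'Y'    -- all tables created in ts_enc are encrypted
    ENGINE = InnoDB;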
ENGINE: Defines the storage engine which
uses the tablespace, where
engine_name is the name of the
storage engine. Currently, only the InnoDB
storage engine is supported by standard MySQL 8.0
releases. MySQL NDB Cluster supports both
NDB and InnoDB
tablespaces. The value of the
default_storage_engine system
variable is used for ENGINE if the option
is not specified.
For the rules covering the naming of MySQL tablespaces, see
Section 9.2, “Schema Object Names”. In addition to these rules, the
slash character (“/”) is not permitted, nor can
you use names beginning with innodb_, as
this prefix is reserved for system use.
Creation of temporary general tablespaces is not supported.
General tablespaces do not support temporary tables.
The TABLESPACE option may be used with
CREATE TABLE or
ALTER TABLE to assign an
InnoDB table partition or subpartition to a
file-per-table tablespace. All partitions must belong to the
same storage engine. Assigning table partitions to shared
InnoDB tablespaces is not supported. Shared
tablespaces include the InnoDB system
tablespace and general tablespaces.
General tablespaces support the addition of tables of any row
format using
CREATE TABLE ...
TABLESPACE.
innodb_file_per_table does
not need to be enabled.
innodb_strict_mode is not
applicable to general tablespaces. Tablespace management rules
are strictly enforced independently of
innodb_strict_mode. If
CREATE TABLESPACE parameters are incorrect
or incompatible, the operation fails regardless of the
innodb_strict_mode setting.
When a table is added to a general tablespace using
CREATE TABLE ...
TABLESPACE or
ALTER TABLE ...
TABLESPACE,
innodb_strict_mode is ignored
but the statement is evaluated as if
innodb_strict_mode is
enabled.
Use DROP TABLESPACE to remove a tablespace.
All tables must be dropped from a tablespace using
DROP TABLE prior to dropping
the tablespace. Before dropping an NDB Cluster tablespace you
must also remove all its data files using one or more
ALTER
TABLESPACE ... DROP DATAFILE statements. See
Section 22.5.10.1, “NDB Cluster Disk Data Objects”.
All parts of an InnoDB table added to an
InnoDB general tablespace reside in the
general tablespace, including indexes and
BLOB pages.
For an NDB table assigned to a tablespace,
only those columns which are not indexed are stored on disk,
and actually use the tablespace data files. Indexes and
indexed columns for all NDB tables are
always kept in memory.
Similar to the system tablespace, truncating or dropping
tables stored in a general tablespace creates free space
internally in the general tablespace
.ibd data file which can
only be used for new InnoDB data. Space is
not released back to the operating system as it is for
file-per-table tablespaces.
A general tablespace is not associated with any database or schema.
ALTER TABLE ...
DISCARD TABLESPACE and
ALTER TABLE
...IMPORT TABLESPACE are not supported for tables
that belong to a general tablespace.
The server uses tablespace-level metadata locking for DDL that references general tablespaces. By comparison, the server uses table-level metadata locking for DDL that references file-per-table tablespaces.
A generated or existing tablespace cannot be changed to a general tablespace.
There is no conflict between general tablespace names and file-per-table tablespace names. The “/” character, which is present in file-per-table tablespace names, is not permitted in general tablespace names.
mysqldump and mysqlpump
do not dump InnoDB
CREATE TABLESPACE statements.
This example demonstrates creating a general tablespace and adding three uncompressed tables of different row formats.
mysql> CREATE TABLESPACE `ts1` ADD DATAFILE 'ts1.ibd' ENGINE=INNODB;
mysql> CREATE TABLE t1 (c1 INT PRIMARY KEY) TABLESPACE ts1 ROW_FORMAT=REDUNDANT;
mysql> CREATE TABLE t2 (c1 INT PRIMARY KEY) TABLESPACE ts1 ROW_FORMAT=COMPACT;
mysql> CREATE TABLE t3 (c1 INT PRIMARY KEY) TABLESPACE ts1 ROW_FORMAT=DYNAMIC;
This example demonstrates creating a general tablespace and adding
a compressed table. The example assumes a default
innodb_page_size value of 16K.
The FILE_BLOCK_SIZE of 8192 requires that the
compressed table have a KEY_BLOCK_SIZE of 8.
mysql> CREATE TABLESPACE `ts2` ADD DATAFILE 'ts2.ibd' FILE_BLOCK_SIZE = 8192 Engine=InnoDB;
mysql> CREATE TABLE t4 (c1 INT PRIMARY KEY) TABLESPACE ts2 ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;
This example demonstrates creating a general tablespace without
specifying the ADD DATAFILE clause, which is
optional as of MySQL 8.0.14.
mysql> CREATE TABLESPACE `ts3` ENGINE=INNODB;
This example demonstrates creating an undo tablespace.
mysql> CREATE UNDO TABLESPACE undo_003 ADD DATAFILE 'undo_003.ibu';
Suppose that you wish to create an NDB Cluster Disk Data
tablespace named myts using a datafile named
mydata-1.dat. An NDB
tablespace always requires the use of a log file group consisting
of one or more undo log files. For this example, we first create a
log file group named mylg that contains one
undo log file named myundo-1.dat, using the
CREATE LOGFILE GROUP statement
shown here:
mysql> CREATE LOGFILE GROUP mylg
    ->     ADD UNDOFILE 'myundo-1.dat'
    ->     ENGINE=NDB;
Query OK, 0 rows affected (3.29 sec)
Now you can create the tablespace previously described using the following statement:
mysql> CREATE TABLESPACE myts
    ->     ADD DATAFILE 'mydata-1.dat'
    ->     USE LOGFILE GROUP mylg
    ->     ENGINE=NDB;
Query OK, 0 rows affected (2.98 sec)
You can now create a Disk Data table using a
CREATE TABLE statement with the
TABLESPACE and STORAGE DISK
options, similar to what is shown here:
mysql> CREATE TABLE mytable (
    ->     id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    ->     lname VARCHAR(50) NOT NULL,
    ->     fname VARCHAR(50) NOT NULL,
    ->     dob DATE NOT NULL,
    ->     joined DATE NOT NULL,
    ->     INDEX(lname, fname)
    -> )
    -> TABLESPACE myts STORAGE DISK
    -> ENGINE=NDB;
Query OK, 0 rows affected (1.41 sec)
It is important to note that only the dob and
joined columns from mytable
are actually stored on disk, because the
id, lname, and
fname columns are all indexed.
As mentioned previously, when CREATE TABLESPACE
is used with ENGINE [=] NDB, a tablespace and
associated data file are created on each NDB Cluster data node.
You can verify that the data files were created and obtain
information about them by querying the
INFORMATION_SCHEMA.FILES table, as
shown here:
mysql> SELECT FILE_NAME, FILE_TYPE, LOGFILE_GROUP_NAME, STATUS, EXTRA
    ->     FROM INFORMATION_SCHEMA.FILES
    ->     WHERE TABLESPACE_NAME = 'myts';
+--------------+------------+--------------------+--------+----------------+
| file_name    | file_type  | logfile_group_name | status | extra          |
+--------------+------------+--------------------+--------+----------------+
| mydata-1.dat | DATAFILE   | mylg               | NORMAL | CLUSTER_NODE=5 |
| mydata-1.dat | DATAFILE   | mylg               | NORMAL | CLUSTER_NODE=6 |
| NULL         | TABLESPACE | mylg               | NORMAL | NULL           |
+--------------+------------+--------------------+--------+----------------+
3 rows in set (0.01 sec)
For additional information and examples, see Section 22.5.10.1, “NDB Cluster Disk Data Objects”.
CREATE
[DEFINER = user]
TRIGGER trigger_name
trigger_time trigger_event
ON tbl_name FOR EACH ROW
[trigger_order]
trigger_body
trigger_time: { BEFORE | AFTER }
trigger_event: { INSERT | UPDATE | DELETE }
trigger_order: { FOLLOWS | PRECEDES } other_trigger_name
This statement creates a new trigger. A trigger is a named
database object that is associated with a table, and that
activates when a particular event occurs for the table. The
trigger becomes associated with the table named
tbl_name, which must refer to a
permanent table. You cannot associate a trigger with a
TEMPORARY table or a view.
Trigger names exist in the schema namespace, meaning that all triggers must have unique names within a schema. Triggers in different schemas can have the same name.
This section describes CREATE
TRIGGER syntax. For additional discussion, see
Section 24.3.1, “Trigger Syntax and Examples”.
CREATE TRIGGER requires the
TRIGGER privilege for the table
associated with the trigger. If the DEFINER
clause is present, the privileges required depend on the
user value, as discussed in
Section 24.6, “Stored Object Access Control”. If binary logging is
enabled, CREATE TRIGGER might
require the SUPER privilege, as
discussed in Section 24.7, “Stored Program Binary Logging”.
The DEFINER clause determines the security
context to be used when checking access privileges at trigger
activation time, as described later in this section.
trigger_time is the trigger action
time. It can be BEFORE or
AFTER to indicate that the trigger activates
before or after each row to be modified.
Basic column value checks occur prior to trigger activation, so
you cannot use BEFORE triggers to convert
values inappropriate for the column type to valid values.
trigger_event indicates the kind of
operation that activates the trigger. These
trigger_event values are permitted:
INSERT: The trigger activates
whenever a new row is inserted into the table (for example,
through INSERT,
LOAD DATA, and
REPLACE statements).
UPDATE: The trigger activates
whenever a row is modified (for example, through
UPDATE statements).
DELETE: The trigger activates
whenever a row is deleted from the table (for example, through
DELETE and
REPLACE statements).
DROP TABLE and
TRUNCATE TABLE statements on
the table do not activate this trigger,
because they do not use DELETE.
Dropping a partition does not activate
DELETE triggers, either.
The trigger_event does not represent a
literal type of SQL statement that activates the trigger so much
as it represents a type of table operation. For example, an
INSERT trigger activates not only
for INSERT statements but also
LOAD DATA statements because both
statements insert rows into a table.
A potentially confusing example of this is the INSERT
INTO ... ON DUPLICATE KEY UPDATE ... syntax: a
BEFORE INSERT trigger activates for every row,
followed by either an AFTER INSERT trigger or
both the BEFORE UPDATE and AFTER
UPDATE triggers, depending on whether there was a
duplicate key for the row.
Cascaded foreign key actions do not activate triggers.
It is possible to define multiple triggers for a given table that
have the same trigger event and action time. For example, you can
have two BEFORE UPDATE triggers for a table. By
default, triggers that have the same trigger event and action time
activate in the order they were created. To affect trigger order,
specify a trigger_order clause that
indicates FOLLOWS or
PRECEDES and the name of an existing trigger
that also has the same trigger event and action time. With
FOLLOWS, the new trigger activates after the
existing trigger. With PRECEDES, the new
trigger activates before the existing trigger.
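For illustration, the following sketch assumes a table account with a BEFORE UPDATE trigger named validate_update already defined (both names are hypothetical); the new trigger is made to activate after it:
CREATE TRIGGER audit_update BEFORE UPDATE ON account
    FOR EACH ROW FOLLOWS validate_update
    SET NEW.balance = ROUND(NEW.balance, 2);  -- runs after validate_update for each row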
trigger_body is the statement to
execute when the trigger activates. To execute multiple
statements, use the
BEGIN ... END
compound statement construct. This also enables you to use the
same statements that are permitted within stored routines. See
Section 13.6.1, “BEGIN ... END Compound Statement”. Some statements are not permitted in
triggers; see Section 24.8, “Restrictions on Stored Programs”.
Within the trigger body, you can refer to columns in the subject
table (the table associated with the trigger) by using the aliases
OLD and NEW.
OLD.col_name refers
to a column of an existing row before it is updated or deleted.
NEW.col_name refers
to the column of a new row to be inserted or an existing row after
it is updated.
Triggers cannot use
NEW.col_name or
OLD.col_name to
refer to generated columns. For information about generated
columns, see Section 13.1.20.8, “CREATE TABLE and Generated Columns”.
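As a simple illustration (the table and column names are hypothetical), the following BEFORE INSERT trigger uses NEW.col_name to adjust a value before it is stored:
CREATE TRIGGER ins_check BEFORE INSERT ON account
    FOR EACH ROW
    SET NEW.amount = IF(NEW.amount < 0, 0, NEW.amount);  -- clamp negative amounts to zero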
MySQL stores the sql_mode system
variable setting in effect when a trigger is created, and always
executes the trigger body with this setting in force,
regardless of the current server SQL mode when the
trigger begins executing.
The DEFINER clause specifies the MySQL account
to be used when checking access privileges at trigger activation
time. If the DEFINER clause is present, the
user value should be a MySQL account
specified as 'user_name'@'host_name',
CURRENT_USER, or
CURRENT_USER(). The permitted
user values depend on the privileges
you hold, as discussed in
Section 24.6, “Stored Object Access Control”. Also see that section
for additional information about trigger security.
If the DEFINER clause is omitted, the default
definer is the user who executes the CREATE
TRIGGER statement. This is the same as specifying
DEFINER = CURRENT_USER explicitly.
MySQL takes the DEFINER user into account when
checking trigger privileges as follows:
At CREATE TRIGGER time, the
user who issues the statement must have the
TRIGGER privilege.
At trigger activation time, privileges are checked against the
DEFINER user. This user must have these
privileges:
The TRIGGER privilege for
the subject table.
The SELECT privilege for
the subject table if references to table columns occur
using OLD.col_name or
NEW.col_name
in the trigger body.
The UPDATE privilege for
the subject table if table columns are targets of
SET NEW.col_name = value
assignments in the trigger body.
Whatever other privileges normally are required for the statements executed by the trigger.
Within a trigger body, the
CURRENT_USER function returns the
account used to check privileges at trigger activation time. This
is the DEFINER user, not the user whose actions
caused the trigger to be activated. For information about user
auditing within triggers, see
Section 6.2.22, “SQL-Based Account Activity Auditing”.
If you use LOCK TABLES to lock a
table that has triggers, the tables used within the trigger are
also locked, as described in
LOCK TABLES and Triggers.
For additional discussion of trigger use, see Section 24.3.1, “Trigger Syntax and Examples”.
CREATE
[OR REPLACE]
[ALGORITHM = {UNDEFINED | MERGE | TEMPTABLE}]
[DEFINER = user]
[SQL SECURITY { DEFINER | INVOKER }]
VIEW view_name [(column_list)]
AS select_statement
[WITH [CASCADED | LOCAL] CHECK OPTION]
The CREATE VIEW statement creates a
new view, or replaces an existing view if the OR
REPLACE clause is given. If the view does not exist,
CREATE OR REPLACE
VIEW is the same as CREATE
VIEW. If the view does exist,
CREATE OR REPLACE
VIEW replaces it.
For information about restrictions on view use, see Section 24.9, “Restrictions on Views”.
The select_statement is a
SELECT statement that provides the
definition of the view. (Selecting from the view selects, in
effect, using the SELECT
statement.) The select_statement can
select from base tables or other views. Beginning with MySQL 8.0.19,
the SELECT statement can use a
VALUES statement as its source, or
can be replaced with a TABLE
statement, as with
CREATE TABLE
... SELECT.
The view definition is “frozen” at creation time and
is not affected by subsequent changes to the definitions of the
underlying tables. For example, if a view is defined as
SELECT * on a table, new columns added to the
table later do not become part of the view, and columns dropped
from the table will result in an error when selecting from the
view.
The ALGORITHM clause affects how MySQL
processes the view. The DEFINER and
SQL SECURITY clauses specify the security
context to be used when checking access privileges at view
invocation time. The WITH CHECK OPTION clause
can be given to constrain inserts or updates to rows in tables
referenced by the view. These clauses are described later in this
section.
The CREATE VIEW statement requires
the CREATE VIEW privilege for the
view, and some privilege for each column selected by the
SELECT statement. For columns used
elsewhere in the SELECT statement,
you must have the SELECT privilege.
If the OR REPLACE clause is present, you must
also have the DROP privilege for
the view. If the DEFINER clause is present, the
privileges required depend on the user
value, as discussed in Section 24.6, “Stored Object Access Control”.
When a view is referenced, privilege checking occurs as described later in this section.
A view belongs to a database. By default, a new view is created in
the default database. To create the view explicitly in a given
database, use db_name.view_name syntax
to qualify the view name with the database name:
CREATE VIEW test.v AS SELECT * FROM t;
Unqualified table or view names in the
SELECT statement are also
interpreted with respect to the default database. A view can refer
to tables or views in other databases by qualifying the table or
view name with the appropriate database name.
Within a database, base tables and views share the same namespace, so a base table and a view cannot have the same name.
Columns retrieved by the SELECT
statement can be simple references to table columns, or
expressions that use functions, constant values, operators, and so
forth.
A view must have unique column names with no duplicates, just like
a base table. By default, the names of the columns retrieved by
the SELECT statement are used for
the view column names. To define explicit names for the view
columns, specify the optional
column_list clause as a list of
comma-separated identifiers. The number of names in
column_list must be the same as the
number of columns retrieved by the
SELECT statement.
A view can be created from many kinds of
SELECT statements. It can refer to
base tables or other views. It can use joins,
UNION, and subqueries. The
SELECT need not even refer to any
tables:
CREATE VIEW v_today (today) AS SELECT CURRENT_DATE;
The following example defines a view that selects two columns from another table as well as an expression calculated from those columns:
mysql> CREATE TABLE t (qty INT, price INT);
mysql> INSERT INTO t VALUES(3, 50);
mysql> CREATE VIEW v AS SELECT qty, price, qty*price AS value FROM t;
mysql> SELECT * FROM v;
+------+-------+-------+
| qty  | price | value |
+------+-------+-------+
|    3 |    50 |   150 |
+------+-------+-------+
A view definition is subject to the following restrictions:
The SELECT statement cannot
refer to system variables or user-defined variables.
Within a stored program, the
SELECT statement cannot refer
to program parameters or local variables.
The SELECT statement cannot
refer to prepared statement parameters.
Any table or view referred to in the definition must exist.
If, after the view has been created, a table or view that the
definition refers to is dropped, use of the view results in an
error. To check a view definition for problems of this kind,
use the CHECK TABLE statement.
The definition cannot refer to a TEMPORARY
table, and you cannot create a TEMPORARY
view.
You cannot associate a trigger with a view.
Aliases for column names in the
SELECT statement are checked
against the maximum column length of 64 characters (not the
maximum alias length of 256 characters).
ORDER BY is permitted in a view definition, but
it is ignored if you select from a view using a statement that has
its own ORDER BY.
For other options or clauses in the definition, they are added to
the options or clauses of the statement that references the view,
but the effect is undefined. For example, if a view definition
includes a LIMIT clause, and you select from
the view using a statement that has its own
LIMIT clause, it is undefined which limit
applies. This same principle applies to options such as
ALL, DISTINCT, or
SQL_SMALL_RESULT that follow the
SELECT keyword, and to clauses such
as INTO, FOR UPDATE,
FOR SHARE, LOCK IN SHARE
MODE, and PROCEDURE.
The results obtained from a view may be affected if you change the query processing environment by changing system variables:
mysql> CREATE VIEW v (mycol) AS SELECT 'abc';
Query OK, 0 rows affected (0.01 sec)

mysql> SET sql_mode = '';
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT "mycol" FROM v;
+-------+
| mycol |
+-------+
| mycol |
+-------+
1 row in set (0.01 sec)

mysql> SET sql_mode = 'ANSI_QUOTES';
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT "mycol" FROM v;
+-------+
| mycol |
+-------+
| abc   |
+-------+
1 row in set (0.00 sec)
The DEFINER and SQL SECURITY
clauses determine which MySQL account to use when checking access
privileges for the view when a statement is executed that
references the view. The valid SQL SECURITY
characteristic values are DEFINER (the default)
and INVOKER. These indicate that the required
privileges must be held by the user who defined or invoked the
view, respectively.
If the DEFINER clause is present, the
user value should be a MySQL account
specified as 'user_name'@'host_name',
CURRENT_USER, or
CURRENT_USER(). The permitted
user values depend on the privileges
you hold, as discussed in
Section 24.6, “Stored Object Access Control”. Also see that section
for additional information about view security.
If the DEFINER clause is omitted, the default
definer is the user who executes the CREATE
VIEW statement. This is the same as specifying
DEFINER = CURRENT_USER explicitly.
Within a view definition, the
CURRENT_USER function returns the
view's DEFINER value by default. For views
defined with the SQL SECURITY INVOKER
characteristic, CURRENT_USER
returns the account for the view's invoker. For information about
user auditing within views, see
Section 6.2.22, “SQL-Based Account Activity Auditing”.
Within a stored routine that is defined with the SQL
SECURITY DEFINER characteristic,
CURRENT_USER returns the routine's
DEFINER value. This also affects a view defined
within such a routine, if the view definition contains a
DEFINER value of
CURRENT_USER.
MySQL checks view privileges like this:
At view definition time, the view creator must have the
privileges needed to use the top-level objects accessed by the
view. For example, if the view definition refers to table
columns, the creator must have some privilege for each column
in the select list of the definition, and the
SELECT privilege for each
column used elsewhere in the definition. If the definition
refers to a stored function, only the privileges needed to
invoke the function can be checked. The privileges required at
function invocation time can be checked only as it executes:
For different invocations, different execution paths within
the function might be taken.
The user who references a view must have appropriate
privileges to access it (SELECT
to select from it, INSERT to
insert into it, and so forth.)
When a view has been referenced, privileges for objects
accessed by the view are checked against the privileges held
by the view DEFINER account or invoker,
depending on whether the SQL SECURITY
characteristic is DEFINER or
INVOKER, respectively.
If reference to a view causes execution of a stored function,
privilege checking for statements executed within the function
depend on whether the function SQL SECURITY
characteristic is DEFINER or
INVOKER. If the security characteristic is
DEFINER, the function runs with the
privileges of the DEFINER account. If the
characteristic is INVOKER, the function
runs with the privileges determined by the view's SQL
SECURITY characteristic.
Example: A view might depend on a stored function, and that
function might invoke other stored routines. For example, the
following view invokes a stored function f():
CREATE VIEW v AS SELECT * FROM t WHERE t.id = f(t.name);
Suppose that f() contains a statement such as
this:
IF name IS NULL THEN CALL p1(); ELSE CALL p2(); END IF;
The privileges required for executing statements within
f() need to be checked when
f() executes. This might mean that privileges
are needed for p1() or p2(),
depending on the execution path within f().
Those privileges must be checked at runtime, and the user who must
possess the privileges is determined by the SQL
SECURITY values of the view v and the
function f().
The DEFINER and SQL SECURITY
clauses for views are extensions to standard SQL. In standard SQL,
views are handled using the rules for SQL SECURITY
DEFINER. The standard says that the definer of the view,
which is the same as the owner of the view's schema, gets
applicable privileges on the view (for example,
SELECT) and may grant them. MySQL
has no concept of a schema “owner”, so MySQL adds a
clause to identify the definer. The DEFINER
clause is an extension where the intent is to have what the
standard has; that is, a permanent record of who defined the view.
This is why the default DEFINER value is the
account of the view creator.
The optional ALGORITHM clause is a MySQL
extension to standard SQL. It affects how MySQL processes the
view. ALGORITHM takes three values:
MERGE, TEMPTABLE, or
UNDEFINED. For more information, see
Section 24.5.2, “View Processing Algorithms”, as well as
Section 8.2.2.4, “Optimizing Derived Tables, View References, and Common Table Expressions
with Merging or Materialization”.
Some views are updatable. That is, you can use them in statements
such as UPDATE,
DELETE, or
INSERT to update the contents of
the underlying table. For a view to be updatable, there must be a
one-to-one relationship between the rows in the view and the rows
in the underlying table. There are also certain other constructs
that make a view nonupdatable.
A generated column in a view is considered updatable because it is
possible to assign to it. However, if such a column is updated
explicitly, the only permitted value is
DEFAULT. For information about generated
columns, see Section 13.1.20.8, “CREATE TABLE and Generated Columns”.
The WITH CHECK OPTION clause can be given for
an updatable view to prevent inserts or updates to rows except
those for which the WHERE clause in the
select_statement is true.
In a WITH CHECK OPTION clause for an updatable
view, the LOCAL and CASCADED
keywords determine the scope of check testing when the view is
defined in terms of another view. The LOCAL
keyword restricts the CHECK OPTION only to the
view being defined. CASCADED causes the checks
for underlying views to be evaluated as well. When neither keyword
is given, the default is CASCADED.
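As a brief sketch, using the table t created in an earlier example, the following view rejects inserts and updates that would produce rows its WHERE clause does not select:
CREATE VIEW v_positive AS
    SELECT qty, price FROM t WHERE qty > 0
    WITH CHECK OPTION;

-- Rejected with a CHECK OPTION failure, because qty is not greater than 0:
INSERT INTO v_positive (qty, price) VALUES (-1, 10);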
For more information about updatable views and the WITH
CHECK OPTION clause, see
Section 24.5.3, “Updatable and Insertable Views”, and
Section 24.5.4, “The View WITH CHECK OPTION Clause”.
DROP {DATABASE | SCHEMA} [IF EXISTS] db_name
DROP DATABASE drops all tables in
the database and deletes the database. Be
very careful with this statement! To use
DROP DATABASE, you need the
DROP privilege on the database.
DROP
SCHEMA is a synonym for DROP
DATABASE.
When a database is dropped, privileges granted specifically for the database are not automatically dropped. They must be dropped manually. See Section 13.7.1.6, “GRANT Statement”.
IF EXISTS is used to prevent an error from
occurring if the database does not exist.
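For example, the following statement succeeds whether or not a database named mydb (an illustrative name) exists:
DROP DATABASE IF EXISTS mydb;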
If the default database is dropped, the default database is unset
(the DATABASE() function returns
NULL).
If you use DROP DATABASE on a
symbolically linked database, both the link and the original
database are deleted.
DROP DATABASE returns the number of
tables that were removed.
The DROP DATABASE statement removes
from the given database directory those files and directories that
MySQL itself may create during normal operation. This includes all
files with the extensions shown in the following list:
.BAK
.DAT
.HSH
.MRG
.MYD
.MYI
.cfg
.db
.ibd
.ndb
If other files or directories remain in the database directory
after MySQL removes those just listed, the database directory
cannot be removed. In this case, you must remove any remaining
files or directories manually and issue the
DROP DATABASE statement again.
Dropping a database does not remove any
TEMPORARY tables that were created in that
database. TEMPORARY tables are automatically
removed when the session that created them ends. See
Section 13.1.20.2, “CREATE TEMPORARY TABLE Statement”.
You can also drop databases with mysqladmin. See Section 4.5.2, “mysqladmin — Client for Administering a MySQL Server”.
DROP EVENT [IF EXISTS] event_name
This statement drops the event named
event_name. The event immediately
ceases being active, and is deleted completely from the server.
If the event does not exist, the error ERROR 1517
(HY000): Unknown event
'event_name' results. You
can override this and cause the statement to generate a warning
for nonexistent events instead by using IF EXISTS.
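For example (the event name is illustrative), this statement produces a warning rather than an error if no such event exists:
DROP EVENT IF EXISTS myevent;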
This statement requires the EVENT
privilege for the schema to which the event to be dropped belongs.
The DROP FUNCTION statement is used
to drop stored functions and user-defined functions (UDFs):
For information about dropping stored functions, see Section 13.1.29, “DROP PROCEDURE and DROP FUNCTION Statements”.
For information about dropping user-defined functions, see Section 13.7.4.2, “DROP FUNCTION Statement”.
DROP INDEX index_name ON tbl_name
    [algorithm_option | lock_option] ...

algorithm_option:
    ALGORITHM [=] {DEFAULT | INPLACE | COPY}

lock_option:
    LOCK [=] {DEFAULT | NONE | SHARED | EXCLUSIVE}
DROP INDEX drops the index named
index_name from the table
tbl_name. This statement is mapped to
an ALTER TABLE statement to drop
the index. See Section 13.1.9, “ALTER TABLE Statement”.
To drop a primary key, the index name is always
PRIMARY, which must be specified as a quoted
identifier because PRIMARY is a reserved word:
DROP INDEX `PRIMARY` ON t;
Indexes on variable-width columns of
NDB tables are dropped online; that
is, without any table copying. The table is not locked against
access from other NDB Cluster API nodes, although it is locked
against other operations on the same API node
for the duration of the operation. This is done automatically by
the server whenever it determines that it is possible to do so;
you do not have to use any special SQL syntax or server options to
cause it to happen.
ALGORITHM and LOCK clauses
may be given to influence the table copying method and level of
concurrency for reading and writing the table while its indexes
are being modified. They have the same meaning as for the
ALTER TABLE statement. For more
information, see Section 13.1.9, “ALTER TABLE Statement”.
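As a sketch (the index and table names are illustrative), an in-place index drop that permits concurrent reads and writes can be requested like this:
DROP INDEX idx_lname ON t1 ALGORITHM=INPLACE LOCK=NONE;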
MySQL NDB Cluster supports online operations using the same
ALGORITHM=INPLACE syntax supported in the
standard MySQL Server. See
Section 22.5.11, “Online Operations with ALTER TABLE in NDB Cluster”, for more
information.
DROP LOGFILE GROUP logfile_group ENGINE [=] engine_name
This statement drops the log file group named
logfile_group. The log file group must
already exist or an error results. (For information on creating
log file groups, see Section 13.1.16, “CREATE LOGFILE GROUP Statement”.)
Before dropping a log file group, you must drop all tablespaces
that use that log file group for UNDO
logging.
The required ENGINE clause provides the name of
the storage engine used by the log file group to be dropped.
Currently, the only permitted values for
engine_name are
NDB and
NDBCLUSTER.
DROP LOGFILE GROUP is useful only
with Disk Data storage for NDB Cluster. See
Section 22.5.10, “NDB Cluster Disk Data Tables”.
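For example, the log file group mylg created earlier in this chapter could be dropped as follows, once no tablespace uses it for UNDO logging:
DROP LOGFILE GROUP mylg ENGINE=NDB;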
DROP {PROCEDURE | FUNCTION} [IF EXISTS] sp_name
This statement is used to drop a stored procedure or function.
That is, the specified routine is removed from the server. You
must have the ALTER ROUTINE
privilege for the routine. (If the
automatic_sp_privileges system variable is
enabled, that privilege and EXECUTE
are granted automatically to the routine creator when the routine
is created and dropped from the creator when the routine is
dropped. See Section 24.2.2, “Stored Routines and MySQL Privileges”.)
The IF EXISTS clause is a MySQL extension. It
prevents an error from occurring if the procedure or function does
not exist. A warning is produced that can be viewed with
SHOW WARNINGS.
DROP FUNCTION is also used to drop
user-defined functions (see Section 13.7.4.2, “DROP FUNCTION Statement”).
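For example (the routine names are illustrative), the following statements generate warnings rather than errors if the named routines do not exist:
DROP PROCEDURE IF EXISTS sp_cleanup;
DROP FUNCTION IF EXISTS f_tax;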
DROP SERVER [ IF EXISTS ] server_name
Drops the server definition for the server named
server_name. The
corresponding row in the mysql.servers table is
deleted. This statement requires the
SUPER privilege.
Dropping a server for a table does not affect any
FEDERATED tables that used this connection
information when they were created. See
Section 13.1.18, “CREATE SERVER Statement”.
DROP SERVER causes an implicit commit. See
Section 13.3.3, “Statements That Cause an Implicit Commit”.
DROP SERVER is not written to the binary log,
regardless of the logging format that is in use.
DROP SPATIAL REFERENCE SYSTEM
[IF EXISTS]
srid
srid: 32-bit unsigned integer
This statement removes a
spatial reference
system (SRS) definition from the data dictionary. It
requires the SUPER privilege.
Example:
DROP SPATIAL REFERENCE SYSTEM 4120;
If no SRS definition with the SRID value exists, an error occurs
unless IF EXISTS is specified. In that case, a
warning occurs rather than an error.
If the SRID value is used by some column in an existing table, an error occurs. For example:
mysql> DROP SPATIAL REFERENCE SYSTEM 4326;
ERROR 3716 (SR005): Can't modify SRID 4326. There is at
least one column depending on it.
To identify which column or columns use the SRID, use this query:
SELECT * FROM INFORMATION_SCHEMA.ST_GEOMETRY_COLUMNS WHERE SRS_ID=4326;
SRID values must be in the range of 32-bit unsigned integers, with these restrictions:
SRID 0 is a valid SRID but cannot be used with
DROP SPATIAL REFERENCE SYSTEM.
If the value is in a reserved SRID range, a warning occurs. Reserved ranges are [0, 32767] (reserved by EPSG), [60,000,000, 69,999,999] (reserved by EPSG), and [2,000,000,000, 2,147,483,647] (reserved by MySQL). EPSG stands for the European Petroleum Survey Group.
Users should not drop SRSs with SRIDs in the reserved ranges. If system-installed SRSs are dropped, the SRS definitions may be recreated for MySQL upgrades.
DROP [TEMPORARY] TABLE [IF EXISTS]
tbl_name [, tbl_name] ...
[RESTRICT | CASCADE]
DROP TABLE removes one or more
tables. You must have the DROP
privilege for each table.
Be careful with this statement! For each table, it removes the table definition and all table data. If the table is partitioned, the statement removes the table definition, all its partitions, all data stored in those partitions, and all partition definitions associated with the dropped table.
Dropping a table also drops any triggers for the table.
DROP TABLE causes an implicit
commit, except when used with the TEMPORARY
keyword. See Section 13.3.3, “Statements That Cause an Implicit Commit”.
When a table is dropped, privileges granted specifically for the table are not automatically dropped. They must be dropped manually. See Section 13.7.1.6, “GRANT Statement”.
If any tables named in the argument list do not exist,
DROP TABLE behavior depends on
whether the IF EXISTS clause is given:
Without IF EXISTS, the statement fails with
an error indicating which nonexisting tables it was unable to
drop, and no changes are made.
With IF EXISTS, no error occurs for
nonexisting tables. The statement drops all named tables that
do exist, and generates a NOTE diagnostic
for each nonexistent table. These notes can be displayed with
SHOW WARNINGS. See
Section 13.7.7.40, “SHOW WARNINGS Statement”.
IF EXISTS can also be useful for dropping
tables in unusual circumstances under which there is an entry in
the data dictionary but no table managed by the storage engine.
(For example, if an abnormal server exit occurs after removal of
the table from the storage engine but before removal of the data
dictionary entry.)
The TEMPORARY keyword has the following
effects:
The statement drops only TEMPORARY tables.
The statement does not cause an implicit commit.
No access rights are checked. A TEMPORARY
table is visible only within the session that created it, so no
check is necessary.
Including the TEMPORARY keyword is a good way
to prevent accidentally dropping non-TEMPORARY
tables.
The RESTRICT and CASCADE
keywords do nothing. They are permitted to make porting easier
from other database systems.
DROP TABLE is not supported with
all innodb_force_recovery
settings. See Section 15.21.2, “Forcing InnoDB Recovery”.
DROP [UNDO] TABLESPACE tablespace_name [ENGINE [=] engine_name]
This statement drops a tablespace that was previously created
using CREATE TABLESPACE. It is
supported by the NDB and
InnoDB storage engines.
The UNDO keyword, introduced in MySQL 8.0.14,
must be specified to drop an undo tablespace. Only undo
tablespaces created using
CREATE UNDO
TABLESPACE syntax can be dropped. An undo tablespace
must be in an empty state before it can be
dropped. For more information, see
Section 15.6.3.4, “Undo Tablespaces”.
ENGINE sets the storage engine that uses the
tablespace, where engine_name is the
name of the storage engine. Currently, the values
InnoDB and NDB are
supported. If not set, the value of
default_storage_engine is used.
If it is not the same as the storage engine used to create the
tablespace, the DROP TABLESPACE statement
fails.
tablespace_name is a case-sensitive identifier in MySQL.
For an InnoDB general tablespace, all tables
must be dropped from the tablespace prior to a DROP
TABLESPACE operation. If the tablespace is not empty,
DROP TABLESPACE returns an error.
An NDB tablespace to be dropped must not
contain any data files; in other words, before you can drop an
NDB tablespace, you must first drop each of its
data files using
ALTER TABLESPACE
... DROP DATAFILE.
A general InnoDB tablespace is not deleted
automatically when the last table in the tablespace is
dropped. The tablespace must be dropped explicitly using
DROP TABLESPACE tablespace_name.
A DROP DATABASE operation can
drop tables that belong to a general tablespace but it cannot
drop the tablespace, even if the operation drops all tables
that belong to the tablespace. The tablespace must be dropped
explicitly using DROP TABLESPACE tablespace_name.
Similar to the system tablespace, truncating or dropping
tables stored in a general tablespace creates free space
internally in the general tablespace
.ibd data file which can
only be used for new InnoDB data. Space is
not released back to the operating system as it is for
file-per-table tablespaces.
This example demonstrates how to drop an InnoDB
general tablespace. The general tablespace ts1
is created with a single table. Before dropping the tablespace,
the table must be dropped.
mysql> CREATE TABLESPACE `ts1` ADD DATAFILE 'ts1.ibd' Engine=InnoDB;
mysql> CREATE TABLE t1 (c1 INT PRIMARY KEY) TABLESPACE ts1 Engine=InnoDB;
mysql> DROP TABLE t1;
mysql> DROP TABLESPACE ts1;
This example demonstrates dropping an undo tablespace. An undo
tablespace must be in an empty state before it
can be dropped. For more information, see
Section 15.6.3.4, “Undo Tablespaces”.
mysql> DROP UNDO TABLESPACE undo_003;
This example shows how to drop an NDB
tablespace myts having a data file named
mydata-1.dat after first creating the
tablespace, and assumes the existence of a log file group named
mylg (see
Section 13.1.16, “CREATE LOGFILE GROUP Statement”).
mysql> CREATE TABLESPACE myts
    ->     ADD DATAFILE 'mydata-1.dat'
    ->     USE LOGFILE GROUP mylg
    ->     ENGINE=NDB;
You must remove all data files from the tablespace using
ALTER TABLESPACE, as shown here,
before it can be dropped:
mysql> ALTER TABLESPACE myts
    ->     DROP DATAFILE 'mydata-1.dat'
    ->     ENGINE=NDB;
mysql> DROP TABLESPACE myts;
DROP TRIGGER [IF EXISTS] [schema_name.]trigger_name
This statement drops a trigger. The schema (database) name is
optional. If the schema is omitted, the trigger is dropped from
the default schema. DROP TRIGGER
requires the TRIGGER privilege for
the table associated with the trigger.
Use IF EXISTS to prevent an error from
occurring for a trigger that does not exist. A
NOTE is generated for a nonexistent trigger
when using IF EXISTS. See
Section 13.7.7.40, “SHOW WARNINGS Statement”.
Triggers for a table are also dropped if you drop the table.
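For example (the schema and trigger names are illustrative):
DROP TRIGGER IF EXISTS test.ins_check;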
DROP VIEW [IF EXISTS]
view_name [, view_name] ...
[RESTRICT | CASCADE]
DROP VIEW removes one or more
views. You must have the DROP
privilege for each view.
If any views named in the argument list do not exist, the statement fails with an error indicating by name which nonexisting views it was unable to drop, and no changes are made.
In MySQL 5.7 and earlier,
DROP VIEW returns an error if any
views named in the argument list do not exist, but also drops
all views in the list that do exist. Due to the change in
behavior in MySQL 8.0, a partially completed
DROP VIEW operation on a MySQL
5.7 master fails when replicated on a MySQL
8.0 slave. To avoid this failure scenario, use
IF EXISTS syntax in DROP
VIEW statements to prevent an error from occurring for
views that do not exist. For more information, see
Section 13.1.1, “Atomic Data Definition Statement Support”.
The IF EXISTS clause prevents an error from
occurring for views that don't exist. When this clause is given, a
NOTE is generated for each nonexistent view.
See Section 13.7.7.40, “SHOW WARNINGS Statement”.
RESTRICT and CASCADE, if
given, are parsed and ignored.
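For example (the view names are illustrative), the following statement drops whichever of the named views exist and generates a note for each that does not:
DROP VIEW IF EXISTS v1, v2;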
RENAME TABLE
tbl_name TO new_tbl_name
[, tbl_name2 TO new_tbl_name2] ...
RENAME TABLE renames one or more
tables. You must have ALTER and
DROP privileges for the original
table, and CREATE and
INSERT privileges for the new
table.
For example, to rename a table named old_table
to new_table, use this statement:
RENAME TABLE old_table TO new_table;
That statement is equivalent to the following
ALTER TABLE statement:
ALTER TABLE old_table RENAME new_table;
RENAME TABLE, unlike ALTER
TABLE, can rename multiple tables within a single
statement:
RENAME TABLE old_table1 TO new_table1,
old_table2 TO new_table2,
old_table3 TO new_table3;
Renaming operations are performed left to right. Thus, to swap two
table names, do this (assuming that a table with the intermediary
name tmp_table does not already exist):
RENAME TABLE old_table TO tmp_table,
new_table TO old_table,
tmp_table TO new_table;

Metadata locks on tables are acquired in name order, which in some cases can make a difference in operation outcome when multiple transactions execute concurrently. See Section 8.11.4, “Metadata Locking”.
As of MySQL 8.0.13, you can rename tables locked with a
LOCK TABLES statement, provided
that they are locked with a WRITE lock or are
the product of renaming WRITE-locked tables
from earlier steps in a multiple-table rename operation. For
example, this is permitted:
LOCK TABLE old_table1 WRITE;
RENAME TABLE old_table1 TO new_table1,
new_table1 TO new_table2;

This is not permitted:
LOCK TABLE old_table1 READ;
RENAME TABLE old_table1 TO new_table1,
new_table1 TO new_table2;
Prior to MySQL 8.0.13, to execute RENAME TABLE,
there must be no tables locked with LOCK
TABLES.
Provided the transaction and table-locking conditions described above are satisfied, the rename operation is performed atomically; no other session can access any of the tables while the rename is in progress.
If any errors occur during a RENAME TABLE, the
statement fails and no changes are made.
You can use RENAME TABLE to move a table from
one database to another:
RENAME TABLEcurrent_db.tbl_nameTOother_db.tbl_name;
Using this method to move all tables from one database to a different one in effect renames the database (an operation for which MySQL has no single statement), except that the original database continues to exist, albeit with no tables.
Like RENAME TABLE, ALTER TABLE ...
RENAME can also be used to move a table to a different
database. Regardless of the statement used, if the rename
operation would move the table to a database located on a
different file system, the success of the outcome is platform
specific and depends on the underlying operating system calls used
to move table files.
If a table has triggers, attempts to rename the table into a
different database fail with a Trigger in wrong
schema
(ER_TRG_IN_WRONG_SCHEMA) error.
An unencrypted table can be moved to an encryption-enabled
database and vice versa. However, if the
table_encryption_privilege_check
variable is enabled, the
TABLE_ENCRYPTION_ADMIN privilege is
required if the table encryption setting differs from the default
database encryption.
To rename TEMPORARY tables, RENAME
TABLE does not work. Use ALTER
TABLE instead.
RENAME TABLE works for views, except that views
cannot be renamed into a different database.
Any privileges granted specifically for a renamed table or view are not migrated to the new name. They must be changed manually.
RENAME TABLE tbl_name TO new_tbl_name changes
internally generated foreign key constraint names and user-defined
foreign key constraint names that begin with the string
“tbl_name_ibfk_” to
reflect the new table name. InnoDB interprets
foreign key constraint names that begin with the string
“tbl_name_ibfk_” as
internally generated names.
Foreign key constraint names that point to the renamed table are automatically updated unless there is a conflict, in which case the statement fails with an error. A conflict occurs if the renamed constraint name already exists. In such cases, you must drop and re-create the foreign keys for them to function properly.
RENAME TABLE tbl_name TO new_tbl_name changes
internally generated and user-defined CHECK
constraint names that begin with the string
“tbl_name_chk_” to reflect
the new table name. MySQL interprets CHECK
constraint names that begin with the string
“tbl_name_chk_” as
internally generated names. Example:
mysql> SHOW CREATE TABLE t1\G
*************************** 1. row ***************************
       Table: t1
Create Table: CREATE TABLE `t1` (
  `i1` int(11) DEFAULT NULL,
  `i2` int(11) DEFAULT NULL,
  CONSTRAINT `t1_chk_1` CHECK ((`i1` > 0)),
  CONSTRAINT `t1_chk_2` CHECK ((`i2` < 0))
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
1 row in set (0.02 sec)

mysql> RENAME TABLE t1 TO t3;
Query OK, 0 rows affected (0.03 sec)

mysql> SHOW CREATE TABLE t3\G
*************************** 1. row ***************************
       Table: t3
Create Table: CREATE TABLE `t3` (
  `i1` int(11) DEFAULT NULL,
  `i2` int(11) DEFAULT NULL,
  CONSTRAINT `t3_chk_1` CHECK ((`i1` > 0)),
  CONSTRAINT `t3_chk_2` CHECK ((`i2` < 0))
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
1 row in set (0.01 sec)
TRUNCATE [TABLE] tbl_name
TRUNCATE TABLE empties a table
completely. It requires the DROP
privilege. Logically, TRUNCATE
TABLE is similar to a
DELETE statement that deletes all
rows, or a sequence of DROP TABLE
and CREATE TABLE statements.
To achieve high performance, TRUNCATE
TABLE bypasses the DML method of deleting data. Thus, it
does not cause ON DELETE triggers to fire, it
cannot be performed for InnoDB tables with
parent-child foreign key relationships, and it cannot be rolled
back like a DML operation. However, TRUNCATE
TABLE operations on tables that use an atomic
DDL-supported storage engine are either fully committed or rolled
back if the server halts during their operation. For more
information, see Section 13.1.1, “Atomic Data Definition Statement Support”.
Although TRUNCATE TABLE is similar
to DELETE, it is classified as a
DDL statement rather than a DML statement. It differs from
DELETE in the following ways:
Truncate operations drop and re-create the table, which is much faster than deleting rows one by one, particularly for large tables.
Truncate operations cause an implicit commit, and so cannot be rolled back. See Section 13.3.3, “Statements That Cause an Implicit Commit”.
Truncation operations cannot be performed if the session holds an active table lock.
TRUNCATE TABLE fails for an
InnoDB table or
NDB table if there are any
FOREIGN KEY constraints from other tables
that reference the table. Foreign key constraints between
columns of the same table are permitted.
Truncation operations do not return a meaningful value for the number of deleted rows. The usual result is “0 rows affected,” which should be interpreted as “no information.”
As long as the table definition is valid, the table can be
re-created as an empty table with
TRUNCATE TABLE, even if the
data or index files have become corrupted.
Any AUTO_INCREMENT value is reset to its
start value. This is true even for MyISAM
and InnoDB, which normally do not reuse
sequence values.
When used with partitioned tables,
TRUNCATE TABLE preserves the
partitioning; that is, the data and index files are dropped
and re-created, while the partition definitions are
unaffected.
The TRUNCATE TABLE statement
does not invoke ON DELETE triggers.
Truncating a corrupted InnoDB table is
supported.
TRUNCATE TABLE for a table closes
all handlers for the table that were opened with
HANDLER OPEN.
TRUNCATE TABLE is treated for
purposes of binary logging and replication as
DROP TABLE followed by
CREATE TABLE—that is, as DDL
rather than DML. This is due to the fact that, when using
InnoDB and other transactional
storage engines where the transaction isolation level does not
permit statement-based logging (READ
COMMITTED or READ
UNCOMMITTED), the statement was not logged and
replicated when using STATEMENT or
MIXED logging mode. (Bug #36763) However, it is
still applied on replication slaves using
InnoDB in the manner described
previously.
In MySQL 5.7 and earlier, on a system with a large buffer pool and
innodb_adaptive_hash_index
enabled, a TRUNCATE TABLE operation could cause
a temporary drop in system performance due to an LRU scan that
occurred when removing the table's adaptive hash index entries
(Bug #68184). The remapping of TRUNCATE
TABLE to DROP TABLE and
CREATE TABLE in MySQL 8.0 avoids
the problematic LRU scan.
TRUNCATE TABLE can be used with
Performance Schema summary tables, but the effect is to reset the
summary columns to 0 or NULL, not to remove
rows. See Section 26.12.18, “Performance Schema Summary Tables”.
CALL sp_name([parameter[,...]])
CALL sp_name[()]
The CALL statement invokes a stored
procedure that was defined previously with
CREATE PROCEDURE.
Stored procedures that take no arguments can be invoked without
parentheses. That is, CALL p() and
CALL p are equivalent.
CALL can pass back values to its
caller using parameters that are declared as
OUT or INOUT parameters.
When the procedure returns, a client program can also obtain the
number of rows affected for the final statement executed within
the routine: At the SQL level, call the
ROW_COUNT() function; from the C
API, call the
mysql_affected_rows() function.
For information about the effect of unhandled conditions on procedure parameters, see Section 13.6.7.8, “Condition Handling and OUT or INOUT Parameters”.
To get back a value from a procedure using an
OUT or INOUT parameter, pass
the parameter by means of a user variable, and then check the
value of the variable after the procedure returns. (If you are
calling the procedure from within another stored procedure or
function, you can also pass a routine parameter or local routine
variable as an IN or INOUT
parameter.) For an INOUT parameter, initialize
its value before passing it to the procedure. The following
procedure has an OUT parameter that the
procedure sets to the current server version, and an
INOUT value that the procedure increments by
one from its current value:
CREATE PROCEDURE p (OUT ver_param VARCHAR(25), INOUT incr_param INT)
BEGIN
  # Set value of OUT parameter
  SELECT VERSION() INTO ver_param;
  # Increment value of INOUT parameter
  SET incr_param = incr_param + 1;
END;
Before calling the procedure, initialize the variable to be passed
as the INOUT parameter. After calling the
procedure, the values of the two variables will have been set or
modified:
mysql> SET @increment = 10;
mysql> CALL p(@version, @increment);
mysql> SELECT @version, @increment;
+--------------------+------------+
| @version           | @increment |
+--------------------+------------+
| 8.0.3-rc-debug-log |         11 |
+--------------------+------------+
In prepared CALL statements used
with PREPARE and
EXECUTE, placeholders can be used
for IN, OUT, and
INOUT parameters. These types of parameters can
be used as follows:
mysql> SET @increment = 10;
mysql> PREPARE s FROM 'CALL p(?, ?)';
mysql> EXECUTE s USING @version, @increment;
mysql> SELECT @version, @increment;
+--------------------+------------+
| @version           | @increment |
+--------------------+------------+
| 8.0.3-rc-debug-log |         11 |
+--------------------+------------+
To write C programs that use the
CALL SQL statement to execute
stored procedures that produce result sets, the
CLIENT_MULTI_RESULTS flag must be enabled. This
is because each CALL returns a
result to indicate the call status, in addition to any result sets
that might be returned by statements executed within the
procedure. CLIENT_MULTI_RESULTS must also be
enabled if CALL is used to execute
any stored procedure that contains prepared statements. It cannot
be determined when such a procedure is loaded whether those
statements will produce result sets, so it is necessary to assume
that they will.
CLIENT_MULTI_RESULTS can be enabled when you
call mysql_real_connect(), either
explicitly by passing the CLIENT_MULTI_RESULTS
flag itself, or implicitly by passing
CLIENT_MULTI_STATEMENTS (which also enables
CLIENT_MULTI_RESULTS).
CLIENT_MULTI_RESULTS is enabled by default.
To process the result of a CALL
statement executed using
mysql_query() or
mysql_real_query(), use a loop
that calls mysql_next_result() to
determine whether there are more results. For an example, see
Section 28.7.22, “C API Multiple Statement Execution Support”.
C programs can use the prepared-statement interface to execute
CALL statements and access
OUT and INOUT parameters.
This is done by processing the result of a
CALL statement using a loop that
calls mysql_stmt_next_result() to
determine whether there are more results. For an example, see
Section 28.7.24, “C API Prepared CALL Statement Support”. Languages that
provide a MySQL interface can use prepared
CALL statements to directly
retrieve OUT and INOUT
procedure parameters.
Metadata changes to objects referred to by stored programs are detected and cause automatic reparsing of the affected statements when the program is next executed. For more information, see Section 8.10.3, “Caching of Prepared Statements and Stored Programs”.
DELETE is a DML statement that
removes rows from a table.
A DELETE statement can start with a
WITH clause to define common table
expressions accessible within the
DELETE. See Section 13.2.15, “WITH (Common Table Expressions)”.
DELETE [LOW_PRIORITY] [QUICK] [IGNORE] FROM tbl_name [[AS] tbl_alias]
    [PARTITION (partition_name [, partition_name] ...)]
    [WHERE where_condition]
    [ORDER BY ...]
    [LIMIT row_count]
The DELETE statement deletes rows from
tbl_name and returns the number of
deleted rows. To check the number of deleted rows, call the
ROW_COUNT() function described in
Section 12.15, “Information Functions”.
The conditions in the optional WHERE clause
identify which rows to delete. With no WHERE
clause, all rows are deleted.
where_condition is an expression that
evaluates to true for each row to be deleted. It is specified as
described in Section 13.2.10, “SELECT Statement”.
If the ORDER BY clause is specified, the rows
are deleted in the order that is specified. The
LIMIT clause places a limit on the number of
rows that can be deleted. These clauses apply to single-table
deletes, but not multi-table deletes.
DELETE [LOW_PRIORITY] [QUICK] [IGNORE]
tbl_name[.*] [, tbl_name[.*]] ...
FROM table_references
[WHERE where_condition]
DELETE [LOW_PRIORITY] [QUICK] [IGNORE]
FROM tbl_name[.*] [, tbl_name[.*]] ...
USING table_references
[WHERE where_condition]
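As a sketch of the multiple-table syntax (the table and column names are illustrative), the following statement deletes the matching rows from both t1 and t2:
DELETE t1, t2 FROM t1
    INNER JOIN t2 ON t1.id = t2.ref_id
    WHERE t1.created < '2020-01-01';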
You need the DELETE privilege on a
table to delete rows from it. You need only the
SELECT privilege for any columns
that are only read, such as those named in the
WHERE clause.
When you do not need to know the number of deleted rows, the
TRUNCATE TABLE statement is a
faster way to empty a table than a
DELETE statement with no
WHERE clause. Unlike
DELETE,
TRUNCATE TABLE cannot be used
within a transaction or if you have a lock on the table. See
Section 13.1.37, “TRUNCATE TABLE Statement” and
Section 13.3.6, “LOCK TABLES and UNLOCK TABLES Statements”.
The speed of delete operations may also be affected by factors discussed in Section 8.2.5.3, “Optimizing DELETE Statements”.
To ensure that a given DELETE
statement does not take too much time, the MySQL-specific
LIMIT row_count clause for
DELETE specifies the
maximum number of rows to be deleted. If the number of rows to
delete is larger than the limit, repeat the
DELETE statement until the number of affected
rows is less than the LIMIT value.
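A minimal sketch of this approach, assuming an illustrative table somelog and batches of 1000 rows:
DELETE FROM somelog WHERE created < '2020-01-01' LIMIT 1000;
-- Repeat the statement (for example, from application code) until
-- ROW_COUNT() reports fewer than 1000 affected rows.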
You cannot delete from a table and select from the same table in a subquery.
DELETE supports explicit partition selection
using the PARTITION option, which takes a list
of the comma-separated names of one or more partitions or
subpartitions (or both) from which to select rows to be dropped.
Partitions not included in the list are ignored. Given a
partitioned table t with a partition named
p0, executing the statement DELETE
FROM t PARTITION (p0) has the same effect on the table
as executing ALTER
TABLE t TRUNCATE PARTITION (p0); in both cases, all rows
in partition p0 are dropped.
PARTITION can be used along with a
WHERE condition, in which case the condition is
tested only on rows in the listed partitions. For example,
DELETE FROM t PARTITION (p0) WHERE c < 5
deletes rows only from partition p0 for which
the condition c < 5 is true; rows in any
other partitions are not checked and thus not affected by the
DELETE.
The PARTITION option can also be used in
multiple-table DELETE statements. You can use
up to one such option per table named in the
FROM option.
For more information and examples, see Section 23.5, “Partition Selection”.
If you delete the row containing the maximum value for an
AUTO_INCREMENT column, the value is not reused
for a MyISAM or InnoDB
table. If you delete all rows in the table with DELETE
FROM tbl_name (without a
WHERE clause) in
autocommit mode, the sequence
starts over for all storage engines except
InnoDB and MyISAM. There are
some exceptions to this behavior for InnoDB
tables, as discussed in
Section 15.6.1.6, “AUTO_INCREMENT Handling in InnoDB”.
For MyISAM tables, you can specify an
AUTO_INCREMENT secondary column in a
multiple-column key. In this case, reuse of values deleted from
the top of the sequence occurs even for MyISAM
tables. See Section 3.6.9, “Using AUTO_INCREMENT”.
The DELETE statement supports the
following modifiers:
If you specify the LOW_PRIORITY modifier,
the server delays execution of the
DELETE until no other clients
are reading from the table. This affects only storage engines
that use only table-level locking (such as
MyISAM, MEMORY, and
MERGE).
For MyISAM tables, if you use the
QUICK modifier, the storage engine does not
merge index leaves during delete, which may speed up some
kinds of delete operations.
The IGNORE modifier causes MySQL to ignore
errors during the process of deleting rows. (Errors
encountered during the parsing stage are processed in the
usual manner.) Errors that are ignored due to the use of
IGNORE are returned as warnings. For more
information, see Comparison of the IGNORE Keyword and Strict SQL Mode.
If the DELETE statement includes an
ORDER BY clause, rows are deleted in the order
specified by the clause. This is useful primarily in conjunction
with LIMIT. For example, the following
statement finds rows matching the WHERE clause,
sorts them by timestamp_column, and deletes the
first (oldest) one:
DELETE FROM somelog WHERE user = 'jcole' ORDER BY timestamp_column LIMIT 1;
ORDER BY also helps to delete rows in an order
required to avoid referential integrity violations.
If you are deleting many rows from a large table, you may exceed
the lock table size for an InnoDB table. To
avoid this problem, or simply to minimize the time that the table
remains locked, the following strategy (which does not use
DELETE at all) might be helpful:
Select the rows not to be deleted into an empty table that has the same structure as the original table:
INSERT INTO t_copy SELECT * FROM t WHERE ... ;
Use RENAME TABLE to atomically
move the original table out of the way and rename the copy to
the original name:
RENAME TABLE t TO t_old, t_copy TO t;
Drop the original table:
DROP TABLE t_old;
No other sessions can access the tables involved while
RENAME TABLE executes, so the
rename operation is not subject to concurrency problems. See
Section 13.1.36, “RENAME TABLE Statement”.
In MyISAM tables, deleted rows are maintained
in a linked list and subsequent
INSERT operations reuse old row
positions. To reclaim unused space and reduce file sizes, use the
OPTIMIZE TABLE statement or the
myisamchk utility to reorganize tables.
OPTIMIZE TABLE is easier to use,
but myisamchk is faster. See
Section 13.7.3.4, “OPTIMIZE TABLE Statement”, and Section 4.6.4, “myisamchk — MyISAM Table-Maintenance Utility”.
The QUICK modifier affects whether index leaves
are merged for delete operations. DELETE QUICK
is most useful for applications where index values for deleted
rows are replaced by similar index values from rows inserted
later. In this case, the holes left by deleted values are reused.
DELETE QUICK is not useful when deleted values
lead to underfilled index blocks spanning a range of index values
for which new inserts occur again. In this case, use of
QUICK can lead to wasted space in the index
that remains unreclaimed. Here is an example of such a scenario:
Create a table that contains an indexed
AUTO_INCREMENT column.
Insert many rows into the table. Each insert results in an index value that is added to the high end of the index.
Delete a block of rows at the low end of the column range
using DELETE QUICK.
In this scenario, the index blocks associated with the deleted
index values become underfilled but are not merged with other
index blocks due to the use of QUICK. They
remain underfilled when new inserts occur, because new rows do not
have index values in the deleted range. Furthermore, they remain
underfilled even if you later use
DELETE without
QUICK, unless some of the deleted index values
happen to lie in index blocks within or adjacent to the
underfilled blocks. To reclaim unused index space under these
circumstances, use OPTIMIZE TABLE.
If you are going to delete many rows from a table, it might be
faster to use DELETE QUICK followed by
OPTIMIZE TABLE. This rebuilds the
index rather than performing many index block merge operations.
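As a sketch, assuming a MyISAM table t from which a large range of low-valued ids is being removed (names are illustrative only):
DELETE QUICK FROM t WHERE id < 100000;
OPTIMIZE TABLE t;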
You can specify multiple tables in a
DELETE statement to delete rows
from one or more tables depending on the condition in the
WHERE clause. You cannot use ORDER
BY or LIMIT in a multiple-table
DELETE. The
table_references clause lists the
tables involved in the join, as described in
Section 13.2.10.2, “JOIN Clause”.
For the first multiple-table syntax, only matching rows from the
tables listed before the FROM clause are
deleted. For the second multiple-table syntax, only matching rows
from the tables listed in the FROM clause
(before the USING clause) are deleted. The
effect is that you can delete rows from many tables at the same
time and have additional tables that are used only for searching:
DELETE t1, t2 FROM t1 INNER JOIN t2 INNER JOIN t3 WHERE t1.id=t2.id AND t2.id=t3.id;
Or:
DELETE FROM t1, t2 USING t1 INNER JOIN t2 INNER JOIN t3 WHERE t1.id=t2.id AND t2.id=t3.id;
These statements use all three tables when searching for rows to
delete, but delete matching rows only from tables
t1 and t2.
The preceding examples use INNER JOIN, but
multiple-table DELETE statements
can use other types of join permitted in
SELECT statements, such as
LEFT JOIN. For example, to delete rows that
exist in t1 that have no match in
t2, use a LEFT JOIN:
DELETE t1 FROM t1 LEFT JOIN t2 ON t1.id=t2.id WHERE t2.id IS NULL;
The syntax permits .* after each
tbl_name for compatibility with
Access.
If you use a multiple-table DELETE
statement involving InnoDB tables for which
there are foreign key constraints, the MySQL optimizer might
process tables in an order that differs from that of their
parent/child relationship. In this case, the statement fails and
rolls back. Instead, you should delete from a single table and
rely on the ON DELETE capabilities that
InnoDB provides to cause the other tables to be
modified accordingly.
If you declare an alias for a table, you must use the alias when referring to the table:
DELETE t1 FROM test AS t1, test2 WHERE ...
Table aliases in a multiple-table
DELETE should be declared only in
the table_references part of the
statement. Elsewhere, alias references are permitted but not alias
declarations.
Correct:
DELETE a1, a2 FROM t1 AS a1 INNER JOIN t2 AS a2 WHERE a1.id=a2.id;
DELETE FROM a1, a2 USING t1 AS a1 INNER JOIN t2 AS a2 WHERE a1.id=a2.id;
Incorrect:
DELETE t1 AS a1, t2 AS a2 FROM t1 INNER JOIN t2 WHERE a1.id=a2.id;
DELETE FROM t1 AS a1, t2 AS a2 USING t1 INNER JOIN t2 WHERE a1.id=a2.id;
Table aliases are also supported for single-table
DELETE statements beginning with MySQL 8.0.16.
(Bug #89410, Bug #27455809)
DO expr [, expr] ...
DO executes the expressions but
does not return any results. In most respects,
DO is shorthand for SELECT
expr, ..., but has the
advantage that it is slightly faster when you do not care about
the result.
DO is useful primarily with
functions that have side effects, such as
RELEASE_LOCK().
Example: This SELECT statement
pauses, but also produces a result set:
mysql> SELECT SLEEP(5);
+----------+
| SLEEP(5) |
+----------+
| 0 |
+----------+
1 row in set (5.02 sec)
DO, on the other hand, pauses
without producing a result set:
mysql> DO SLEEP(5);
Query OK, 0 rows affected (4.99 sec)
This could be useful, for example, in a stored function or trigger, where statements that produce result sets are prohibited.
DO only executes expressions. It
cannot be used in all cases where SELECT can be
used. For example, DO id FROM t1 is invalid
because it references a table.
HANDLER tbl_name OPEN [ [AS] alias]

HANDLER tbl_name READ index_name { = | <= | >= | < | > } (value1,value2,...)
    [ WHERE where_condition ] [LIMIT ... ]
HANDLER tbl_name READ index_name { FIRST | NEXT | PREV | LAST }
    [ WHERE where_condition ] [LIMIT ... ]
HANDLER tbl_name READ { FIRST | NEXT }
    [ WHERE where_condition ] [LIMIT ... ]

HANDLER tbl_name CLOSE
The HANDLER statement provides direct access to
table storage engine interfaces. It is available for
InnoDB and MyISAM tables.
The HANDLER ... OPEN statement opens a table,
making it accessible using subsequent HANDLER ...
READ statements. This table object is not shared by
other sessions and is not closed until the session calls
HANDLER ... CLOSE or the session terminates.
If you open the table using an alias, further references to the
open table with other HANDLER statements must
use the alias rather than the table name. If you do not use an
alias, but open the table using a table name qualified by the
database name, further references must use the unqualified table
name. For example, for a table opened using
mydb.mytable, further references must use
mytable.
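The following sketch illustrates these naming rules, using a hypothetical table mytable in the mydb database:
HANDLER mydb.mytable OPEN;       -- opened with a qualified name and no alias
HANDLER mytable READ FIRST;      -- further references use the unqualified name
HANDLER mytable CLOSE;
HANDLER mydb.mytable OPEN AS h;  -- opened with an alias
HANDLER h READ FIRST;            -- further references must use the alias
HANDLER h CLOSE;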
The first HANDLER ... READ syntax fetches a row
where the index specified satisfies the given values and the
WHERE condition is met. If you have a
multiple-column index, specify the index column values as a
comma-separated list. Either specify values for all the columns in
the index, or specify values for a leftmost prefix of the index
columns. Suppose that an index my_idx includes
three columns named col_a,
col_b, and col_c, in that
order. The HANDLER statement can specify values
for all three columns in the index, or for the columns in a
leftmost prefix. For example:
HANDLER ... READ my_idx = (col_a_val,col_b_val,col_c_val) ...
HANDLER ... READ my_idx = (col_a_val,col_b_val) ...
HANDLER ... READ my_idx = (col_a_val) ...
To employ the HANDLER interface to refer to a
table's PRIMARY KEY, use the quoted identifier
`PRIMARY`:
HANDLER tbl_name READ `PRIMARY` ...
The second HANDLER ... READ syntax fetches a
row from the table in index order that matches the
WHERE condition.
The third HANDLER ... READ syntax fetches a row
from the table in natural row order that matches the
WHERE condition. It is faster than
HANDLER tbl_name READ index_name when a full table
scan is desired. Natural row order is the order in which rows are
stored in a MyISAM table data file. This
statement works for InnoDB tables as well, but
there is no such concept because there is no separate data file.
Without a LIMIT clause, all forms of
HANDLER ... READ fetch a single row if one is
available. To return a specific number of rows, include a
LIMIT clause. It has the same syntax as for the
SELECT statement. See
Section 13.2.10, “SELECT Statement”.
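For example, a natural-row-order scan of a hypothetical table t1 can be performed in batches:
HANDLER t1 OPEN;
HANDLER t1 READ FIRST LIMIT 10;
HANDLER t1 READ NEXT LIMIT 10;
HANDLER t1 CLOSE;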
HANDLER ... CLOSE closes a table that was
opened with HANDLER ... OPEN.
There are several reasons to use the HANDLER
interface instead of normal SELECT
statements:
HANDLER is faster than
SELECT:
A designated storage engine handler object is allocated
for the HANDLER ... OPEN. The object is
reused for subsequent HANDLER
statements for that table; it need not be reinitialized
for each one.
There is less parsing involved.
There is no optimizer or query-checking overhead.
The handler interface does not have to provide a
consistent look of the data (for example,
dirty reads are
permitted), so the storage engine can use optimizations
that SELECT does not
normally permit.
HANDLER makes it easier to port to MySQL
applications that use a low-level ISAM-like
interface. (See Section 15.20, “InnoDB memcached Plugin” for an
alternative way to adapt applications that use the key-value
store paradigm.)
HANDLER enables you to traverse a database
in a manner that is difficult (or even impossible) to
accomplish with SELECT. The
HANDLER interface is a more natural way to
look at data when working with applications that provide an
interactive user interface to the database.
HANDLER is a somewhat low-level statement. For
example, it does not provide consistency. That is,
HANDLER ... OPEN does not
take a snapshot of the table, and does not
lock the table. This means that after a HANDLER ...
OPEN statement is issued, table data can be modified (by
the current session or other sessions) and these modifications
might be only partially visible to HANDLER ...
NEXT or HANDLER ... PREV scans.
An open handler can be closed and marked for reopen, in which case the handler loses its position in the table. This occurs when both of the following circumstances are true:
Any session executes FLUSH
TABLES or DDL statements on the handler's table.
The session in which the handler is open executes
non-HANDLER statements that use tables.
TRUNCATE TABLE for a table closes
all handlers for the table that were opened with
HANDLER OPEN.
If a table flushed with
FLUSH TABLES tbl_name WITH READ LOCK
was opened with HANDLER, the
handler is implicitly flushed and loses its position.
IMPORT TABLE FROM sdi_file [, sdi_file] ...
The IMPORT TABLE statement imports
MyISAM tables based on information contained in
.sdi (serialized dictionary information)
metadata files. IMPORT TABLE
requires the FILE privilege to read
the .sdi and table content files, and the
CREATE privilege for the table to
be created.
Tables can be exported from one server using
mysqldump to write a file of SQL statements and
imported into another server using mysql to
process the dump file. IMPORT TABLE
provides a faster alternative using the “raw” table
files.
Prior to import, the files that provide the table content must be
placed in the appropriate schema directory for the import server,
and the .sdi file must be located in a
directory accessible to the server. For example, the
.sdi file can be placed in the directory
named by the secure_file_priv
system variable, or (if
secure_file_priv is empty) in a
directory under the server data directory.
The following example describes how to export
MyISAM tables named
employees and managers from
the hr schema of one server and import them
into the hr schema of another server. The
example uses these assumptions (to perform a similar operation on
your own system, modify the path names as appropriate):
For the export server,
export_basedir represents its base
directory, and its data directory is
export_basedir/data.
For the import server,
import_basedir represents its base
directory, and its data directory is
import_basedir/data.
Table files are exported from the export server into the
/tmp/export directory and this directory
is secure (not accessible to other users).
The import server uses /tmp/mysql-files
as the directory named by its
secure_file_priv system
variable.
To export tables from the export server, use this procedure:
Ensure a consistent snapshot by executing this statement to lock the tables so that they cannot be modified during export:
mysql> FLUSH TABLES hr.employees, hr.managers WITH READ LOCK;
While the lock is in effect, the tables can still be used, but only for read access.
At the file system level, copy the .sdi
and table content files from the hr schema
directory to the secure export directory:
The .sdi file is located in the
hr schema directory, but might not have
exactly the same basename as the table name. For example,
the .sdi files for the
employees and
managers tables might be named
employees_125.sdi and
managers_238.sdi.
For a MyISAM table, the content files
are its .MYD data file and
.MYI index file.
Given those file names, the copy commands look like this:
shell> cd export_basedir/data/hr
shell> cp employees_125.sdi /tmp/export
shell> cp managers_238.sdi /tmp/export
shell> cp employees.{MYD,MYI} /tmp/export
shell> cp managers.{MYD,MYI} /tmp/export
Unlock the tables:
mysql> UNLOCK TABLES;
To import tables into the import server, use this procedure:
The import schema must exist. If necessary, execute this statement to create it:
mysql> CREATE SCHEMA hr;
At the file system level, copy the .sdi
files to the import server
secure_file_priv directory,
/tmp/mysql-files. Also, copy the table
content files to the hr schema directory:
shell> cd /tmp/export
shell> cp employees_125.sdi /tmp/mysql-files
shell> cp managers_238.sdi /tmp/mysql-files
shell> cp employees.{MYD,MYI} import_basedir/data/hr
shell> cp managers.{MYD,MYI} import_basedir/data/hr
Import the tables by executing an IMPORT
TABLE statement that names the
.sdi files:
mysql> IMPORT TABLE FROM '/tmp/mysql-files/employees.sdi', '/tmp/mysql-files/managers.sdi';
The .sdi file need not be placed in the
import server directory named by the
secure_file_priv system variable
if that variable is empty; it can be in any directory accessible
to the server, including the schema directory for the imported
table. If the .sdi file is placed in that
directory, however, it may be rewritten; the import operation
creates a new .sdi file for the table, which
will overwrite the old .sdi file if the
operation uses the same file name for the new file.
Each sdi_file value must be a string
literal that names the .sdi file for a table
or is a pattern that matches .sdi files. If
the string is a pattern, any leading directory path and the
.sdi file name suffix must be given
literally. Pattern characters are permitted only in the base name
part of the file name:
? matches any single character
* matches any sequence of characters,
including no characters
Using a pattern, the previous IMPORT
TABLE statement could have been written like this
(assuming that the /tmp/mysql-files directory
contains no other .sdi files matching the
pattern):
IMPORT TABLE FROM '/tmp/mysql-files/*.sdi';
To interpret the location of .sdi file path
names, the server uses the same rules for
IMPORT TABLE as the server-side
rules for LOAD DATA (that is, the
non-LOCAL rules). See
Section 13.2.7, “LOAD DATA Statement”, paying particular attention to the
rules used to interpret relative path names.
IMPORT TABLE fails if the
.sdi or table files cannot be located. After
importing a table, the server attempts to open it and reports as
warnings any problems detected. To attempt a repair to correct any
reported issues, use REPAIR TABLE.
IMPORT TABLE is not written to the
binary log.
IMPORT TABLE applies only to
non-TEMPORARY MyISAM
tables. It does not apply to tables created with a transactional
storage engine, tables created with
CREATE TEMPORARY
TABLE, or views.
An .sdi file used in an import operation
must be generated on a server with the same data dictionary
version and sdi version as the import server. The version
information of the generating server is found in the
.sdi file:
{
"mysqld_version_id":80019,
"dd_version":80017,
"sdi_version":80016,
...
}
To determine the data dictionary and sdi version of the import
server, you can check the .sdi file of a
recently created table on the import server.
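One way to do this (a sketch; the table name and the numeric suffix of the .sdi file are illustrative) is to create a throwaway MyISAM table on the import server and inspect the .sdi file written to its schema directory:
mysql> CREATE TABLE test.sdi_check (i INT) ENGINE=MyISAM;
shell> cat import_basedir/data/test/sdi_check_*.sdi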
The table data and index files must be placed in the schema
directory for the import server prior to the import operation,
unless the table as defined on the export server uses the
DATA DIRECTORY or INDEX
DIRECTORY table options. In that case, modify the
import procedure using one of these alternatives before
executing the IMPORT TABLE
statement:
Put the data and index files into the same directory on the import server host as on the export server host, and create symlinks in the import server schema directory to those files.
Put the data and index files into an import server host
directory different from that on the export server host, and
create symlinks in the import server schema directory to
those files. In addition, modify the
.sdi file to reflect the different file
locations.
Put the data and index files into the schema directory on
the import server host, and modify the
.sdi file to remove the data and index
directory table options.
Any collation IDs stored in the .sdi file
must refer to the same collations on the export and import
servers.
Trigger information for a table is not serialized into the table
.sdi file, so triggers are not restored by
the import operation.
Some edits to an .sdi file are permissible
prior to executing the IMPORT
TABLE statement, whereas others are problematic or may
even cause the import operation to fail:
Changing the data directory and index directory table options is required if the locations of the data and index files differ between the export and import servers.
Changing the schema name is required to import the table into a different schema on the import server than on the export server.
Changing schema and table names may be required to
accommodate differences between file system case-sensitivity
semantics on the export and import servers or differences in
lower_case_table_names
settings. Changing the table names in the
.sdi file may require renaming the
table files as well.
In some cases, changes to column definitions are permitted. Changing data types is likely to cause problems.
INSERT [LOW_PRIORITY | DELAYED | HIGH_PRIORITY] [IGNORE]
[INTO] tbl_name
[PARTITION (partition_name [, partition_name] ...)]
[(col_name [, col_name] ...)]
{ {VALUES | VALUE} (value_list) [, (value_list)] ...
|
VALUES row_constructor_list
}
[AS row_alias[(col_alias [, col_alias] ...)]]
[ON DUPLICATE KEY UPDATE assignment_list]
INSERT [LOW_PRIORITY | DELAYED | HIGH_PRIORITY] [IGNORE]
[INTO] tbl_name
[PARTITION (partition_name [, partition_name] ...)]
[AS row_alias[(col_alias [, col_alias] ...)]]
SET assignment_list
[ON DUPLICATE KEY UPDATE assignment_list]
INSERT [LOW_PRIORITY | HIGH_PRIORITY] [IGNORE]
[INTO] tbl_name
[PARTITION (partition_name [, partition_name] ...)]
[(col_name [, col_name] ...)]
[AS row_alias[(col_alias [, col_alias] ...)]]
{SELECT ... | TABLE table_name}
[ON DUPLICATE KEY UPDATE assignment_list]
value:
{expr | DEFAULT}
value_list:
value [, value] ...
row_constructor_list:
ROW(value_list)[, ROW(value_list)][, ...]
assignment:
col_name = [row_alias.]value
assignment_list:
assignment [, assignment] ...
INSERT inserts new rows into an
existing table. The INSERT
... VALUES,
INSERT ... VALUES
ROW(), and
INSERT ... SET
forms of the statement insert rows based on explicitly specified
values. The INSERT
... SELECT form inserts rows selected from another table
or tables. You can also use
INSERT ... TABLE
in MySQL 8.0.19 and later to insert rows from a single table.
INSERT with an ON
DUPLICATE KEY UPDATE clause enables existing rows to be
updated if a row to be inserted would cause a duplicate value in a
UNIQUE index or PRIMARY KEY.
In MySQL 8.0.19 and later, a row alias with one or more optional
column aliases can be used with ON DUPLICATE KEY
UPDATE to refer to the row to be inserted.
For additional information about
INSERT ...
SELECT and
INSERT ... ON
DUPLICATE KEY UPDATE, see
Section 13.2.6.1, “INSERT ... SELECT Statement”, and
Section 13.2.6.2, “INSERT ... ON DUPLICATE KEY UPDATE Statement”.
In MySQL 8.0, the DELAYED keyword
is accepted but ignored by the server. For the reasons for this,
see Section 13.2.6.3, “INSERT DELAYED Statement”.
Inserting into a table requires the
INSERT privilege for the table. If
the ON DUPLICATE KEY UPDATE clause is used and
a duplicate key causes an UPDATE to
be performed instead, the statement requires the
UPDATE privilege for the columns to
be updated. For columns that are read but not modified you need
only the SELECT privilege (such as
for a column referenced only on the right hand side of an
col_name=expr
assignment in an ON DUPLICATE KEY UPDATE
clause).
When inserting into a partitioned table, you can control which
partitions and subpartitions accept new rows. The
PARTITION option takes a list of the
comma-separated names of one or more partitions or subpartitions
(or both) of the table. If any of the rows to be inserted by a
given INSERT statement do not match
one of the partitions listed, the
INSERT statement fails with the
error Found a row not matching the given partition
set. For more information and examples, see
Section 23.5, “Partition Selection”.
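For example, given a partitioned table t with partitions named p0 and p1 (hypothetical names), the following statement succeeds only if every inserted row belongs to one of the two named partitions:
INSERT INTO t PARTITION (p0, p1) VALUES (1,'a'), (2,'b');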
You can use REPLACE instead of
INSERT to overwrite old rows.
REPLACE is the counterpart to
INSERT IGNORE in
the treatment of new rows that contain unique key values that
duplicate old rows: The new rows replace the old rows rather than
being discarded. See Section 13.2.9, “REPLACE Statement”.
tbl_name is the table into which rows
should be inserted. Specify the columns for which the statement
provides values as follows:
Provide a parenthesized list of comma-separated column names
following the table name. In this case, a value for each named
column must be provided by the VALUES list,
VALUES ROW()
list, or SELECT statement. For
the INSERT TABLE form, the number of
columns in the source table must match the number of columns
to be inserted.
If you do not specify a list of column names for
INSERT ...
VALUES or
INSERT ...
SELECT, values for every column in the table must be
provided by the VALUES list,
SELECT statement, or
TABLE statement. If you do not
know the order of the columns in the table, use
DESCRIBE tbl_name
to find out.
A SET clause indicates columns explicitly
by name, together with the value to assign each one.
Column values can be given in several ways:
If strict SQL mode is not enabled, any column not explicitly given a value is set to its default (explicit or implicit) value. For example, if you specify a column list that does not name all the columns in the table, unnamed columns are set to their default values. Default value assignment is described in Section 11.6, “Data Type Default Values”. See also Section 1.8.3.3, “Enforced Constraints on Invalid Data”.
If strict SQL mode is enabled, an
INSERT statement generates an
error if it does not specify an explicit value for every
column that has no default value. See
Section 5.1.11, “Server SQL Modes”.
If both the column list and the VALUES list
are empty, INSERT creates a row
with each column set to its default value:
INSERT INTO tbl_name () VALUES();
If strict mode is not enabled, MySQL uses the implicit default value for any column that has no explicitly defined default. If strict mode is enabled, an error occurs if any column has no default value.
Use the keyword DEFAULT to set a column
explicitly to its default value. This makes it easier to write
INSERT statements that assign
values to all but a few columns, because it enables you to
avoid writing an incomplete VALUES list
that does not include a value for each column in the table.
Otherwise, you must provide the list of column names
corresponding to each value in the VALUES
list.
If a generated column is inserted into explicitly, the only
permitted value is DEFAULT. For information
about generated columns, see
Section 13.1.20.8, “CREATE TABLE and Generated Columns”.
In expressions, you can use
DEFAULT(col_name)
to produce the default value for column
col_name.
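As a brief sketch of both forms, assuming a table t1 whose column b has a default value (names are hypothetical):
INSERT INTO t1 (a, b) VALUES (1, DEFAULT);
INSERT INTO t1 SET a = 2, b = DEFAULT(b);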
Type conversion of an expression
expr that provides a column value
might occur if the expression data type does not match the
column data type. Conversion of a given value can result in
different inserted values depending on the column type. For
example, inserting the string '1999.0e-2'
into an INT,
FLOAT,
DECIMAL(10,6), or
YEAR column inserts the value
1999, 19.9921,
19.992100, or 1999,
respectively. The value stored in the
INT and
YEAR columns is
1999 because the string-to-number
conversion looks only at as much of the initial part of the
string as may be considered a valid integer or year. For the
FLOAT and
DECIMAL columns, the
string-to-number conversion considers the entire string a
valid numeric value.
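The conversions described above can be observed with a small test table (a sketch; with strict SQL mode enabled, the INT and YEAR conversions may be rejected with a truncation error rather than accepted with a warning):
CREATE TABLE conv_demo (i INT, f FLOAT, d DECIMAL(10,6), y YEAR);
INSERT INTO conv_demo VALUES ('1999.0e-2','1999.0e-2','1999.0e-2','1999.0e-2');
SELECT * FROM conv_demo;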
An expression expr can refer to any
column that was set earlier in a value list. For example, you
can do this because the value for col2
refers to col1, which has previously been
assigned:
INSERT INTO tbl_name (col1,col2) VALUES(15,col1*2);
But the following is not legal, because the value for
col1 refers to col2,
which is assigned after col1:
INSERT INTO tbl_name (col1,col2) VALUES(col2*2,15);
An exception occurs for columns that contain
AUTO_INCREMENT values. Because
AUTO_INCREMENT values are generated after
other value assignments, any reference to an
AUTO_INCREMENT column in the assignment
returns a 0.
INSERT statements that use
VALUES syntax can insert multiple rows. To do
this, include multiple lists of comma-separated column values,
with lists enclosed within parentheses and separated by commas.
Example:
INSERT INTO tbl_name (a,b,c)
VALUES(1,2,3), (4,5,6), (7,8,9);
Each values list must contain exactly as many values as are to be inserted per row. The following statement is invalid because it contains one list of nine values, rather than three lists of three values each:
INSERT INTO tbl_name (a,b,c) VALUES(1,2,3,4,5,6,7,8,9);
VALUE is a synonym for
VALUES in this context. Neither implies
anything about the number of values lists, nor about the number of
values per list. Either may be used whether there is a single
values list or multiple lists, and regardless of the number of
values per list.
INSERT statements using
VALUES ROW()
syntax can also insert multiple rows. In this case, each value
list must be contained within a ROW() (row
constructor), like this:
INSERT INTO tbl_name (a,b,c)
VALUES ROW(1,2,3), ROW(4,5,6), ROW(7,8,9);
The affected-rows value for an
INSERT can be obtained using the
ROW_COUNT() SQL function or the
mysql_affected_rows() C API
function. See Section 12.15, “Information Functions”, and
Section 28.7.6.1, “mysql_affected_rows()”.
If you use INSERT ...
VALUES or INSERT ... VALUES ROW()
with multiple value lists, or
INSERT ...
SELECT or INSERT ... TABLE, the
statement returns an information string in this format:
Records: N1  Duplicates: N2  Warnings: N3
If you are using the C API, the information string can be obtained
by invoking the mysql_info()
function. See Section 28.7.6.36, “mysql_info()”.
Records indicates the number of rows processed
by the statement. (This is not necessarily the number of rows
actually inserted because Duplicates can be
nonzero.) Duplicates indicates the number of
rows that could not be inserted because they would duplicate some
existing unique index value. Warnings indicates
the number of attempts to insert column values that were
problematic in some way. Warnings can occur under any of the
following conditions:
Inserting NULL into a column that has been
declared NOT NULL. For multiple-row
INSERT statements or
INSERT INTO ...
SELECT statements, the column is set to the implicit
default value for the column data type. This is
0 for numeric types, the empty string
('') for string types, and the
“zero” value for date and time types.
INSERT INTO ...
SELECT statements are handled the same way as
multiple-row inserts because the server does not examine the
result set from the SELECT to
see whether it returns a single row. (For a single-row
INSERT, no warning occurs when
NULL is inserted into a NOT
NULL column. Instead, the statement fails with an
error.)
Setting a numeric column to a value that lies outside the column's range. The value is clipped to the closest endpoint of the range.
Assigning a value such as '10.34 a' to a
numeric column. The trailing nonnumeric text is stripped off
and the remaining numeric part is inserted. If the string
value has no leading numeric part, the column is set to
0.
Inserting a string into a string column
(CHAR,
VARCHAR,
TEXT, or
BLOB) that exceeds the column's
maximum length. The value is truncated to the column's maximum
length.
Inserting a value into a date or time column that is illegal for the data type. The column is set to the appropriate zero value for the type.
For INSERT examples involving
AUTO_INCREMENT column values, see
Section 3.6.9, “Using AUTO_INCREMENT”.
If INSERT inserts a row into a
table that has an AUTO_INCREMENT column,
you can find the value used for that column by using the
LAST_INSERT_ID() SQL function
or the mysql_insert_id() C API
function.
These two functions do not always behave identically. The
behavior of INSERT statements
with respect to AUTO_INCREMENT columns is
discussed further in
Section 12.15, “Information Functions”, and
Section 28.7.6.38, “mysql_insert_id()”.
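For example, assuming a table t1 with an AUTO_INCREMENT primary key and a name column (hypothetical names), the generated value can be retrieved immediately after the insert:
INSERT INTO t1 (name) VALUES ('example');
SELECT LAST_INSERT_ID();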
The INSERT statement supports the
following modifiers:
If you use the LOW_PRIORITY modifier,
execution of the INSERT is
delayed until no other clients are reading from the table.
This includes other clients that began reading while existing
clients are reading, and while the INSERT
LOW_PRIORITY statement is waiting. It is possible,
therefore, for a client that issues an INSERT
LOW_PRIORITY statement to wait for a very long time.
LOW_PRIORITY affects only storage engines
that use only table-level locking (such as
MyISAM, MEMORY, and
MERGE).
LOW_PRIORITY should normally not be used
with MyISAM tables because doing so
disables concurrent inserts. See
Section 8.11.3, “Concurrent Inserts”.
If you specify HIGH_PRIORITY, it overrides
the effect of the
--low-priority-updates option
if the server was started with that option. It also causes
concurrent inserts not to be used. See
Section 8.11.3, “Concurrent Inserts”.
HIGH_PRIORITY affects only storage engines
that use only table-level locking (such as
MyISAM, MEMORY, and
MERGE).
If you use the IGNORE modifier, errors that
occur while executing the
INSERT statement are ignored.
For example, without IGNORE, a row that
duplicates an existing UNIQUE index or
PRIMARY KEY value in the table causes a
duplicate-key error and the statement is aborted. With
IGNORE, the row is discarded and no error
occurs. Ignored errors generate warnings instead.
IGNORE has a similar effect on inserts into
partitioned tables where no partition matching a given value
is found. Without IGNORE, such
INSERT statements are aborted
with an error. When
INSERT
IGNORE is used, the insert operation fails silently
for rows containing the unmatched value, but inserts rows that
are matched. For an example, see
Section 23.2.2, “LIST Partitioning”.
Data conversions that would trigger errors abort the statement
if IGNORE is not specified. With
IGNORE, invalid values are adjusted to the
closest values and inserted; warnings are produced but the
statement does not abort. You can determine with the
mysql_info() C API function
how many rows were actually inserted into the table.
For more information, see Comparison of the IGNORE Keyword and Strict SQL Mode.
If you specify ON DUPLICATE KEY UPDATE, and
a row is inserted that would cause a duplicate value in a
UNIQUE index or PRIMARY
KEY, an UPDATE of the
old row occurs. The affected-rows value per row is 1 if the
row is inserted as a new row, 2 if an existing row is updated,
and 0 if an existing row is set to its current values. If you
specify the CLIENT_FOUND_ROWS flag to the
mysql_real_connect() C API
function when connecting to mysqld, the
affected-rows value is 1 (not 0) if an existing row is set to
its current values. See Section 13.2.6.2, “INSERT ... ON DUPLICATE KEY UPDATE Statement”.
INSERT DELAYED was deprecated
in MySQL 5.6, and is scheduled for eventual removal. In MySQL
8.0, the DELAYED modifier is
accepted but ignored. Use INSERT (without
DELAYED) instead. See
Section 13.2.6.3, “INSERT DELAYED Statement”.
INSERT [LOW_PRIORITY | HIGH_PRIORITY] [IGNORE]
[INTO] tbl_name
[PARTITION (partition_name [, partition_name] ...)]
[(col_name [, col_name] ...)]
{SELECT ... | TABLE table_name}
[ON DUPLICATE KEY UPDATE assignment_list]
value:
{expr | DEFAULT}
assignment:
col_name = value
assignment_list:
assignment [, assignment] ...
With INSERT ...
SELECT, you can quickly insert many rows into a table
from the result of a SELECT
statement, which can select from one or many tables. For
example:
INSERT INTO tbl_temp2 (fld_id) SELECT tbl_temp1.fld_order_id FROM tbl_temp1 WHERE tbl_temp1.fld_order_id > 100;
Beginning with MySQL 8.0.19, you can use a
TABLE statement in place of
SELECT, as shown here:
INSERT INTO ta TABLE tb;
TABLE tb is equivalent to SELECT *
FROM tb. It can be useful when inserting all columns
from the source table into the target table, and no filtering
with WHERE is required. In addition, the rows from
TABLE can be ordered by one or
more columns using ORDER BY, and the number
of rows inserted can be limited using a LIMIT
clause. For more information, see Section 13.2.12, “TABLE Statement”.
The following conditions hold for
INSERT ...
SELECT statements, and, except where noted, for
INSERT ... TABLE as well:
Specify IGNORE to ignore rows that would
cause duplicate-key violations.
The target table of the
INSERT statement may appear
in the FROM clause of the
SELECT part of the query, or
as the table named by TABLE.
However, you cannot insert into a table and select from the
same table in a subquery.
When selecting from and inserting into the same table, MySQL
creates an internal temporary table to hold the rows from
the SELECT and then inserts
those rows into the target table. However, you cannot use
INSERT INTO t ... SELECT ... FROM t when
t is a TEMPORARY
table, because TEMPORARY tables cannot be
referred to twice in the same statement. For the same
reason, you cannot use INSERT INTO t ... TABLE
t when t is a temporary table.
See Section 8.4.4, “Internal Temporary Table Use in MySQL”, and
Section B.4.6.2, “TEMPORARY Table Problems”.
AUTO_INCREMENT columns work as usual.
To ensure that the binary log can be used to re-create the
original tables, MySQL does not permit concurrent inserts
for INSERT
... SELECT or INSERT ... TABLE
statements (see Section 8.11.3, “Concurrent Inserts”).
To avoid ambiguous column reference problems when the
SELECT and the
INSERT refer to the same
table, provide a unique alias for each table used in the
SELECT part, and qualify
column names in that part with the appropriate alias.
The TABLE statement does not
support aliases.
You can explicitly select which partitions or subpartitions (or
both) of the source or target table (or both) are to be used
with a PARTITION option following the name of
the table. When PARTITION is used with the
name of the source table in the
SELECT portion of the statement,
rows are selected only from the partitions or subpartitions
named in its partition list. When PARTITION
is used with the name of the target table for the
INSERT portion of the statement,
it must be possible to insert all rows selected into the
partitions or subpartitions named in the partition list
following the option. Otherwise, the INSERT ...
SELECT statement fails. For more information and
examples, see Section 23.5, “Partition Selection”.
TABLE does not support a
PARTITION option.
For INSERT
... SELECT statements, see
Section 13.2.6.2, “INSERT ... ON DUPLICATE KEY UPDATE Statement” for conditions under which
the SELECT columns can be
referred to in an ON DUPLICATE KEY UPDATE
clause. This also works for INSERT ... TABLE.
The order in which a SELECT or
TABLE statement with no
ORDER BY clause returns rows is
nondeterministic. This means that, when using replication, there
is no guarantee that such a
SELECT returns rows in the same
order on the master and the slave, which can lead to
inconsistencies between them. To prevent this from occurring,
always write INSERT ... SELECT or
INSERT ... TABLE statements that are to be
replicated using an ORDER BY clause that
produces the same row order on the master and the slave. See
also Section 17.5.1.18, “Replication and LIMIT”.
Due to this issue,
INSERT ...
SELECT ON DUPLICATE KEY UPDATE and
INSERT IGNORE ...
SELECT statements are flagged as unsafe for
statement-based replication. Such statements produce a warning
in the error log when using statement-based mode and are written
to the binary log using the row-based format when using
MIXED mode. (Bug #11758262, Bug #50439)
See also Section 17.2.1.1, “Advantages and Disadvantages of Statement-Based and Row-Based Replication”.
If you specify an ON DUPLICATE KEY UPDATE
clause and a row to be inserted would cause a duplicate value in
a UNIQUE index or PRIMARY
KEY, an UPDATE of the
old row occurs. For example, if column a is
declared as UNIQUE and contains the value
1, the following two statements have similar
effect:
INSERT INTO t1 (a,b,c) VALUES (1,2,3)
  ON DUPLICATE KEY UPDATE c=c+1;

UPDATE t1 SET c=c+1 WHERE a=1;
(The effects are not identical for an InnoDB
table where a is an auto-increment column.
With an auto-increment column, an INSERT
statement increases the auto-increment value but
UPDATE does not.)
If column b is also unique, the
INSERT is equivalent to this
UPDATE statement instead:
UPDATE t1 SET c=c+1 WHERE a=1 OR b=2 LIMIT 1;
If a=1 OR b=2 matches several rows, only
one row is updated. In general, you should
try to avoid using an ON DUPLICATE KEY UPDATE
clause on tables with multiple unique indexes.
With ON DUPLICATE KEY UPDATE, the
affected-rows value per row is 1 if the row is inserted as a new
row, 2 if an existing row is updated, and 0 if an existing row
is set to its current values. If you specify the
CLIENT_FOUND_ROWS flag to the
mysql_real_connect() C API
function when connecting to mysqld, the
affected-rows value is 1 (not 0) if an existing row is set to
its current values.
If a table contains an AUTO_INCREMENT column
and INSERT
... ON DUPLICATE KEY UPDATE inserts or updates a row,
the LAST_INSERT_ID() function
returns the AUTO_INCREMENT value.
The ON DUPLICATE KEY UPDATE clause can
contain multiple column assignments, separated by commas.
In assignment value expressions in the ON DUPLICATE KEY
UPDATE clause, you can use the
VALUES(col_name)
function to refer to column values from the
INSERT portion of the
INSERT ...
ON DUPLICATE KEY UPDATE statement. In other words,
VALUES(col_name)
in the ON DUPLICATE KEY UPDATE clause refers
to the value of col_name that would
be inserted, had no duplicate-key conflict occurred. This
function is especially useful in multiple-row inserts. The
VALUES() function is meaningful
only in the ON DUPLICATE KEY UPDATE clause or
INSERT statements and returns
NULL otherwise. Example:
INSERT INTO t1 (a,b,c) VALUES (1,2,3),(4,5,6) ON DUPLICATE KEY UPDATE c=VALUES(a)+VALUES(b);
That statement is identical to the following two statements:
INSERT INTO t1 (a,b,c) VALUES (1,2,3)
  ON DUPLICATE KEY UPDATE c=3;
INSERT INTO t1 (a,b,c) VALUES (4,5,6)
  ON DUPLICATE KEY UPDATE c=9;
The use of VALUES() to refer to
the new row and columns is deprecated beginning with MySQL
8.0.20, and is subject to removal in a future version of
MySQL. Instead, use row and column aliases, as described in
the next few paragraphs of this section.
Beginning with MySQL 8.0.19, it is possible to use an alias for
the row, with, optionally, one or more of its columns to be
inserted, following the VALUES or
SET clause, and preceded by the
AS keyword. Using the row alias
new, the statement shown previously using
VALUES() to access the new column values can
be written in the form shown here:
INSERT INTO t1 (a,b,c) VALUES (1,2,3),(4,5,6) AS new ON DUPLICATE KEY UPDATE c = new.a+new.b;
If, in addition, you use the column aliases
m, n, and
p, you can omit the row alias in the
assignment clause and write the same statement like this:
INSERT INTO t1 (a,b,c) VALUES (1,2,3),(4,5,6) AS new(m,n,p) ON DUPLICATE KEY UPDATE c = m+n;
When using column aliases in this fashion, you must still use a
row alias following the VALUES clause, even
if you do not make direct use of it in the assignment clause.
You can also use row and column aliases with a
SET clause, as mentioned previously.
Employing SET instead of
VALUES in the two INSERT ... ON
DUPLICATE KEY UPDATE statements just shown can be done
as shown here:
INSERT INTO t1 SET a=1,b=2,c=3 AS new
  ON DUPLICATE KEY UPDATE c = new.a+new.b;
INSERT INTO t1 SET a=1,b=2,c=3 AS new(m,n,p)
  ON DUPLICATE KEY UPDATE c = m+n;
The row alias must not be the same as the name of the table. If
column aliases are not used, or if they are the same as the
column names, they must be distinguished using the row alias in
the ON DUPLICATE KEY UPDATE clause. Column
aliases must be unique with regard to the row alias to which
they apply (that is, no column aliases referring to columns of
the same row may be the same).
For INSERT
... SELECT statements, these rules apply regarding
acceptable forms of SELECT query expressions
that you can refer to in an ON DUPLICATE KEY
UPDATE clause:
References to columns from queries on a single table, which may be a derived table.
References to columns from queries on a join over multiple tables.
References to columns from DISTINCT
queries.
References to columns in other tables, as long as the
SELECT does not use
GROUP BY. One side effect is that you
must qualify references to nonunique column names.
References to columns from a
UNION are not supported. To work
around this restriction, rewrite the
UNION as a derived table so that
its rows can be treated as a single-table result set. For
example, this statement produces an error:
INSERT INTO t1 (a, b) SELECT c, d FROM t2 UNION SELECT e, f FROM t3 ON DUPLICATE KEY UPDATE b = b + c;
Instead, use an equivalent statement that rewrites the
UNION as a derived table:
INSERT INTO t1 (a, b) SELECT * FROM (SELECT c, d FROM t2 UNION SELECT e, f FROM t3) AS dt ON DUPLICATE KEY UPDATE b = b + c;
The technique of rewriting a query as a derived table also
enables references to columns from GROUP BY
queries.
Because the results of
INSERT ...
SELECT statements depend on the ordering of rows from
the SELECT and this order cannot
always be guaranteed, it is possible when logging
INSERT ...
SELECT ON DUPLICATE KEY UPDATE statements for the
master and the slave to diverge. Thus,
INSERT ...
SELECT ON DUPLICATE KEY UPDATE statements are flagged
as unsafe for statement-based replication. Such statements
produce a warning in the error log when using statement-based
mode and are written to the binary log using the row-based
format when using MIXED mode. An
INSERT ...
ON DUPLICATE KEY UPDATE statement against a table
having more than one unique or primary key is also marked as
unsafe. (Bug #11765650, Bug #58637)
See also Section 17.2.1.1, “Advantages and Disadvantages of Statement-Based and Row-Based Replication”.
INSERT DELAYED ...
The DELAYED option for the
INSERT statement is a MySQL
extension to standard SQL. In previous versions of MySQL, it can
be used for certain kinds of tables (such as
MyISAM), such that when a client uses
INSERT DELAYED, it gets an okay
from the server at once, and the row is queued to be inserted
when the table is not in use by any other thread.
DELAYED inserts and replaces were deprecated
in MySQL 5.6. In MySQL 8.0,
DELAYED is not supported. The server
recognizes but ignores the DELAYED keyword,
handles the insert as a nondelayed insert, and generates an
ER_WARN_LEGACY_SYNTAX_CONVERTED warning
(“INSERT DELAYED is no longer supported. The statement was
converted to INSERT”). The DELAYED
keyword is scheduled for removal in a future release.
LOAD DATA
[LOW_PRIORITY | CONCURRENT] [LOCAL]
INFILE 'file_name'
[REPLACE | IGNORE]
INTO TABLE tbl_name
[PARTITION (partition_name [, partition_name] ...)]
[CHARACTER SET charset_name]
[{FIELDS | COLUMNS}
[TERMINATED BY 'string']
[[OPTIONALLY] ENCLOSED BY 'char']
[ESCAPED BY 'char']
]
[LINES
[STARTING BY 'string']
[TERMINATED BY 'string']
]
[IGNORE number {LINES | ROWS}]
[(col_name_or_user_var
[, col_name_or_user_var] ...)]
[SET col_name={expr | DEFAULT}
[, col_name={expr | DEFAULT}] ...]
The LOAD DATA statement reads rows
from a text file into a table at a very high speed.
LOAD DATA is the complement of
SELECT ... INTO
OUTFILE. (See Section 13.2.10.1, “SELECT ... INTO Statement”.) To write
data from a table to a file, use
SELECT ... INTO
OUTFILE. To read the file back into a table, use
LOAD DATA. The syntax of the
FIELDS and LINES clauses is
the same for both statements.
You can also load data files by using the
mysqlimport utility; see
Section 4.5.5, “mysqlimport — A Data Import Program”. mysqlimport
operates by sending a LOAD DATA
statement to the server.
For more information about the efficiency of
INSERT versus
LOAD DATA and speeding up
LOAD DATA, see
Section 8.2.5.1, “Optimizing INSERT Statements”.
LOAD DATA supports explicit
partition selection using the PARTITION
option with a list of one or more comma-separated names of
partitions, subpartitions, or both. When this option is used, if
any rows from the file cannot be inserted into any of the
partitions or subpartitions named in the list, the statement
fails with the error Found a row not matching the
given partition set. For more information and
examples, see Section 23.5, “Partition Selection”.
The file name must be given as a literal string. On Windows,
specify backslashes in path names as forward slashes or doubled
backslashes. The
character_set_filesystem system
variable controls the interpretation of the file name character
set.
The server uses the character set indicated by the
character_set_database system
variable to interpret the information in the file.
SET NAMES and the setting of
character_set_client do not
affect interpretation of input. If the contents of the input
file use a character set that differs from the default, it is
usually preferable to specify the character set of the file by
using the CHARACTER SET clause. A character
set of binary specifies “no
conversion.”
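For example, a data file written in utf8mb4 might be loaded like this (the file and table names are placeholders):
LOAD DATA INFILE '/tmp/city_names.txt' INTO TABLE city_names CHARACTER SET utf8mb4;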
LOAD DATA interprets all fields
in the file as having the same character set, regardless of the
data types of the columns into which field values are loaded.
For proper interpretation of file contents, you must ensure that
it was written with the correct character set. For example, if
you write a data file with mysqldump -T or by
issuing a SELECT
... INTO OUTFILE statement in
mysql, be sure to use a
--default-character-set option so that output
is written in the character set to be used when the file is
loaded with LOAD DATA.
It is not possible to load data files that use the
ucs2, utf16,
utf16le, or utf32
character set.
The LOCAL modifier affects expected location
of the file and error handling, as described later.
LOCAL works only if your server and your
client both have been configured to permit it. For example, if
mysqld was started with the
local_infile system variable
disabled, LOCAL does not work. See
Section 6.1.6, “Security Considerations for LOAD DATA LOCAL”.
The LOCAL modifier affects where the file is
expected to be found:
If LOCAL is specified, the file is read
by the client program on the client host and sent to the
server. The file can be given as a full path name to specify
its exact location. If given as a relative path name, the
name is interpreted relative to the directory in which the
client program was started.
When using LOCAL with
LOAD DATA, a copy of the file
is created in the directory where the MySQL server stores
temporary files. See Section B.4.3.5, “Where MySQL Stores Temporary Files”. Lack
of sufficient space for the copy in this directory can cause
the LOAD DATA
LOCAL statement to fail.
If LOCAL is not specified, the file must
be located on the server host and is read directly by the
server. The server uses the following rules to locate the
file:
If the file name is an absolute path name, the server uses it as given.
If the file name is a relative path name with one or more leading components, the server searches for the file relative to the server's data directory.
If a file name with no leading components is given, the server looks for the file in the database directory of the default database.
In the non-LOCAL case, these rules mean that
a file named as ./myfile.txt is read from
the server's data directory, whereas the file named as
myfile.txt is read from the database
directory of the default database. For example, if
db1 is the default database, the following
LOAD DATA statement reads the
file data.txt from the database directory
for db1, even though the statement explicitly
loads the file into a table in the db2
database:
LOAD DATA INFILE 'data.txt' INTO TABLE db2.my_table;
The server also uses the non-LOCAL rules to
locate .sdi files for the
IMPORT TABLE statement.
Non-LOCAL load operations read text files
located on the server. For security reasons, such operations
require that you have the FILE
privilege. See Section 6.2.2, “Privileges Provided by MySQL”. Also,
non-LOCAL load operations are subject to the
secure_file_priv system
variable setting. If the variable value is a nonempty directory
name, the file to be loaded must be located in that directory.
If the variable value is empty (which is insecure), the file
need only be readable by the server.
Using LOCAL is a bit slower than letting the
server access the files directly, because the file contents must
be sent over the connection by the client to the server. On the
other hand, you do not need the
FILE privilege to load local
files.
LOCAL also affects error handling:
With LOAD DATA,
data-interpretation and duplicate-key errors terminate the
operation.
With LOAD DATA
LOCAL, data-interpretation and duplicate-key
errors become warnings and the operation continues because
the server has no way to stop transmission of the file in
the middle of the operation. For duplicate-key errors, this
is the same as if IGNORE is specified.
IGNORE is explained further later in this
section.
LOAD DATA is considered unsafe
for statement-based replication. If you do use
LOAD DATA when
binlog_format=STATEMENT is set,
a temporary file containing the data is created on the
replication slave where the changes are applied. If binary log
encryption is active on the server, note that this temporary
file is not encrypted. When encryption is required, be sure to
use row-based or mixed binary logging format instead, which do
not create the temporary files. For more information on the
interaction between LOAD DATA and
replication, see
Section 17.5.1.19, “Replication and LOAD DATA”.
If you use the LOW_PRIORITY modifier,
execution of the LOAD DATA
statement is delayed until no other clients are reading from the
table. This affects only storage engines that use only
table-level locking (such as MyISAM,
MEMORY, and MERGE).
If you specify the CONCURRENT modifier with a
MyISAM table that satisfies the condition for
concurrent inserts (that is, it contains no free blocks in the
middle), other threads can retrieve data from the table while
LOAD DATA is executing. This
modifier affects the performance of LOAD
DATA a bit, even if no other thread is using the table
at the same time.
The REPLACE and IGNORE
modifiers control handling of input rows that duplicate existing
rows on unique key values:
If you specify REPLACE, input rows
replace existing rows; that is, an input row that has the
same value for a primary key or unique index as an existing
row replaces that row. See Section 13.2.9, “REPLACE Statement”.
If you specify IGNORE, rows that
duplicate an existing row on a unique key value are
discarded. For more information, see
Comparison of the IGNORE Keyword and Strict SQL Mode.
If you do not specify either modifier, the behavior depends
on whether the LOCAL modifier is
specified. Without LOCAL, an error occurs
when a duplicate key value is found, and the rest of the
text file is ignored. With LOCAL, the
default behavior is the same as if IGNORE
is specified; this is because the server has no way to stop
transmission of the file in the middle of the operation.
To ignore foreign key constraints during the load operation,
execute a SET foreign_key_checks = 0
statement before executing LOAD
DATA.
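For example, using a hypothetical child table and data file:
SET foreign_key_checks = 0;
LOAD DATA INFILE '/tmp/child_rows.txt' INTO TABLE child;
SET foreign_key_checks = 1;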
If you use LOAD DATA on an empty
MyISAM table, all nonunique indexes are
created in a separate batch (as for REPAIR
TABLE). Normally, this makes LOAD
DATA much faster when you have many indexes. In some
extreme cases, you can create the indexes even faster by turning
them off with ALTER
TABLE ... DISABLE KEYS before loading the file into
the table and using
ALTER TABLE ...
ENABLE KEYS to re-create the indexes after loading the
file. See Section 8.2.5.1, “Optimizing INSERT Statements”.
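A sketch of that sequence, assuming a MyISAM table t1 and a data file /tmp/t1.txt (both hypothetical):
ALTER TABLE t1 DISABLE KEYS;
LOAD DATA INFILE '/tmp/t1.txt' INTO TABLE t1;
ALTER TABLE t1 ENABLE KEYS;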
For both the LOAD DATA and
SELECT ... INTO
OUTFILE statements, the syntax of the
FIELDS and LINES clauses
is the same. Both clauses are optional, but
FIELDS must precede LINES
if both are specified.
If you specify a FIELDS clause, each of its
subclauses (TERMINATED BY,
[OPTIONALLY] ENCLOSED BY, and
ESCAPED BY) is also optional, except that you
must specify at least one of them. Arguments to these clauses
are permitted to contain only ASCII characters.
If you specify no FIELDS or
LINES clause, the defaults are the same as if
you had written this:
FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\'
LINES TERMINATED BY '\n' STARTING BY ''
Backslash is the MySQL escape character within strings in SQL
statements. Thus, to specify a literal backslash, you must
specify two backslashes for the value to be interpreted as a
single backslash. The escape sequences '\t'
and '\n' specify tab and newline characters,
respectively.
In other words, the defaults cause LOAD
DATA to act as follows when reading input:
Look for line boundaries at newlines.
Do not skip any line prefix.
Break lines into fields at tabs.
Do not expect fields to be enclosed within any quoting characters.
Interpret characters preceded by the escape character
\ as escape sequences. For example,
\t, \n, and
\\ signify tab, newline, and backslash,
respectively. See the discussion of FIELDS ESCAPED
BY later for the full list of escape sequences.
Conversely, the defaults cause
SELECT ... INTO
OUTFILE to act as follows when writing output:
Write tabs between fields.
Do not enclose fields within any quoting characters.
Use \ to escape instances of tab,
newline, or \ that occur within field
values.
Write newlines at the ends of lines.
For a text file generated on a Windows system, proper file
reading might require LINES TERMINATED BY
'\r\n' because Windows programs typically use two
characters as a line terminator. Some programs, such as
WordPad, might use \r as
a line terminator when writing files. To read such files, use
LINES TERMINATED BY '\r'.
If all the input lines have a common prefix that you want to
ignore, you can use LINES STARTING BY
'prefix_string' to skip the
prefix and anything before it. If a line
does not include the prefix, the entire line is skipped. Suppose
that you issue the following statement:
LOAD DATA INFILE '/tmp/test.txt' INTO TABLE test
  FIELDS TERMINATED BY ',' LINES STARTING BY 'xxx';
If the data file looks like this:
xxx"abc",1 something xxx"def",2 "ghi",3
The resulting rows will be ("abc",1) and
("def",2). The third row in the file is
skipped because it does not contain the prefix.
The IGNORE number LINES option can
be used to ignore lines at the start of the file. For example,
you can use IGNORE 1 LINES to skip an
initial header line containing column names:
LOAD DATA INFILE '/tmp/test.txt' INTO TABLE test IGNORE 1 LINES;
When you use SELECT
... INTO OUTFILE in tandem with
LOAD DATA to write data from a
database into a file and then read the file back into the
database later, the field- and line-handling options for both
statements must match. Otherwise, LOAD
DATA will not interpret the contents of the file
properly. Suppose that you use
SELECT ... INTO
OUTFILE to write a file with fields delimited by
commas:
SELECT * INTO OUTFILE 'data.txt' FIELDS TERMINATED BY ',' FROM table2;
To read the comma-delimited file, the correct statement would be:
LOAD DATA INFILE 'data.txt' INTO TABLE table2 FIELDS TERMINATED BY ',';
If instead you tried to read the file with the statement shown
following, it wouldn't work because it instructs
LOAD DATA to look for tabs
between fields:
LOAD DATA INFILE 'data.txt' INTO TABLE table2 FIELDS TERMINATED BY '\t';
The likely result is that each input line would be interpreted as a single field.
LOAD DATA can be used to read
files obtained from external sources. For example, many programs
can export data in comma-separated values (CSV) format, such
that lines have fields separated by commas and enclosed within
double quotation marks, with an initial line of column names. If
the lines in such a file are terminated by carriage
return/newline pairs, the statement shown here illustrates the
field- and line-handling options you would use to load the file:
LOAD DATA INFILE 'data.txt' INTO TABLE tbl_name
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES;
If the input values are not necessarily enclosed within
quotation marks, use OPTIONALLY before the
ENCLOSED BY option.
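For example, a variant of the preceding statement that also
accepts unquoted input values might look like this:
LOAD DATA INFILE 'data.txt' INTO TABLE tbl_name
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  LINES TERMINATED BY '\r\n'
  IGNORE 1 LINES;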
Any of the field- or line-handling options can specify an empty
string (''). If not empty, the
FIELDS [OPTIONALLY] ENCLOSED BY and
FIELDS ESCAPED BY values must be a single
character. The FIELDS TERMINATED BY,
LINES STARTING BY, and LINES
TERMINATED BY values can be more than one character.
For example, to write lines that are terminated by carriage
return/linefeed pairs, or to read a file containing such lines,
specify a LINES TERMINATED BY '\r\n' clause.
To read a file containing jokes that are separated by lines
consisting of %%, you can do this:
CREATE TABLE jokes
  (a INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
   joke TEXT NOT NULL);
LOAD DATA INFILE '/tmp/jokes.txt' INTO TABLE jokes
  FIELDS TERMINATED BY ''
  LINES TERMINATED BY '\n%%\n' (joke);
FIELDS [OPTIONALLY] ENCLOSED BY controls
quoting of fields. For output
(SELECT ... INTO
OUTFILE), if you omit the word
OPTIONALLY, all fields are enclosed by the
ENCLOSED BY character. An example of such
output (using a comma as the field delimiter) is shown here:
"1","a string","100.20" "2","a string containing a , comma","102.20" "3","a string containing a \" quote","102.20" "4","a string containing a \", quote and comma","102.20"
If you specify OPTIONALLY, the
ENCLOSED BY character is used only to enclose
values from columns that have a string data type (such as
CHAR,
BINARY,
TEXT, or
ENUM):
1,"a string",100.20 2,"a string containing a , comma",102.20 3,"a string containing a \" quote",102.20 4,"a string containing a \", quote and comma",102.20
Occurrences of the ENCLOSED BY character
within a field value are escaped by prefixing them with the
ESCAPED BY character. Also, if you specify an
empty ESCAPED BY value, it is possible to
inadvertently generate output that cannot be read properly by
LOAD DATA. For example, the
preceding output just shown would appear as follows if the
escape character is empty. Observe that the second field in the
fourth line contains a comma following the quote, which
(erroneously) appears to terminate the field:
1,"a string",100.20 2,"a string containing a , comma",102.20 3,"a string containing a " quote",102.20 4,"a string containing a ", quote and comma",102.20
For input, the ENCLOSED BY character, if
present, is stripped from the ends of field values. (This is
true regardless of whether OPTIONALLY is
specified; OPTIONALLY has no effect on input
interpretation.) Occurrences of the ENCLOSED
BY character preceded by the ESCAPED
BY character are interpreted as part of the current
field value.
If the field begins with the ENCLOSED BY
character, instances of that character are recognized as
terminating a field value only if followed by the field or line
TERMINATED BY sequence. To avoid ambiguity,
occurrences of the ENCLOSED BY character
within a field value can be doubled and are interpreted as a
single instance of the character. For example, if
ENCLOSED BY '"' is specified, quotation marks
are handled as shown here:
"The ""BIG"" boss" -> The "BIG" boss The "BIG" boss -> The "BIG" boss The ""BIG"" boss -> The ""BIG"" boss
FIELDS ESCAPED BY controls how to read or
write special characters:
For input, if the FIELDS ESCAPED BY
character is not empty, occurrences of that character are
stripped and the following character is taken literally as
part of a field value. Exceptions are certain two-character
sequences where the first character is the escape character.
These sequences are shown in the following table (using
\ for the escape character):

Character   Escape Sequence
\0          An ASCII NUL (X'00') character
\b          A backspace character
\n          A newline (linefeed) character
\r          A carriage return character
\t          A tab character
\Z          ASCII 26 (Control+Z)
\N          NULL

The rules for NULL handling are described
later in this section.
For more information about \-escape
syntax, see Section 9.1.1, “String Literals”.
If the FIELDS ESCAPED BY character is
empty, escape-sequence interpretation does not occur.
For output, if the FIELDS ESCAPED BY
character is not empty, it is used to prefix the following
characters on output:
The FIELDS ESCAPED BY character.
The FIELDS [OPTIONALLY] ENCLOSED BY
character.
The first character of the FIELDS TERMINATED
BY and LINES TERMINATED BY
values, if the ENCLOSED BY character
is empty or unspecified.
ASCII 0 (what is actually written
following the escape character is ASCII
0, not a zero-valued byte).
If the FIELDS ESCAPED BY character is
empty, no characters are escaped and NULL
is output as NULL, not
\N. It is probably not a good idea to
specify an empty escape character, particularly if field
values in your data contain any of the characters in the
list just given.
In certain cases, field- and line-handling options interact:
If LINES TERMINATED BY is an empty string
and FIELDS TERMINATED BY is nonempty,
lines are also terminated with FIELDS TERMINATED
BY.
If the FIELDS TERMINATED BY and
FIELDS ENCLOSED BY values are both empty
(''), a fixed-row (nondelimited) format
is used. With fixed-row format, no delimiters are used
between fields (but you can still have a line terminator).
Instead, column values are read and written using a field
width wide enough to hold all values in the field. For
TINYINT,
SMALLINT,
MEDIUMINT,
INT, and
BIGINT, the field widths are
4, 6, 8, 11, and 20, respectively, no matter what the
declared display width is.
LINES TERMINATED BY is still used to
separate lines. If a line does not contain all fields, the
rest of the columns are set to their default values. If you
do not have a line terminator, you should set this to
''. In this case, the text file must
contain all fields for each row.
Fixed-row format also affects handling of
NULL values, as described later.
Fixed-size format does not work if you are using a multibyte character set.
Handling of NULL values varies according to
the FIELDS and LINES
options in use:
For the default FIELDS and
LINES values, NULL is
written as a field value of \N for
output, and a field value of \N is read
as NULL for input (assuming that the
ESCAPED BY character is
\).
If FIELDS ENCLOSED BY is not empty, a
field containing the literal word NULL as
its value is read as a NULL value. This
differs from the word NULL enclosed
within FIELDS ENCLOSED BY characters,
which is read as the string 'NULL'.
If FIELDS ESCAPED BY is empty,
NULL is written as the word
NULL.
With fixed-row format (which is used when FIELDS
TERMINATED BY and FIELDS ENCLOSED
BY are both empty), NULL is
written as an empty string. This causes both
NULL values and empty strings in the
table to be indistinguishable when written to the file
because both are written as empty strings. If you need to be
able to tell the two apart when reading the file back in,
you should not use fixed-row format.
An attempt to load NULL into a NOT
NULL column causes assignment of the implicit default
value for the column's data type and a warning, or an error in
strict SQL mode. Implicit default values are discussed in
Section 11.6, “Data Type Default Values”.
Some cases are not supported by LOAD
DATA:
Fixed-size rows (FIELDS TERMINATED BY and
FIELDS ENCLOSED BY both empty) and
BLOB or
TEXT columns.
If you specify one separator that is the same as or a prefix
of another, LOAD DATA cannot
interpret the input properly. For example, the following
FIELDS clause would cause problems:
FIELDS TERMINATED BY '"' ENCLOSED BY '"'
If FIELDS ESCAPED BY is empty, a field
value that contains an occurrence of FIELDS
ENCLOSED BY or LINES TERMINATED
BY followed by the FIELDS TERMINATED
BY value causes LOAD
DATA to stop reading a field or line too early.
This happens because LOAD
DATA cannot properly determine where the field or
line value ends.
The following example loads all columns of the
persondata table:
LOAD DATA INFILE 'persondata.txt' INTO TABLE persondata;
By default, when no column list is provided at the end of the
LOAD DATA statement, input lines
are expected to contain a field for each table column. If you
want to load only some of a table's columns, specify a column
list:
LOAD DATA INFILE 'persondata.txt' INTO TABLE persondata (col_name_or_user_var[,col_name_or_user_var] ...);
You must also specify a column list if the order of the fields in the input file differs from the order of the columns in the table. Otherwise, MySQL cannot tell how to match input fields with table columns.
Each col_name_or_user_var value is
either a column name or a user variable. With user variables,
the SET clause enables you to perform
preprocessing transformations on their values before assigning
the result to columns.
User variables in the SET clause can be used
in several ways. The following example uses the first input
column directly for the value of t1.column1,
and assigns the second input column to a user variable that is
subjected to a division operation before being used for the
value of t1.column2:
LOAD DATA INFILE 'file.txt' INTO TABLE t1 (column1, @var1) SET column2 = @var1/100;
The SET clause can be used to supply values
not derived from the input file. The following statement sets
column3 to the current date and time:
LOAD DATA INFILE 'file.txt' INTO TABLE t1 (column1, column2) SET column3 = CURRENT_TIMESTAMP;
You can also discard an input value by assigning it to a user variable and not assigning the variable to a table column:
LOAD DATA INFILE 'file.txt' INTO TABLE t1 (column1, @dummy, column2, @dummy, column3);
Use of the column/variable list and SET
clause is subject to the following restrictions:
Assignments in the SET clause should have
only column names on the left hand side of assignment
operators.
You can use subqueries in the right hand side of
SET assignments. A subquery that returns
a value to be assigned to a column may be a scalar subquery
only. Also, you cannot use a subquery to select from the
table that is being loaded.
Lines ignored by an IGNORE clause are not
processed for the column/variable list or
SET clause.
User variables cannot be used when loading data with fixed-row format because user variables do not have a display width.
When processing an input line, LOAD
DATA splits it into fields and uses the values
according to the column/variable list and the
SET clause, if they are present. Then the
resulting row is inserted into the table. If there are
BEFORE INSERT or AFTER
INSERT triggers for the table, they are activated
before or after inserting the row, respectively.
If an input line has too many fields, the extra fields are ignored and the number of warnings is incremented.
If an input line has too few fields, the table columns for which input fields are missing are set to their default values. Default value assignment is described in Section 11.6, “Data Type Default Values”.
An empty field value is interpreted differently from a missing field:
For string types, the column is set to the empty string.
For numeric types, the column is set to
0.
For date and time types, the column is set to the appropriate “zero” value for the type. See Section 11.2, “Date and Time Data Types”.
These are the same values that result if you assign an empty
string explicitly to a string, numeric, or date or time type
in an INSERT or
UPDATE statement.
Treatment of empty or incorrect field values differs from that
just described if the SQL mode is set to a restrictive value.
For example, if sql_mode is set
to TRADITIONAL, conversion of
an empty value or a value such as 'x' for a
numeric column results in an error, not conversion to 0. (With
LOCAL or IGNORE, warnings
occur rather than errors, even with a restrictive
sql_mode value, and the row is
inserted using the same closest-value behavior used for
nonrestrictive SQL modes. This occurs because the server has no
way to stop transmission of the file in the middle of the
operation.)
TIMESTAMP columns are set to the
current date and time only if there is a NULL
value for the column (that is, \N) and the
column is not declared to permit NULL values,
or if the TIMESTAMP column's
default value is the current timestamp and it is omitted from
the field list when a field list is specified.
LOAD DATA regards all input as
strings, so you cannot use numeric values for
ENUM or
SET columns the way you can with
INSERT statements. All
ENUM and
SET values must be specified as
strings.
BIT values cannot be loaded
directly using binary notation (for example,
b'011010'). To work around this, use the
SET clause to strip off the leading
b' and trailing ' and
perform a base-2 to base-10 conversion so that MySQL loads the
values into the BIT column
properly:
shell> cat /tmp/bit_test.txt
b'10'
b'1111111'

shell> mysql test
mysql> LOAD DATA INFILE '/tmp/bit_test.txt'
         INTO TABLE bit_test (@var1)
         SET b = CAST(CONV(MID(@var1, 3, LENGTH(@var1)-3), 2, 10) AS UNSIGNED);
Query OK, 2 rows affected (0.00 sec)
Records: 2  Deleted: 0  Skipped: 0  Warnings: 0

mysql> SELECT BIN(b+0) FROM bit_test;
+----------+
| BIN(b+0) |
+----------+
| 10       |
| 1111111  |
+----------+
2 rows in set (0.00 sec)
For BIT values in
0b binary notation (for example,
0b011010), use this SET
clause instead to strip off the leading 0b:
SET b = CAST(CONV(MID(@var1, 3, LENGTH(@var1)-2), 2, 10) AS UNSIGNED)
When the LOAD DATA statement
finishes, it returns an information string in the following
format:
Records: 1 Deleted: 0 Skipped: 0 Warnings: 0
Warnings occur under the same circumstances as when values are
inserted using the INSERT
statement (see Section 13.2.6, “INSERT Statement”), except that
LOAD DATA also generates warnings
when there are too few or too many fields in the input row.
You can use SHOW WARNINGS to get
a list of the first
max_error_count warnings as
information about what went wrong. See
Section 13.7.7.40, “SHOW WARNINGS Statement”.
If you are using the C API, you can get information about the
statement by calling the
mysql_info() function. See
Section 28.7.6.36, “mysql_info()”.
On Unix, if you need LOAD DATA to
read from a pipe, you can use the following technique (the
example loads a listing of the / directory
into the table db1.t1):
mkfifo /mysql/data/db1/ls.dat
chmod 666 /mysql/data/db1/ls.dat
find / -ls > /mysql/data/db1/ls.dat &
mysql -e "LOAD DATA INFILE 'ls.dat' INTO TABLE t1" db1
Here you must run the command that generates the data to be loaded and the mysql commands either on separate terminals, or run the data generation process in the background (as shown in the preceding example). If you do not do this, the pipe will block until data is read by the mysql process.
LOAD XML
[LOW_PRIORITY | CONCURRENT] [LOCAL]
INFILE 'file_name'
[REPLACE | IGNORE]
INTO TABLE [db_name.]tbl_name
[CHARACTER SET charset_name]
[ROWS IDENTIFIED BY '<tagname>']
[IGNORE number {LINES | ROWS}]
[(field_name_or_user_var
[, field_name_or_user_var] ...)]
[SET col_name={expr | DEFAULT},
[, col_name={expr | DEFAULT}] ...]
The LOAD XML statement reads data
from an XML file into a table. The
file_name must be given as a literal
string. The tagname in the optional
ROWS IDENTIFIED BY clause must also be given as
a literal string, and must be surrounded by angle brackets
(< and >).
LOAD XML acts as the complement of
running the mysql client in XML output mode
(that is, starting the client with the
--xml option). To write data from a
table to an XML file, you can invoke the mysql
client with the --xml and
-e options from
the system shell, as shown here:
shell> mysql --xml -e 'SELECT * FROM mydb.mytable' > file.xml
To read the file back into a table, use LOAD
XML. By default, the <row>
element is considered to be the equivalent of a database table
row; this can be changed using the ROWS IDENTIFIED
BY clause.
This statement supports three different XML formats:
Column names as attributes and column values as attribute values:
<rowcolumn1="value1"column2="value2" .../>
Column names as tags and column values as the content of these tags:
<row> <column1>value1</column1> <column2>value2</column2> </row>
Column names are the name attributes of
<field> tags, and values are the
contents of these tags:
<row> <field name='column1'>value1</field> <field name='column2'>value2</field> </row>
This is the format used by other MySQL tools, such as mysqldump.
All three formats can be used in the same XML file; the import routine automatically detects the format for each row and interprets it correctly. Tags are matched based on the tag or attribute name and the column name.
Prior to MySQL 8.0.21, LOAD XML did not support
CDATA sections in the source XML. (Bug
#30753708, Bug #98199)
The following clauses work essentially the same way for
LOAD XML as they do for
LOAD DATA:
LOW_PRIORITY or
CONCURRENT
LOCAL
REPLACE or IGNORE
CHARACTER SET
SET
See Section 13.2.7, “LOAD DATA Statement”, for more information about these clauses.
(field_name_or_user_var,
...) is a list of one or more comma-separated XML fields
or user variables. The name of a user variable used for this
purpose must match the name of a field from the XML file, prefixed
with @. You can use field names to select only
desired fields. User variables can be employed to store the
corresponding field values for subsequent re-use.
The IGNORE number LINES or
IGNORE number ROWS
clause causes the
first number rows in the XML file to be
skipped. It is analogous to the LOAD
DATA statement's IGNORE ... LINES
clause.
Suppose that we have a table named person,
created as shown here:
USE test;
CREATE TABLE person (
person_id INT NOT NULL PRIMARY KEY,
fname VARCHAR(40) NULL,
lname VARCHAR(40) NULL,
created TIMESTAMP
);
Suppose further that this table is initially empty.
Now suppose that we have a simple XML file
person.xml, whose contents are as shown here:
<list>
<person person_id="1" fname="Kapek" lname="Sainnouine"/>
<person person_id="2" fname="Sajon" lname="Rondela"/>
<person person_id="3"><fname>Likame</fname><lname>Örrtmons</lname></person>
<person person_id="4"><fname>Slar</fname><lname>Manlanth</lname></person>
<person><field name="person_id">5</field><field name="fname">Stoma</field>
<field name="lname">Milu</field></person>
<person><field name="person_id">6</field><field name="fname">Nirtam</field>
<field name="lname">Sklöd</field></person>
<person person_id="7"><fname>Sungam</fname><lname>Dulbåd</lname></person>
<person person_id="8" fname="Sraref" lname="Encmelt"/>
</list>
Each of the permissible XML formats discussed previously is represented in this example file.
To import the data in person.xml into the
person table, you can use this statement:
mysql> LOAD XML LOCAL INFILE 'person.xml'
    -> INTO TABLE person
    -> ROWS IDENTIFIED BY '<person>';
Query OK, 8 rows affected (0.00 sec)
Records: 8  Deleted: 0  Skipped: 0  Warnings: 0
Here, we assume that person.xml is located in
the MySQL data directory. If the file cannot be found, the
following error results:
ERROR 2 (HY000): File '/person.xml' not found (Errcode: 2)
The ROWS IDENTIFIED BY '<person>' clause
means that each <person> element in the
XML file is considered equivalent to a row in the table into which
the data is to be imported. In this case, this is the
person table in the test
database.
As can be seen by the response from the server, 8 rows were
imported into the test.person table. This can
be verified by a simple SELECT
statement:
mysql> SELECT * FROM person;
+-----------+--------+------------+---------------------+
| person_id | fname | lname | created |
+-----------+--------+------------+---------------------+
| 1 | Kapek | Sainnouine | 2007-07-13 16:18:47 |
| 2 | Sajon | Rondela | 2007-07-13 16:18:47 |
| 3 | Likame | Örrtmons | 2007-07-13 16:18:47 |
| 4 | Slar | Manlanth | 2007-07-13 16:18:47 |
| 5 | Stoma | Nilu | 2007-07-13 16:18:47 |
| 6 | Nirtam | Sklöd | 2007-07-13 16:18:47 |
| 7 | Sungam | Dulbåd | 2007-07-13 16:18:47 |
| 8 | Sreraf | Encmelt | 2007-07-13 16:18:47 |
+-----------+--------+------------+---------------------+
8 rows in set (0.00 sec)
This shows, as stated earlier in this section, that any or all of
the 3 permitted XML formats may appear in a single file and be
read using LOAD XML.
The inverse of the import operation just shown—that is, dumping MySQL table data into an XML file—can be accomplished using the mysql client from the system shell, as shown here:
shell>mysql --xml -e "SELECT * FROM test.person" > person-dump.xmlshell>cat person-dump.xml<?xml version="1.0"?> <resultset statement="SELECT * FROM test.person" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <row> <field name="person_id">1</field> <field name="fname">Kapek</field> <field name="lname">Sainnouine</field> </row> <row> <field name="person_id">2</field> <field name="fname">Sajon</field> <field name="lname">Rondela</field> </row> <row> <field name="person_id">3</field> <field name="fname">Likema</field> <field name="lname">Örrtmons</field> </row> <row> <field name="person_id">4</field> <field name="fname">Slar</field> <field name="lname">Manlanth</field> </row> <row> <field name="person_id">5</field> <field name="fname">Stoma</field> <field name="lname">Nilu</field> </row> <row> <field name="person_id">6</field> <field name="fname">Nirtam</field> <field name="lname">Sklöd</field> </row> <row> <field name="person_id">7</field> <field name="fname">Sungam</field> <field name="lname">Dulbåd</field> </row> <row> <field name="person_id">8</field> <field name="fname">Sreraf</field> <field name="lname">Encmelt</field> </row> </resultset>
The --xml option causes the
mysql client to use XML formatting for its
output; the -e
option causes the client to execute the SQL statement
immediately following the option. See Section 4.5.1, “mysql — The MySQL Command-Line Client”.
You can verify that the dump is valid by creating a copy of the
person table and importing the dump file into
the new table, like this:
mysql>USE test;mysql>CREATE TABLE person2 LIKE person;Query OK, 0 rows affected (0.00 sec) mysql>LOAD XML LOCAL INFILE 'person-dump.xml'->INTO TABLE person2;Query OK, 8 rows affected (0.01 sec) Records: 8 Deleted: 0 Skipped: 0 Warnings: 0 mysql>SELECT * FROM person2;+-----------+--------+------------+---------------------+ | person_id | fname | lname | created | +-----------+--------+------------+---------------------+ | 1 | Kapek | Sainnouine | 2007-07-13 16:18:47 | | 2 | Sajon | Rondela | 2007-07-13 16:18:47 | | 3 | Likema | Örrtmons | 2007-07-13 16:18:47 | | 4 | Slar | Manlanth | 2007-07-13 16:18:47 | | 5 | Stoma | Nilu | 2007-07-13 16:18:47 | | 6 | Nirtam | Sklöd | 2007-07-13 16:18:47 | | 7 | Sungam | Dulbåd | 2007-07-13 16:18:47 | | 8 | Sreraf | Encmelt | 2007-07-13 16:18:47 | +-----------+--------+------------+---------------------+ 8 rows in set (0.00 sec)
There is no requirement that every field in the XML file be
matched with a column in the corresponding table. Fields which
have no corresponding columns are skipped. You can see this by
first emptying the person2 table and dropping
the created column, then using the same
LOAD XML statement we just employed
previously, like this:
mysql>TRUNCATE person2;Query OK, 8 rows affected (0.26 sec) mysql>ALTER TABLE person2 DROP COLUMN created;Query OK, 0 rows affected (0.52 sec) Records: 0 Duplicates: 0 Warnings: 0 mysql>SHOW CREATE TABLE person2\G*************************** 1. row *************************** Table: person2 Create Table: CREATE TABLE `person2` ( `person_id` int(11) NOT NULL, `fname` varchar(40) DEFAULT NULL, `lname` varchar(40) DEFAULT NULL, PRIMARY KEY (`person_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 1 row in set (0.00 sec) mysql>LOAD XML LOCAL INFILE 'person-dump.xml'->INTO TABLE person2;Query OK, 8 rows affected (0.01 sec) Records: 8 Deleted: 0 Skipped: 0 Warnings: 0 mysql>SELECT * FROM person2;+-----------+--------+------------+ | person_id | fname | lname | +-----------+--------+------------+ | 1 | Kapek | Sainnouine | | 2 | Sajon | Rondela | | 3 | Likema | Örrtmons | | 4 | Slar | Manlanth | | 5 | Stoma | Nilu | | 6 | Nirtam | Sklöd | | 7 | Sungam | Dulbåd | | 8 | Sreraf | Encmelt | +-----------+--------+------------+ 8 rows in set (0.00 sec)
The order in which the fields are given within each row of the XML
file does not affect the operation of LOAD
XML; the field order can vary from row to row, and is
not required to be in the same order as the corresponding columns
in the table.
As mentioned previously, you can use a
(field_name_or_user_var,
...) list of one or more XML fields (to select desired
fields only) or user variables (to store the corresponding field
values for later use). User variables can be especially useful
when you want to insert data from an XML file into table columns
whose names do not match those of the XML fields. To see how this
works, we first create a table named individual
whose structure matches that of the person
table, but whose columns are named differently:
mysql> CREATE TABLE individual (
    ->   individual_id INT NOT NULL PRIMARY KEY,
    ->   name1 VARCHAR(40) NULL,
    ->   name2 VARCHAR(40) NULL,
    ->   made TIMESTAMP
    -> );
Query OK, 0 rows affected (0.42 sec)
In this case, you cannot simply load the XML file directly into the table, because the field and column names do not match:
mysql> LOAD XML INFILE '../bin/person-dump.xml' INTO TABLE test.individual;
ERROR 1263 (22004): Column set to default value; NULL supplied to NOT NULL column 'individual_id' at row 1
This happens because the MySQL server looks for field names
matching the column names of the target table. You can work around
this problem by selecting the field values into user variables,
then setting the target table's columns equal to the values
of those variables using SET. You can perform
both of these operations in a single statement, as shown here:
mysql>LOAD XML INFILE '../bin/person-dump.xml'->INTO TABLE test.individual (@person_id, @fname, @lname, @created)->SET individual_id=@person_id, name1=@fname, name2=@lname, made=@created;Query OK, 8 rows affected (0.05 sec) Records: 8 Deleted: 0 Skipped: 0 Warnings: 0 mysql>SELECT * FROM individual;+---------------+--------+------------+---------------------+ | individual_id | name1 | name2 | made | +---------------+--------+------------+---------------------+ | 1 | Kapek | Sainnouine | 2007-07-13 16:18:47 | | 2 | Sajon | Rondela | 2007-07-13 16:18:47 | | 3 | Likema | Örrtmons | 2007-07-13 16:18:47 | | 4 | Slar | Manlanth | 2007-07-13 16:18:47 | | 5 | Stoma | Nilu | 2007-07-13 16:18:47 | | 6 | Nirtam | Sklöd | 2007-07-13 16:18:47 | | 7 | Sungam | Dulbåd | 2007-07-13 16:18:47 | | 8 | Srraf | Encmelt | 2007-07-13 16:18:47 | +---------------+--------+------------+---------------------+ 8 rows in set (0.00 sec)
The names of the user variables must match
those of the corresponding fields from the XML file, with the
addition of the required @ prefix to indicate
that they are variables. The user variables need not be listed or
assigned in the same order as the corresponding fields.
Using a ROWS IDENTIFIED BY
'<tagname>' clause, it
is possible to import data from the same XML file into database
tables with different definitions. For this example, suppose that
you have a file named address.xml which
contains the following XML:
<?xml version="1.0"?>
<list>
<person person_id="1">
<fname>Robert</fname>
<lname>Jones</lname>
<address address_id="1" street="Mill Creek Road" zip="45365" city="Sidney"/>
<address address_id="2" street="Main Street" zip="28681" city="Taylorsville"/>
</person>
<person person_id="2">
<fname>Mary</fname>
<lname>Smith</lname>
<address address_id="3" street="River Road" zip="80239" city="Denver"/>
<!-- <address address_id="4" street="North Street" zip="37920" city="Knoxville"/> -->
</person>
</list>
You can again use the test.person table as
defined previously in this section, after clearing all the
existing records from the table and then showing its structure as
shown here:
mysql> TRUNCATE person;
Query OK, 0 rows affected (0.04 sec)

mysql> SHOW CREATE TABLE person\G
*************************** 1. row ***************************
       Table: person
Create Table: CREATE TABLE `person` (
  `person_id` int(11) NOT NULL,
  `fname` varchar(40) DEFAULT NULL,
  `lname` varchar(40) DEFAULT NULL,
  `created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`person_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8mb4
1 row in set (0.00 sec)
Now create an address table in the
test database using the following
CREATE TABLE statement:
CREATE TABLE address (
address_id INT NOT NULL PRIMARY KEY,
person_id INT NULL,
street VARCHAR(40) NULL,
zip INT NULL,
city VARCHAR(40) NULL,
created TIMESTAMP
);
To import the data from the XML file into the
person table, execute the following
LOAD XML statement, which specifies
that rows are to be specified by the
<person> element, as shown here:
mysql> LOAD XML LOCAL INFILE 'address.xml'
    -> INTO TABLE person
    -> ROWS IDENTIFIED BY '<person>';
Query OK, 2 rows affected (0.00 sec)
Records: 2  Deleted: 0  Skipped: 0  Warnings: 0
You can verify that the records were imported using a
SELECT statement:
mysql> SELECT * FROM person;
+-----------+--------+-------+---------------------+
| person_id | fname | lname | created |
+-----------+--------+-------+---------------------+
| 1 | Robert | Jones | 2007-07-24 17:37:06 |
| 2 | Mary | Smith | 2007-07-24 17:37:06 |
+-----------+--------+-------+---------------------+
2 rows in set (0.00 sec)
Since the <address> elements in the XML
file have no corresponding columns in the
person table, they are skipped.
To import the data from the <address>
elements into the address table, use the
LOAD XML statement shown here:
mysql> LOAD XML LOCAL INFILE 'address.xml'
    -> INTO TABLE address
    -> ROWS IDENTIFIED BY '<address>';
Query OK, 3 rows affected (0.00 sec)
Records: 3  Deleted: 0  Skipped: 0  Warnings: 0
You can see that the data was imported using a
SELECT statement such as this one:
mysql> SELECT * FROM address;
+------------+-----------+-----------------+-------+--------------+---------------------+
| address_id | person_id | street | zip | city | created |
+------------+-----------+-----------------+-------+--------------+---------------------+
| 1 | 1 | Mill Creek Road | 45365 | Sidney | 2007-07-24 17:37:37 |
| 2 | 1 | Main Street | 28681 | Taylorsville | 2007-07-24 17:37:37 |
| 3 | 2 | River Road | 80239 | Denver | 2007-07-24 17:37:37 |
+------------+-----------+-----------------+-------+--------------+---------------------+
3 rows in set (0.00 sec)
The data from the <address> element that
is enclosed in XML comments is not imported. However, since there
is a person_id column in the
address table, the value of the
person_id attribute from the parent
<person> element for each
<address> is
imported into the address table.
Security Considerations.
As with the LOAD DATA statement,
the transfer of the XML file from the client host to the server
host is initiated by the MySQL server. In theory, a patched
server could be built that would tell the client program to
transfer a file of the server's choosing rather than the file
named by the client in the LOAD
XML statement. Such a server could access any file on
the client host to which the client user has read access.
In a Web environment, clients usually connect to MySQL from a Web
server. A user that can run any command against the MySQL server
can use LOAD XML
LOCAL to read any files to which the Web server process
has read access. In this environment, the client with respect to
the MySQL server is actually the Web server, not the remote
program being run by the user who connects to the Web server.
You can disable loading of XML files from clients by starting the
server with --local-infile=0 or
--local-infile=OFF. This option
can also be used when starting the mysql client
to disable LOAD XML for the
duration of the client session.
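For example, a client session started as shown here cannot use
LOAD XML LOCAL (the option value and
connection details are illustrative):
shell> mysql --local-infile=0 test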
To prevent a client from loading XML files from the server, do not
grant the FILE privilege to the
corresponding MySQL user account, or revoke this privilege if the
client user account already has it.
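For example, assuming a hypothetical account
'webuser'@'localhost', the privilege can be revoked like this:
REVOKE FILE ON *.* FROM 'webuser'@'localhost';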
Revoking the FILE privilege (or
not granting it in the first place) keeps the user only from
executing the LOAD XML statement
(as well as the LOAD_FILE()
function); it does not prevent the user from
executing LOAD XML
LOCAL. To disallow this statement, you must start the
server or the client with --local-infile=OFF.
In other words, the FILE
privilege affects only whether the client can read files on the
server; it has no bearing on whether the client can read files
on the local file system.
REPLACE [LOW_PRIORITY | DELAYED]
[INTO] tbl_name
[PARTITION (partition_name [, partition_name] ...)]
[(col_name [, col_name] ...)]
{ {VALUES | VALUE} (value_list) [, (value_list)] ...
|
VALUES row_constructor_list
}
REPLACE [LOW_PRIORITY | DELAYED]
[INTO] tbl_name
[PARTITION (partition_name [, partition_name] ...)]
SET assignment_list
REPLACE [LOW_PRIORITY | DELAYED]
[INTO] tbl_name
[PARTITION (partition_name [, partition_name] ...)]
[(col_name [, col_name] ...)]
{SELECT ... | TABLE table_name}
value:
{expr | DEFAULT}
value_list:
value [, value] ...
row_constructor_list:
ROW(value_list)[, ROW(value_list)][, ...]
assignment:
col_name = value
assignment_list:
assignment [, assignment] ...
REPLACE works exactly like
INSERT, except that if an old row
in the table has the same value as a new row for a
PRIMARY KEY or a UNIQUE
index, the old row is deleted before the new row is inserted. See
Section 13.2.6, “INSERT Statement”.
REPLACE is a MySQL extension to the
SQL standard. It either inserts, or deletes
and inserts. For another MySQL extension to standard
SQL—that either inserts or
updates—see
Section 13.2.6.2, “INSERT ... ON DUPLICATE KEY UPDATE Statement”.
DELAYED inserts and replaces were deprecated in
MySQL 5.6. In MySQL 8.0, DELAYED
is not supported. The server recognizes but ignores the
DELAYED keyword, handles the replace as a
nondelayed replace, and generates an
ER_WARN_LEGACY_SYNTAX_CONVERTED warning.
(“REPLACE DELAYED is no longer supported. The statement was
converted to REPLACE.”) The DELAYED
keyword will be removed in a future release.
REPLACE makes sense only if a
table has a PRIMARY KEY or
UNIQUE index. Otherwise, it becomes
equivalent to INSERT, because
there is no index to be used to determine whether a new row
duplicates another.
Values for all columns are taken from the values specified in the
REPLACE statement. Any missing
columns are set to their default values, just as happens for
INSERT. You cannot refer to values
from the current row and use them in the new row. If you use an
assignment such as SET
col_name =
col_name + 1, the reference
to the column name on the right hand side is treated as
DEFAULT(col_name),
so the assignment is equivalent to SET
col_name =
DEFAULT(col_name) + 1.
In MySQL 8.0.19 and later, you can specify the column values that
REPLACE attempts to insert using
VALUES ROW().
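For example, assuming a hypothetical table t1 with
columns a and b, row constructors can
be used as shown here:
REPLACE INTO t1 (a, b) VALUES ROW(1,'x'), ROW(2,'y');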
To use REPLACE, you must have both
the INSERT and
DELETE privileges for the table.
If a generated column is replaced explicitly, the only permitted
value is DEFAULT. For information about
generated columns, see
Section 13.1.20.8, “CREATE TABLE and Generated Columns”.
REPLACE supports explicit partition selection
using the PARTITION keyword with a list of
comma-separated names of partitions, subpartitions, or both. As
with INSERT, if it is not possible
to insert the new row into any of these partitions or
subpartitions, the REPLACE statement fails with
the error Found a row not matching the given partition
set. For more information and examples, see
Section 23.5, “Partition Selection”.
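For example, assuming a hypothetical partitioned table
employees with partitions p0 and
p1, a replace restricted to those partitions might
look like this:
REPLACE INTO employees PARTITION (p0, p1)
  VALUES (42, 'Frida', 'Olsen');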
The REPLACE statement returns a
count to indicate the number of rows affected. This is the sum of
the rows deleted and inserted. If the count is 1 for a single-row
REPLACE, a row was inserted and no
rows were deleted. If the count is greater than 1, one or more old
rows were deleted before the new row was inserted. It is possible
for a single row to replace more than one old row if the table
contains multiple unique indexes and the new row duplicates values
for different old rows in different unique indexes.
The affected-rows count makes it easy to determine whether
REPLACE only added a row or whether
it also replaced any rows: Check whether the count is 1 (added) or
greater (replaced).
If you are using the C API, the affected-rows count can be
obtained using the
mysql_affected_rows() function.
You cannot replace into a table and select from the same table in a subquery.
MySQL uses the following algorithm for
REPLACE (and
LOAD DATA ...
REPLACE):
Try to insert the new row into the table
While the insertion fails because a duplicate-key error occurs for a primary key or unique index:
Delete from the table the conflicting row that has the duplicate key value
Try again to insert the new row into the table
It is possible that in the case of a duplicate-key error, a
storage engine may perform the REPLACE as an
update rather than a delete plus insert, but the semantics are the
same. There are no user-visible effects other than a possible
difference in how the storage engine increments
Handler_ status
variables.
Because the results of REPLACE ... SELECT
statements depend on the ordering of rows from the
SELECT and this order cannot always
be guaranteed, it is possible when logging these statements for
the master and the slave to diverge. For this reason,
REPLACE ... SELECT statements are flagged as
unsafe for statement-based replication. Such statements produce a
warning in the error log when using statement-based mode and are
written to the binary log using the row-based format when using
MIXED mode. See also
Section 17.2.1.1, “Advantages and Disadvantages of Statement-Based and Row-Based
Replication”.
MySQL 8.0.19 and later supports
TABLE as well as
SELECT with
REPLACE, just as it does with
INSERT. See
Section 13.2.6.1, “INSERT ... SELECT Statement”, for more information and
examples.
When modifying an existing table that is not partitioned to
accommodate partitioning, or, when modifying the partitioning of
an already partitioned table, you may consider altering the
table's primary key (see
Section 23.6.1, “Partitioning Keys, Primary Keys, and Unique Keys”).
You should be aware that, if you do this, the results of
REPLACE statements may be affected, just as
they would be if you modified the primary key of a nonpartitioned
table. Consider the table created by the following
CREATE TABLE statement:
CREATE TABLE test (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  data VARCHAR(64) DEFAULT NULL,
  ts TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (id)
);
When we create this table and run the statements shown in the mysql client, the result is as follows:
mysql> REPLACE INTO test VALUES (1, 'Old', '2014-08-20 18:47:00');
Query OK, 1 row affected (0.04 sec)

mysql> REPLACE INTO test VALUES (1, 'New', '2014-08-20 18:47:42');
Query OK, 2 rows affected (0.04 sec)

mysql> SELECT * FROM test;
+----+------+---------------------+
| id | data | ts                  |
+----+------+---------------------+
|  1 | New  | 2014-08-20 18:47:42 |
+----+------+---------------------+
1 row in set (0.00 sec)
Now we create a second table almost identical to the first, except that the primary key now covers 2 columns, as shown here:
CREATE TABLE test2 (
id INT UNSIGNED NOT NULL AUTO_INCREMENT,
data VARCHAR(64) DEFAULT NULL,
ts TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (id, ts)
);
When we run on test2 the same two
REPLACE statements as we did on the original
test table, we obtain a different result:
mysql> REPLACE INTO test2 VALUES (1, 'Old', '2014-08-20 18:47:00');
Query OK, 1 row affected (0.05 sec)

mysql> REPLACE INTO test2 VALUES (1, 'New', '2014-08-20 18:47:42');
Query OK, 1 row affected (0.06 sec)

mysql> SELECT * FROM test2;
+----+------+---------------------+
| id | data | ts                  |
+----+------+---------------------+
|  1 | Old  | 2014-08-20 18:47:00 |
|  1 | New  | 2014-08-20 18:47:42 |
+----+------+---------------------+
2 rows in set (0.00 sec)
This is due to the fact that, when run on
test2, both the id and
ts column values must match those of an
existing row for the row to be replaced; otherwise, a row is
inserted.
SELECT
[ALL | DISTINCT | DISTINCTROW ]
[HIGH_PRIORITY]
[STRAIGHT_JOIN]
[SQL_SMALL_RESULT] [SQL_BIG_RESULT] [SQL_BUFFER_RESULT]
[SQL_NO_CACHE] [SQL_CALC_FOUND_ROWS]
select_expr [, select_expr] ...
[into_option]
[FROM table_references
[PARTITION partition_list]]
[WHERE where_condition]
[GROUP BY {col_name | expr | position}, ... [WITH ROLLUP]]
[HAVING where_condition]
[WINDOW window_name AS (window_spec)
[, window_name AS (window_spec)] ...]
[ORDER BY {col_name | expr | position}
[ASC | DESC], ... [WITH ROLLUP]]
[LIMIT {[offset,] row_count | row_count OFFSET offset}]
[into_option]
[FOR {UPDATE | SHARE}
[OF tbl_name [, tbl_name] ...]
[NOWAIT | SKIP LOCKED]
| LOCK IN SHARE MODE]
[into_option]
into_option: {
INTO OUTFILE 'file_name'
[CHARACTER SET charset_name]
export_options
| INTO DUMPFILE 'file_name'
| INTO var_name [, var_name] ...
}
SELECT is used to retrieve rows
selected from one or more tables, and can include
UNION statements and subqueries.
See Section 13.2.10.3, “UNION Clause”, and Section 13.2.11, “Subqueries”. A
SELECT statement can start with a
WITH clause to define common table
expressions accessible within the
SELECT. See Section 13.2.15, “WITH (Common Table Expressions)”.
The most commonly used clauses of
SELECT statements are these:
Each select_expr indicates a column
that you want to retrieve. There must be at least one
select_expr.
table_references indicates the
table or tables from which to retrieve rows. Its syntax is
described in Section 13.2.10.2, “JOIN Clause”.
SELECT supports explicit partition
selection using the PARTITION keyword with a list
of partitions or subpartitions (or both) following the name of
the table in a table_reference (see
Section 13.2.10.2, “JOIN Clause”). In this case, rows are selected only
from the partitions listed, and any other partitions of the
table are ignored. For more information and examples, see
Section 23.5, “Partition Selection”.
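For example, assuming a hypothetical partitioned table
employees with partitions p0 and
p1, rows can be read from only those partitions:
SELECT * FROM employees PARTITION (p0, p1) WHERE store_id < 10;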
The WHERE clause, if given, indicates the
condition or conditions that rows must satisfy to be selected.
where_condition is an expression
that evaluates to true for each row to be selected. The
statement selects all rows if there is no
WHERE clause.
In the WHERE expression, you can use any of
the functions and operators that MySQL supports, except for
aggregate (summary) functions. See
Section 9.5, “Expressions”, and
Chapter 12, Functions and Operators.
SELECT can also be used to retrieve
rows computed without reference to any table.
For example:
mysql> SELECT 1 + 1;
-> 2
You are permitted to specify DUAL as a dummy
table name in situations where no tables are referenced:
mysql> SELECT 1 + 1 FROM DUAL;
-> 2
DUAL is purely for the convenience of people
who require that all SELECT
statements should have FROM and possibly other
clauses. MySQL may ignore the clauses. MySQL does not require
FROM DUAL if no tables are referenced.
In general, clauses used must be given in exactly the order shown
in the syntax description. For example, a
HAVING clause must come after any
GROUP BY clause and before any ORDER
BY clause. The INTO clause, if
present, can appear in any position indicated by the syntax
description, but within a given statement can appear only once,
not in multiple positions. For more information about
INTO, see Section 13.2.10.1, “SELECT ... INTO Statement”.
The list of select_expr terms comprises
the select list that indicates which columns to retrieve. Terms
specify a column or expression or can use
*-shorthand:
A select list consisting only of a single unqualified
* can be used as shorthand to select all
columns from all tables:
SELECT * FROM t1 INNER JOIN t2 ...
tbl_name.* can
be used as a qualified shorthand to select all columns from
the named table:
SELECT t1.*, t2.* FROM t1 INNER JOIN t2 ...
Use of an unqualified * with other items in
the select list may produce a parse error. To avoid this
problem, use a qualified
tbl_name.* reference:
SELECT AVG(score), t1.* FROM t1 ...
The following list provides additional information about other
SELECT clauses:
A select_expr can be given an alias
using AS alias_name. The alias is
used as the expression's column name and can be used in
GROUP BY, ORDER BY, or
HAVING clauses. For example:
SELECT CONCAT(last_name,', ',first_name) AS full_name FROM mytable ORDER BY full_name;
The AS keyword is optional when aliasing a
select_expr with an identifier. The
preceding example could have been written like this:
SELECT CONCAT(last_name,', ',first_name) full_name FROM mytable ORDER BY full_name;
However, because the AS is optional, a
subtle problem can occur if you forget the comma between two
select_expr expressions: MySQL
interprets the second as an alias name. For example, in the
following statement, columnb is treated as
an alias name:
SELECT columna columnb FROM mytable;
For this reason, it is good practice to be in the habit of
using AS explicitly when specifying column
aliases.
It is not permissible to refer to a column alias in a
WHERE clause, because the column value
might not yet be determined when the WHERE
clause is executed. See Section B.4.4.4, “Problems with Column Aliases”.
The FROM
table_references clause
indicates the table or tables from which to retrieve rows. If
you name more than one table, you are performing a join. For
information on join syntax, see Section 13.2.10.2, “JOIN Clause”. For
each table specified, you can optionally specify an alias.
tbl_name [[AS] alias] [index_hint]
The use of index hints provides the optimizer with information about how to choose indexes during query processing. For a description of the syntax for specifying these hints, see Section 8.9.4, “Index Hints”.
You can use SET
max_seeks_for_key=value
as an alternative way to force MySQL to prefer key scans
instead of table scans. See
Section 5.1.8, “Server System Variables”.
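For example, a session-scoped setting might look like this (the
value 100 is illustrative):
SET SESSION max_seeks_for_key = 100;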
You can refer to a table within the default database as
tbl_name, or as
db_name.tbl_name
to specify a database explicitly. You can refer to a column as
col_name,
tbl_name.col_name,
or
db_name.tbl_name.col_name.
You need not specify a tbl_name or
db_name.tbl_name
prefix for a column reference unless the reference would be
ambiguous. See Section 9.2.2, “Identifier Qualifiers”, for
examples of ambiguity that require the more explicit column
reference forms.
A table reference can be aliased using
tbl_name AS alias_name or
tbl_name alias_name. These
statements are equivalent:
SELECT t1.name, t2.salary FROM employee AS t1, info AS t2
  WHERE t1.name = t2.name;

SELECT t1.name, t2.salary FROM employee t1, info t2
  WHERE t1.name = t2.name;
Columns selected for output can be referred to in
ORDER BY and GROUP BY
clauses using column names, column aliases, or column
positions. Column positions are integers and begin with 1:
SELECT college, region, seed FROM tournament
  ORDER BY region, seed;

SELECT college, region AS r, seed AS s FROM tournament
  ORDER BY r, s;

SELECT college, region, seed FROM tournament
  ORDER BY 2, 3;
To sort in reverse order, add the DESC
(descending) keyword to the name of the column in the
ORDER BY clause that you are sorting by.
The default is ascending order; this can be specified
explicitly using the ASC keyword.
If ORDER BY occurs within a parenthesized
query block and also is applied in the outer query, the
outermost ORDER BY takes precedence. For
example, results for the following statement are sorted in
descending order, not ascending order:
(SELECT ... ORDER BY a) ORDER BY a DESC;
Use of column positions is deprecated because the syntax has been removed from the SQL standard.
Prior to MySQL 8.0.13, MySQL supported a nonstandard syntax
extension that permitted explicit ASC or
DESC designators for GROUP
BY columns. MySQL 8.0.12 and later supports
ORDER BY with grouping functions so that
use of this extension is no longer necessary. (Bug #86312, Bug
#26073525) This also means you can sort on an arbitrary column
or columns when using GROUP BY, like this:
SELECT a, b, COUNT(c) AS t FROM test_table GROUP BY a,b ORDER BY a,t DESC;
As of MySQL 8.0.13, the GROUP BY extension
is no longer supported: ASC or
DESC designators for GROUP
BY columns are not permitted.
When you use ORDER BY or GROUP
BY to sort a column in a
SELECT, the server sorts values
using only the initial number of bytes indicated by the
max_sort_length system
variable.
MySQL extends the use of GROUP BY to permit
selecting fields that are not mentioned in the GROUP
BY clause. If you are not getting the results that
you expect from your query, please read the description of
GROUP BY found in
Section 12.20, “Aggregate (GROUP BY) Functions”.
GROUP BY permits a WITH
ROLLUP modifier. See
Section 12.20.2, “GROUP BY Modifiers”.
Previously, it was not permitted to use ORDER
BY in a query having a WITH
ROLLUP modifier. This restriction is lifted as of
MySQL 8.0.12. See Section 12.20.2, “GROUP BY Modifiers”.
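For example, assuming a hypothetical sales table with
year and profit columns, grouped
totals plus a super-aggregate total can be produced and ordered as
shown here:
SELECT year, SUM(profit) AS total
  FROM sales
  GROUP BY year WITH ROLLUP
  ORDER BY total;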
The HAVING clause is applied nearly last,
just before items are sent to the client, with no
optimization. (LIMIT is applied after
HAVING.)
The SQL standard requires that HAVING must
reference only columns in the GROUP BY
clause or columns used in aggregate functions. However, MySQL
supports an extension to this behavior, and permits
HAVING to refer to columns in the
SELECT list and columns in
outer subqueries as well.
If the HAVING clause refers to a column
that is ambiguous, a warning occurs. In the following
statement, col2 is ambiguous because it is
used as both an alias and a column name:
SELECT COUNT(col1) AS col2 FROM t GROUP BY col2 HAVING col2 = 2;
Preference is given to standard SQL behavior, so if a
HAVING column name is used both in
GROUP BY and as an aliased column in the
output column list, preference is given to the column in the
GROUP BY column.
Do not use HAVING for items that should be
in the WHERE clause. For example, do not
write the following:
SELECT col_name FROM tbl_name HAVING col_name > 0;
Write this instead:
SELECT col_name FROM tbl_name WHERE col_name > 0;
The HAVING clause can refer to aggregate
functions, which the WHERE clause cannot:
SELECT user, MAX(salary) FROM users GROUP BY user HAVING MAX(salary) > 10;
(This did not work in some older versions of MySQL.)
MySQL permits duplicate column names. That is, there can be
more than one select_expr with the
same name. This is an extension to standard SQL. Because MySQL
also permits GROUP BY and
HAVING to refer to
select_expr values, this can result
in an ambiguity:
SELECT 12 AS a, a FROM t GROUP BY a;
In that statement, both columns have the name
a. To ensure that the correct column is
used for grouping, use different names for each
select_expr.
The WINDOW clause, if present, defines
named windows that can be referred to by window functions. For
details, see Section 12.21.4, “Named Windows”.
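For example, assuming a hypothetical table t with a
numeric column val, a named window can be defined
once and referenced by several window functions:
SELECT val,
       ROW_NUMBER() OVER w AS rn,
       RANK() OVER w AS rk
  FROM t
  WINDOW w AS (ORDER BY val);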
MySQL resolves unqualified column or alias references in
ORDER BY clauses by searching in the
select_expr values, then in the
columns of the tables in the FROM clause.
For GROUP BY or HAVING
clauses, it searches the FROM clause before
searching in the select_expr
values. (For GROUP BY and
HAVING, this differs from the pre-MySQL 5.0
behavior that used the same rules as for ORDER
BY.)
The LIMIT clause can be used to constrain
the number of rows returned by the
SELECT statement.
LIMIT takes one or two numeric arguments,
which must both be nonnegative integer constants, with these
exceptions:
Within prepared statements, LIMIT
parameters can be specified using ?
placeholder markers.
Within stored programs, LIMIT
parameters can be specified using integer-valued routine
parameters or local variables.
With two arguments, the first argument specifies the offset of the first row to return, and the second specifies the maximum number of rows to return. The offset of the initial row is 0 (not 1):
SELECT * FROM tbl LIMIT 5,10; # Retrieve rows 6-15
To retrieve all rows from a certain offset up to the end of the result set, you can use some large number for the second parameter. This statement retrieves all rows from the 96th row to the last:
SELECT * FROM tbl LIMIT 95,18446744073709551615;
With one argument, the value specifies the number of rows to return from the beginning of the result set:
SELECT * FROM tbl LIMIT 5; # Retrieve first 5 rows
In other words, LIMIT
row_count is equivalent
to LIMIT 0,
row_count.
For prepared statements, you can use placeholders. The
following statements will return one row from the
tbl table:
SET @a=1; PREPARE STMT FROM 'SELECT * FROM tbl LIMIT ?'; EXECUTE STMT USING @a;
The following statements will return the second to sixth row
from the tbl table:
SET @skip=1; SET @numrows=5; PREPARE STMT FROM 'SELECT * FROM tbl LIMIT ?, ?'; EXECUTE STMT USING @skip, @numrows;
For compatibility with PostgreSQL, MySQL also supports the
LIMIT row_count OFFSET
offset syntax.
If LIMIT occurs within a parenthesized
query block and also is applied in the outer query, the
outermost LIMIT takes precedence. For
example, the following statement produces two rows, not one:
(SELECT ... LIMIT 1) LIMIT 2;
The SELECT ...
INTO form of SELECT
enables the query result to be written to a file or stored in
variables. For more information, see
Section 13.2.10.1, “SELECT ... INTO Statement”.
If you use FOR UPDATE with a storage engine
that uses page or row locks, rows examined by the query are
write-locked until the end of the current transaction.
You cannot use FOR UPDATE as part of the
SELECT in a statement such as
CREATE TABLE new_table SELECT ... FROM
old_table .... (If you
attempt to do so, the statement is rejected with the error
Can't update table 'old_table' while
'new_table' is being
created.)
FOR SHARE and LOCK IN SHARE
MODE set shared locks that permit other transactions
to read the examined rows but not to update or delete them.
FOR SHARE and LOCK IN SHARE
MODE are equivalent. However, FOR
SHARE, like FOR UPDATE, supports
NOWAIT, SKIP LOCKED, and
OF tbl_name
options. FOR SHARE is a replacement for
LOCK IN SHARE MODE, but LOCK IN
SHARE MODE remains available for backward
compatibility.
NOWAIT causes a FOR
UPDATE or FOR SHARE query to
execute immediately, returning an error if a row lock cannot
be obtained due to a lock held by another transaction.
SKIP LOCKED causes a FOR
UPDATE or FOR SHARE query to
execute immediately, excluding rows from the result set that
are locked by another transaction.
NOWAIT and SKIP LOCKED
options are unsafe for statement-based replication.
Queries that skip locked rows return an inconsistent view of
the data. SKIP LOCKED is therefore not
suitable for general transactional work. However, it may be
used to avoid lock contention when multiple sessions access
the same queue-like table.
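As an illustration, here is a minimal sketch, assuming a table t with a primary-key column id and a second session that holds a lock on one of its rows:
-- Session 1 locks a row and keeps its transaction open:
START TRANSACTION;
SELECT * FROM t WHERE id = 1 FOR UPDATE;

-- Session 2: fail immediately instead of waiting for the lock
SELECT * FROM t WHERE id = 1 FOR UPDATE NOWAIT;

-- Session 2: return only the rows that are not locked
SELECT * FROM t WHERE id <= 2 FOR UPDATE SKIP LOCKED;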
OF tbl_name applies FOR UPDATE and FOR
SHARE queries to named tables. For example:
SELECT * FROM t1, t2 FOR SHARE OF t1 FOR UPDATE OF t2;
All tables referenced by the query block are locked when
OF tbl_name is
omitted. Consequently, using a locking clause without
OF tbl_name in
combination with another locking clause returns an error.
Specifying the same table in multiple locking clauses returns
an error. If an alias is specified as the table name in the
SELECT statement, a locking clause may only
use the alias. If the SELECT statement does
not specify an alias explicitly, the locking clause may only
specify the actual table name.
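For example (a sketch assuming a table t1), each of the following statements names the table in the only form its locking clause accepts:
SELECT * FROM t1 AS a FOR UPDATE OF a;    -- alias declared, so the locking clause uses the alias
SELECT * FROM t1 FOR UPDATE OF t1;        -- no alias, so the locking clause uses the table name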
For more information about FOR UPDATE and
FOR SHARE, see
Section 15.7.2.4, “Locking Reads”. For additional
information about NOWAIT and SKIP
LOCKED options, see
Locking Read Concurrency with NOWAIT and SKIP LOCKED.
Following the SELECT keyword, you
can use a number of modifiers that affect the operation of the
statement. HIGH_PRIORITY,
STRAIGHT_JOIN, and modifiers beginning with
SQL_ are MySQL extensions to standard SQL.
The ALL and DISTINCT
modifiers specify whether duplicate rows should be returned.
ALL (the default) specifies that all
matching rows should be returned, including duplicates.
DISTINCT specifies removal of duplicate
rows from the result set. It is an error to specify both
modifiers. DISTINCTROW is a synonym for
DISTINCT.
In MySQL 8.0.12 and later, DISTINCT can be
used with a query that also uses WITH
ROLLUP. (Bug #87450, Bug #26640100)
HIGH_PRIORITY gives the
SELECT higher priority than a
statement that updates a table. You should use this only for
queries that are very fast and must be done at once. A
SELECT HIGH_PRIORITY query that is issued
while the table is locked for reading runs even if there is an
update statement waiting for the table to be free. This
affects only storage engines that use only table-level locking
(such as MyISAM, MEMORY,
and MERGE).
HIGH_PRIORITY cannot be used with
SELECT statements that are part
of a UNION.
STRAIGHT_JOIN forces the optimizer to join
the tables in the order in which they are listed in the
FROM clause. You can use this to speed up a
query if the optimizer joins the tables in nonoptimal order.
STRAIGHT_JOIN also can be used in the
table_references list. See
Section 13.2.10.2, “JOIN Clause”.
STRAIGHT_JOIN does not apply to any table
that the optimizer treats as a
const or
system table. Such a table
produces a single row, is read during the optimization phase
of query execution, and references to its columns are replaced
with the appropriate column values before query execution
proceeds. These tables will appear first in the query plan
displayed by EXPLAIN. See
Section 8.8.1, “Optimizing Queries with EXPLAIN”. This exception may not apply
to const or
system tables that are used
on the NULL-complemented side of an outer
join (that is, the right-side table of a LEFT
JOIN or the left-side table of a RIGHT
JOIN).
SQL_BIG_RESULT or
SQL_SMALL_RESULT can be used with
GROUP BY or DISTINCT to
tell the optimizer that the result set has many rows or is
small, respectively. For SQL_BIG_RESULT,
MySQL directly uses disk-based temporary tables if they are
created, and prefers sorting to using a temporary table with a
key on the GROUP BY elements. For
SQL_SMALL_RESULT, MySQL uses in-memory
temporary tables to store the resulting table instead of using
sorting. This should not normally be needed.
SQL_BUFFER_RESULT forces the result to be
put into a temporary table. This helps MySQL free the table
locks early and helps in cases where it takes a long time to
send the result set to the client. This modifier can be used
only for top-level SELECT
statements, not for subqueries or following
UNION.
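For example, a query that streams a large result to a slow client could be written as follows so that table locks are released as soon as the temporary table is populated (a sketch assuming a table named big_table):
SELECT SQL_BUFFER_RESULT * FROM big_table;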
SQL_CALC_FOUND_ROWS tells MySQL to
calculate how many rows there would be in the result set,
disregarding any LIMIT clause. The number
of rows can then be retrieved with SELECT
FOUND_ROWS(). See
Section 12.15, “Information Functions”.
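A typical pairing of the modifier and the function looks like this (a sketch assuming a table tbl with an id column):
SELECT SQL_CALC_FOUND_ROWS * FROM tbl WHERE id > 100 LIMIT 10;
SELECT FOUND_ROWS();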
The SQL_CALC_FOUND_ROWS query modifier
and accompanying FOUND_ROWS()
function are deprecated as of MySQL 8.0.17 and will be
removed in a future MySQL version. See the
FOUND_ROWS() description for
information about an alternative strategy.
The SQL_CACHE and
SQL_NO_CACHE modifiers were used with the
query cache prior to MySQL 8.0. The query cache
was removed in MySQL 8.0. The
SQL_CACHE modifier was removed as well.
SQL_NO_CACHE is deprecated, has no effect,
and will be removed in a future MySQL release.
The SELECT ...
INTO form of SELECT
enables a query result to be stored in variables or written to a
file:
SELECT ... INTO var_list selects column
values and stores them into variables.
SELECT ... INTO OUTFILE writes the
selected rows to a file. Column and line terminators can be
specified to produce a specific output format.
SELECT ... INTO DUMPFILE writes a single
row to a file without any formatting.
A given SELECT statement can
contain at most one INTO clause, although as
shown by the SELECT syntax
description (see Section 13.2.10, “SELECT Statement”), the
INTO can appear in different positions:
Before FROM. Example:
SELECT * INTO @myvar FROM t1;
Before a trailing locking clause. Example:
SELECT * FROM t1 INTO @myvar FOR UPDATE;
At the end of the SELECT.
Example:
SELECT * FROM t1 FOR UPDATE INTO @myvar;
The INTO position at the end of the statement
is supported as of MySQL 8.0.20, and is the preferred position.
The position before a locking clause is deprecated as of MySQL
8.0.20 and support for it will be removed in a future MySQL
version. In other words, INTO after
FROM but not at the end of the
SELECT produces a warning.
An INTO clause should not be used in a nested
SELECT because such a
SELECT must return its result to
the outer context. There are also constraints on the use of
INTO within
UNION statements; see
Section 13.2.10.3, “UNION Clause”.
For the INTO var_list variant:
var_list names a list of one or
more variables, each of which can be a user-defined
variable, stored procedure or function parameter, or stored
program local variable. (Within a prepared SELECT
... INTO var_list
statement, only user-defined variables are permitted; see
Section 13.6.4.2, “Local Variable Scope and Resolution”.)
The selected values are assigned to the variables. The
number of variables must match the number of columns. The
query should return a single row. If the query returns no
rows, a warning with error code 1329 occurs (No
data), and the variable values remain unchanged.
If the query returns multiple rows, error 1172 occurs
(Result consisted of more than one row).
If it is possible that the statement may retrieve multiple
rows, you can use LIMIT 1 to limit the
result set to a single row.
SELECT id, data INTO @x, @y FROM test.t1 LIMIT 1;
INTO var_list can
also be used with a TABLE
statement, subject to these restrictions:
The number of variables must match the number of columns in the table.
If the table contains more than one row, you must use
LIMIT 1 to limit the result set to a
single row. LIMIT 1 must precede the
INTO keyword.
An example of such a statement is shown here:
TABLE employees ORDER BY lname DESC LIMIT 1
INTO @id, @fname, @lname, @hired, @separated, @job_code, @store_id;
You can also select values from a
VALUES statement that generates a
single row into a set of user variables. In this case, you must
employ a table alias, and you must assign each value from the
value list to a variable. Each of the two statements shown here
is equivalent to
SET @x=2, @y=4,
@z=8:
SELECT * FROM (VALUES ROW(2,4,8)) AS t INTO @x,@y,@z;
SELECT * FROM (VALUES ROW(2,4,8)) AS t(a,b,c) INTO @x,@y,@z;
User variable names are not case-sensitive. See Section 9.4, “User-Defined Variables”.
The SELECT ... INTO
OUTFILE 'file_name' form of
SELECT writes the selected rows
to a file. The file is created on the server host, so you must
have the FILE privilege to use
this syntax. file_name cannot be an
existing file, which among other things prevents files such as
/etc/passwd and database tables from being
modified. The
character_set_filesystem system
variable controls the interpretation of the file name.
The SELECT ... INTO
OUTFILE statement is intended to enable dumping a
table to a text file on the server host. To create the resulting
file on some other host,
SELECT ... INTO
OUTFILE normally is unsuitable because there is no way
to write a path to the file relative to the server host file
system, unless the location of the file on the remote host can
be accessed using a network-mapped path on the server host file
system.
Alternatively, if the MySQL client software is installed on the
remote host, you can use a client command such as mysql
-e "SELECT ..." > file_name
to generate the file on that host.
SELECT ... INTO
OUTFILE is the complement of LOAD
DATA. Column values are written converted to the
character set specified in the CHARACTER SET
clause. If no such clause is present, values are dumped using
the binary character set. In effect, there is
no character set conversion. If a result set contains columns in
several character sets, so will the output data file and it may
not be possible to reload the file correctly.
The syntax for the export_options
part of the statement consists of the same
FIELDS and LINES clauses
that are used with the LOAD DATA
statement. For information about the FIELDS
and LINES clauses, including their default
values and permissible values, see Section 13.2.7, “LOAD DATA Statement”.
FIELDS ESCAPED BY controls how to write
special characters. If the FIELDS ESCAPED BY
character is not empty, it is used when necessary to avoid
ambiguity as a prefix that precedes following characters on
output:
The FIELDS ESCAPED BY character
The FIELDS [OPTIONALLY] ENCLOSED BY
character
The first character of the FIELDS TERMINATED
BY and LINES TERMINATED BY
values
ASCII NUL (the zero-valued byte; what is
actually written following the escape character is ASCII
0, not a zero-valued byte)
The FIELDS TERMINATED BY, ENCLOSED
BY, ESCAPED BY, or LINES
TERMINATED BY characters must be
escaped so that you can read the file back in reliably. ASCII
NUL is escaped to make it easier to view with
some pagers.
The resulting file need not conform to SQL syntax, so nothing else need be escaped.
If the FIELDS ESCAPED BY character is empty,
no characters are escaped and NULL is output
as NULL, not \N. It is
probably not a good idea to specify an empty escape character,
particularly if field values in your data contain any of the
characters in the list just given.
INTO OUTFILE can also be used with a
TABLE statement when you want to
dump all columns of a table into a text file. In this case, the
ordering and number of rows can be controlled using
ORDER BY and LIMIT; these
clauses must precede INTO OUTFILE.
TABLE ... INTO OUTFILE supports the same
export_options as does
SELECT ... INTO OUTFILE, and it is subject to
the same restrictions on writing to the file system. An example
of such a statement is shown here:
TABLE employees ORDER BY lname LIMIT 1000
INTO OUTFILE '/tmp/employee_data_1.txt'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"', ESCAPED BY '\'
LINES TERMINATED BY '\n';
You can also use SELECT ... INTO OUTFILE with
a VALUES statement to write
values directly into a file. An example is shown here:
SELECT * FROM (VALUES ROW(1,2,3),ROW(4,5,6),ROW(7,8,9)) AS t
INTO OUTFILE '/tmp/select-values.txt';
You must use a table alias; column aliases are also supported,
and can optionally be used to write values only from desired
columns. You can also use any or all of the export options
supported by SELECT ... INTO OUTFILE to
format the output to the file.
Here is an example that produces a file in the comma-separated values (CSV) format used by many programs:
SELECT a,b,a+b INTO OUTFILE '/tmp/result.txt'
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  LINES TERMINATED BY '\n'
  FROM test_table;
If you use INTO DUMPFILE instead of
INTO OUTFILE, MySQL writes only one row into
the file, without any column or line termination and without
performing any escape processing. This is useful for selecting a
BLOB value and storing it in a
file.
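For example, a single BLOB value could be written to a file as shown in this sketch, which assumes a table images with columns id and img_data and a secure_file_priv setting that permits writing to /tmp:
SELECT img_data FROM images WHERE id = 1 LIMIT 1
  INTO DUMPFILE '/tmp/image.bin';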
TABLE also supports INTO
DUMPFILE. If the table contains more than one row, you
must also use LIMIT 1 to limit the output to
a single row. INTO DUMPFILE can also be used
with SELECT * FROM (VALUES ROW()[, ...]) AS
table_alias [LIMIT 1]. See
Section 13.2.14, “VALUES Statement”.
Any file created by INTO OUTFILE or
INTO DUMPFILE is owned by the operating
system user under whose account mysqld
runs. (You should never run
mysqld as root for this
and other reasons.) As of MySQL 8.0.17, the umask for file
creation is 0640; you must have sufficient access privileges
to manipulate the file contents. Prior to MySQL 8.0.17, the
umask is 0666 and the file is writable by all users on the
server host.
If the secure_file_priv
system variable is set to a nonempty directory name, the file
to be written must be located in that directory.
In the context of
SELECT ...
INTO statements that occur as part of events executed
by the Event Scheduler, diagnostics messages (not only errors,
but also warnings) are written to the error log, and, on
Windows, to the application event log. For additional
information, see Section 24.4.5, “Event Scheduler Status”.
MySQL supports the following JOIN syntax for
the table_references part of
SELECT statements and
multiple-table DELETE and
UPDATE statements:
table_references:
    escaped_table_reference [, escaped_table_reference] ...

escaped_table_reference: {
    table_reference
  | { OJ table_reference }
}

table_reference: {
    table_factor
  | joined_table
}

table_factor: {
    tbl_name [PARTITION (partition_names)]
        [[AS] alias] [index_hint_list]
  | [LATERAL] table_subquery [AS] alias [(col_list)]
  | ( table_references )
}

joined_table: {
    table_reference {[INNER | CROSS] JOIN | STRAIGHT_JOIN} table_factor [join_specification]
  | table_reference {LEFT|RIGHT} [OUTER] JOIN table_reference join_specification
  | table_reference NATURAL [INNER | {LEFT|RIGHT} [OUTER]] JOIN table_factor
}

join_specification: {
    ON search_condition
  | USING (join_column_list)
}

join_column_list:
    column_name [, column_name] ...

index_hint_list:
    index_hint [, index_hint] ...

index_hint: {
    USE {INDEX|KEY}
      [FOR {JOIN|ORDER BY|GROUP BY}] ([index_list])
  | {IGNORE|FORCE} {INDEX|KEY}
      [FOR {JOIN|ORDER BY|GROUP BY}] (index_list)
}

index_list:
    index_name [, index_name] ...
A table reference is also known as a join expression.
A table reference (when it refers to a partitioned table) may
contain a PARTITION option, including a list
of comma-separated partitions, subpartitions, or both. This
option follows the name of the table and precedes any alias
declaration. The effect of this option is that rows are selected
only from the listed partitions or subpartitions. Any partitions
or subpartitions not named in the list are ignored. For more
information and examples, see
Section 23.5, “Partition Selection”.
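For instance, a query restricted to two partitions might look like the following sketch, which assumes a partitioned table employees with partitions p1 and p2 and a store_id column:
SELECT * FROM employees PARTITION (p1, p2) AS e
  WHERE e.store_id < 10;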
The syntax of table_factor is
extended in MySQL in comparison with standard SQL. The standard
accepts only table_reference, not a
list of them inside a pair of parentheses.
This is a conservative extension if each comma in a list of
table_reference items is considered
as equivalent to an inner join. For example:
SELECT * FROM t1 LEFT JOIN (t2, t3, t4)
                 ON (t2.a = t1.a AND t3.b = t1.b AND t4.c = t1.c)
is equivalent to:
SELECT * FROM t1 LEFT JOIN (t2 CROSS JOIN t3 CROSS JOIN t4)
ON (t2.a = t1.a AND t3.b = t1.b AND t4.c = t1.c)
In MySQL, JOIN, CROSS
JOIN, and INNER JOIN are syntactic
equivalents (they can replace each other). In standard SQL, they
are not equivalent. INNER JOIN is used with
an ON clause, CROSS JOIN
is used otherwise.
In general, parentheses can be ignored in join expressions containing only inner join operations. MySQL also supports nested joins. See Section 8.2.1.8, “Nested Join Optimization”.
Index hints can be specified to affect how the MySQL optimizer
makes use of indexes. For more information, see
Section 8.9.4, “Index Hints”. Optimizer hints and the
optimizer_switch system variable are other
ways to influence optimizer use of indexes. See
Section 8.9.3, “Optimizer Hints”, and
Section 8.9.2, “Switchable Optimizations”.
The following list describes general factors to take into account when writing joins:
A table reference can be aliased using
tbl_name AS alias_name or
tbl_name alias_name:
SELECT t1.name, t2.salary
  FROM employee AS t1 INNER JOIN info AS t2 ON t1.name = t2.name;

SELECT t1.name, t2.salary
  FROM employee t1 INNER JOIN info t2 ON t1.name = t2.name;
A table_subquery is also known as
a derived table or subquery in the FROM
clause. See Section 13.2.11.8, “Derived Tables”. Such
subqueries must include an alias to
give the subquery result a table name, and may optionally
include a list of table column names in parentheses. A
trivial example follows:
SELECT * FROM (SELECT 1, 2, 3) AS t1;
The maximum number of tables that can be referenced in a
single join is 61. This includes a join handled by merging
derived tables and views in the FROM
clause into the outer query block (see
Section 8.2.2.4, “Optimizing Derived Tables, View References, and Common Table Expressions
with Merging or Materialization”).
INNER JOIN and ,
(comma) are semantically equivalent in the absence of a join
condition: both produce a Cartesian product between the
specified tables (that is, each and every row in the first
table is joined to each and every row in the second table).
However, the precedence of the comma operator is less than
that of INNER JOIN, CROSS
JOIN, LEFT JOIN, and so on. If
you mix comma joins with the other join types when there is
a join condition, an error of the form Unknown
column 'col_name' in 'on
clause' may occur. Information about dealing with
this problem is given later in this section.
The search_condition used with
ON is any conditional expression of the
form that can be used in a WHERE clause.
Generally, the ON clause serves for
conditions that specify how to join tables, and the
WHERE clause restricts which rows to
include in the result set.
If there is no matching row for the right table in the
ON or USING part in a
LEFT JOIN, a row with all columns set to
NULL is used for the right table. You can
use this fact to find rows in a table that have no
counterpart in another table:
SELECT left_tbl.* FROM left_tbl LEFT JOIN right_tbl ON left_tbl.id = right_tbl.id WHERE right_tbl.id IS NULL;
This example finds all rows in left_tbl
with an id value that is not present in
right_tbl (that is, all rows in
left_tbl with no corresponding row in
right_tbl). See
Section 8.2.1.9, “Outer Join Optimization”.
The
USING(join_column_list)
clause names a list of columns that must exist in both
tables. If tables a and
b both contain columns
c1, c2, and
c3, the following join compares
corresponding columns from the two tables:
a LEFT JOIN b USING (c1, c2, c3)
The NATURAL [LEFT] JOIN of two tables is
defined to be semantically equivalent to an INNER
JOIN or a LEFT JOIN with a
USING clause that names all columns that
exist in both tables.
RIGHT JOIN works analogously to
LEFT JOIN. To keep code portable across
databases, it is recommended that you use LEFT
JOIN instead of RIGHT JOIN.
The { OJ ... } syntax shown in the join
syntax description exists only for compatibility with ODBC.
The curly braces in the syntax should be written literally;
they are not metasyntax as used elsewhere in syntax
descriptions.
SELECT left_tbl.*
FROM { OJ left_tbl LEFT OUTER JOIN right_tbl
ON left_tbl.id = right_tbl.id }
WHERE right_tbl.id IS NULL;
You can use other types of joins within { OJ ...
}, such as INNER JOIN or
RIGHT OUTER JOIN. This helps with
compatibility with some third-party applications, but is not
official ODBC syntax.
STRAIGHT_JOIN is similar to
JOIN, except that the left table is
always read before the right table. This can be used for
those (few) cases for which the join optimizer processes the
tables in a suboptimal order.
Some join examples:
SELECT * FROM table1, table2;

SELECT * FROM table1 INNER JOIN table2 ON table1.id = table2.id;

SELECT * FROM table1 LEFT JOIN table2 ON table1.id = table2.id;

SELECT * FROM table1 LEFT JOIN table2 USING (id);

SELECT * FROM table1 LEFT JOIN table2 ON table1.id = table2.id
  LEFT JOIN table3 ON table2.id = table3.id;
Natural joins and joins with USING, including
outer join variants, are processed according to the SQL:2003
standard:
Redundant columns of a NATURAL join do
not appear. Consider this set of statements:
CREATE TABLE t1 (i INT, j INT);
CREATE TABLE t2 (k INT, j INT);
INSERT INTO t1 VALUES(1, 1);
INSERT INTO t2 VALUES(1, 1);
SELECT * FROM t1 NATURAL JOIN t2;
SELECT * FROM t1 JOIN t2 USING (j);
In the first SELECT
statement, column j appears in both
tables and thus becomes a join column, so, according to
standard SQL, it should appear only once in the output, not
twice. Similarly, in the second SELECT statement, column
j is named in the
USING clause and should appear only once
in the output, not twice.
Thus, the statements produce this output:
+------+------+------+
| j    | i    | k    |
+------+------+------+
|    1 |    1 |    1 |
+------+------+------+

+------+------+------+
| j    | i    | k    |
+------+------+------+
|    1 |    1 |    1 |
+------+------+------+
Redundant column elimination and column ordering occurs according to standard SQL, producing this display order:
First, coalesced common columns of the two joined tables, in the order in which they occur in the first table
Second, columns unique to the first table, in order in which they occur in that table
Third, columns unique to the second table, in order in which they occur in that table
The single result column that replaces two common columns is
defined using the coalesce operation. That is, for two
t1.a and t2.a the
resulting single join column a is defined
as a = COALESCE(t1.a, t2.a), where:
COALESCE(x, y) = (CASE WHEN x IS NOT NULL THEN x ELSE y END)
If the join operation is any other join, the result columns of the join consist of the concatenation of all columns of the joined tables.
A consequence of the definition of coalesced columns is
that, for outer joins, the coalesced column contains the
value of the non-NULL column if one of
the two columns is always NULL. If
neither or both columns are NULL, both
common columns have the same value, so it doesn't matter
which one is chosen as the value of the coalesced column. A
simple way to interpret this is to consider that a coalesced
column of an outer join is represented by the common column
of the inner table of a JOIN. Suppose
that the tables t1(a, b) and
t2(a, c) have the following contents:
t1    t2
----  ----
1 x   2 z
2 y   3 w
Then, for this join, column a contains
the values of t1.a:
mysql> SELECT * FROM t1 NATURAL LEFT JOIN t2;
+------+------+------+
| a | b | c |
+------+------+------+
| 1 | x | NULL |
| 2 | y | z |
+------+------+------+
By contrast, for this join, column a
contains the values of t2.a.
mysql> SELECT * FROM t1 NATURAL RIGHT JOIN t2;
+------+------+------+
| a | c | b |
+------+------+------+
| 2 | z | y |
| 3 | w | NULL |
+------+------+------+
Compare those results to the otherwise equivalent queries
with JOIN ... ON:
mysql> SELECT * FROM t1 LEFT JOIN t2 ON (t1.a = t2.a);
+------+------+------+------+
| a | b | a | c |
+------+------+------+------+
| 1 | x | NULL | NULL |
| 2 | y | 2 | z |
+------+------+------+------+
mysql> SELECT * FROM t1 RIGHT JOIN t2 ON (t1.a = t2.a);
+------+------+------+------+
| a | b | a | c |
+------+------+------+------+
| 2 | y | 2 | z |
| NULL | NULL | 3 | w |
+------+------+------+------+
A USING clause can be rewritten as an
ON clause that compares corresponding
columns. However, although USING and
ON are similar, they are not quite the
same. Consider the following two queries:
a LEFT JOIN b USING (c1, c2, c3)

a LEFT JOIN b ON a.c1 = b.c1 AND a.c2 = b.c2 AND a.c3 = b.c3
With respect to determining which rows satisfy the join condition, both joins are semantically identical.
With respect to determining which columns to display for
SELECT * expansion, the two joins are not
semantically identical. The USING join
selects the coalesced value of corresponding columns,
whereas the ON join selects all columns
from all tables. For the USING join,
SELECT * selects these values:
COALESCE(a.c1, b.c1), COALESCE(a.c2, b.c2), COALESCE(a.c3, b.c3)
For the ON join, SELECT
* selects these values:
a.c1, a.c2, a.c3, b.c1, b.c2, b.c3
With an inner join, COALESCE(a.c1,
b.c1) is the same as either
a.c1 or b.c1 because
both columns will have the same value. With an outer join
(such as LEFT JOIN), one of the two
columns can be NULL. That column is
omitted from the result.
An ON clause can refer only to its
operands.
Example:
CREATE TABLE t1 (i1 INT);
CREATE TABLE t2 (i2 INT);
CREATE TABLE t3 (i3 INT);
SELECT * FROM t1 JOIN t2 ON (i1 = i3) JOIN t3;
The statement fails with an Unknown column 'i3' in
'on clause' error because i3 is
a column in t3, which is not an operand
of the ON clause. To enable the join to
be processed, rewrite the statement as follows:
SELECT * FROM t1 JOIN t2 JOIN t3 ON (i1 = i3);
JOIN has higher precedence than the comma
operator (,), so the join expression
t1, t2 JOIN t3 is interpreted as
(t1, (t2 JOIN t3)), not as ((t1,
t2) JOIN t3). This affects statements that use an
ON clause because that clause can refer
only to columns in the operands of the join, and the
precedence affects interpretation of what those operands
are.
Example:
CREATE TABLE t1 (i1 INT, j1 INT);
CREATE TABLE t2 (i2 INT, j2 INT);
CREATE TABLE t3 (i3 INT, j3 INT);
INSERT INTO t1 VALUES(1, 1);
INSERT INTO t2 VALUES(1, 1);
INSERT INTO t3 VALUES(1, 1);
SELECT * FROM t1, t2 JOIN t3 ON (t1.i1 = t3.i3);
The JOIN takes precedence over the comma
operator, so the operands for the ON
clause are t2 and t3.
Because t1.i1 is not a column in either
of the operands, the result is an Unknown column
't1.i1' in 'on clause' error.
To enable the join to be processed, use either of these strategies:
Group the first two tables explicitly with parentheses
so that the operands for the ON
clause are (t1, t2) and
t3:
SELECT * FROM (t1, t2) JOIN t3 ON (t1.i1 = t3.i3);
Avoid the use of the comma operator and use
JOIN instead:
SELECT * FROM t1 JOIN t2 JOIN t3 ON (t1.i1 = t3.i3);
The same precedence interpretation also applies to
statements that mix the comma operator with INNER
JOIN, CROSS JOIN, LEFT
JOIN, and RIGHT JOIN, all of
which have higher precedence than the comma operator.
A MySQL extension compared to the SQL:2003 standard is that
MySQL permits you to qualify the common (coalesced) columns
of NATURAL or USING
joins, whereas the standard disallows that.
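For example, the following statement is accepted by MySQL but rejected by the standard because it qualifies the coalesced column (a sketch assuming tables t1 and t2 that share a column c1):
SELECT t1.c1, t2.c1 FROM t1 NATURAL JOIN t2;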
SELECT ... UNION [ALL | DISTINCT] SELECT ... [UNION [ALL | DISTINCT] SELECT ...]
UNION combines the result from
multiple SELECT statements into a
single result set.
The result set column names are taken from the column names of
the first SELECT statement.
Selected columns listed in corresponding positions of each
SELECT statement should have the
same data type. For example, the first column selected by the
first statement should have the same type as the first column
selected by the other statements. If the data types of
corresponding SELECT columns do
not match, the types and lengths of the columns in the
UNION result take into account
the values retrieved by all the
SELECT statements. For example,
consider the following, where the column length is not
constrained to the length of the value from the first
SELECT:
mysql> SELECT REPEAT('a',1) UNION SELECT REPEAT('b',20);
+----------------------+
| REPEAT('a',1) |
+----------------------+
| a |
| bbbbbbbbbbbbbbbbbbbb |
+----------------------+
Beginning with MySQL 8.0.19, you can also use a
TABLE statement or
VALUES statement in a
UNION wherever you can employ the
equivalent SELECT statement.
Assume that tables t1 and
t2 are created and populated as shown here:
CREATE TABLE t1 (x INT, y INT);
INSERT INTO t1 VALUES ROW(4,-2),ROW(5,9);
CREATE TABLE t2 (a INT, b INT);
INSERT INTO t2 VALUES ROW(1,2),ROW(3,4);
The preceding being the case, and disregarding the column names
in the output of the queries beginning with
VALUES, all of the following
UNION queries yield the same result:
SELECT * FROM t1 UNION SELECT * FROM t2;
TABLE t1 UNION SELECT * FROM t2;
VALUES ROW(4,-2), ROW(5,9) UNION SELECT * FROM t2;
SELECT * FROM t1 UNION TABLE t2;
TABLE t1 UNION TABLE t2;
VALUES ROW(4,-2), ROW(5,9) UNION TABLE t2;
SELECT * FROM t1 UNION VALUES ROW(4,-2),ROW(5,9);
TABLE t1 UNION VALUES ROW(4,-2),ROW(5,9);
VALUES ROW(4,-2), ROW(5,9) UNION VALUES ROW(4,-2),ROW(5,9);
To force the column names to be the same, wrap the
VALUES on the left hand side in a
SELECT and use aliases, like this:
SELECT * FROM (VALUES ROW(4,-2), ROW(5,9)) AS t(x,y) UNION TABLE t2;
SELECT * FROM (VALUES ROW(4,-2), ROW(5,9)) AS t(x,y) UNION VALUES ROW(4,-2),ROW(5,9);
By default, duplicate rows are removed from
UNION results. The optional
DISTINCT keyword has the same effect but
makes it explicit. With the optional ALL
keyword, duplicate-row removal does not occur and the result
includes all matching rows from all the
SELECT statements.
You can mix UNION
ALL and UNION
DISTINCT in the same query. Mixed
UNION types are treated such that
a DISTINCT union overrides any
ALL union to its left. A
DISTINCT union can be produced explicitly by
using UNION
DISTINCT or implicitly by using
UNION with no following
DISTINCT or ALL keyword.
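For example, in the following statement the final DISTINCT union overrides the ALL union to its left, so the result contains a single row (a minimal sketch using literal values):
SELECT 1 UNION ALL SELECT 1 UNION DISTINCT SELECT 1;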
In MySQL 8.0.19 and later, UNION ALL and
UNION DISTINCT work the same way when one or
more TABLE statements are used in
the union.
To apply an ORDER BY or
LIMIT clause to an individual
SELECT, parenthesize the
SELECT and place the clause
inside the parentheses:
(SELECT a FROM t1 WHERE a=10 AND B=1 ORDER BY a LIMIT 10)
UNION
(SELECT a FROM t2 WHERE a=11 AND B=2 ORDER BY a LIMIT 10);
Use of ORDER BY for individual
SELECT statements implies nothing
about the order in which the rows appear in the final result
because UNION by default produces
an unordered set of rows. Therefore, ORDER BY
in this context typically is used in conjunction with
LIMIT, to determine the subset of the
selected rows to retrieve for the
SELECT, even though it does not
necessarily affect the order of those rows in the final
UNION result. If ORDER
BY appears without LIMIT in a
SELECT, it is optimized away
because it will have no effect, anyway.
To use an ORDER BY or
LIMIT clause to sort or limit the entire
UNION result, parenthesize the
individual SELECT statements and
place the ORDER BY or
LIMIT after the last one:
(SELECT a FROM t1 WHERE a=10 AND B=1)
UNION
(SELECT a FROM t2 WHERE a=11 AND B=2)
ORDER BY a LIMIT 10;
A statement without parentheses is equivalent to one parenthesized as just shown.
Beginning with MySQL 8.0.19, you can use ORDER
BY and LIMIT with
TABLE in unions in the same way
as just shown, bearing in mind that TABLE
does not support a WHERE clause.
This kind of ORDER BY cannot use column
references that include a table name (that is, names in
tbl_name.col_name
format). Instead, provide a column alias in the first
SELECT statement and refer to the
alias in the ORDER BY. (Alternatively, refer
to the column in the ORDER BY using its
column position. However, use of column positions is
deprecated.)
Also, if a column to be sorted is aliased, the ORDER
BY clause must refer to the
alias, not the column name. The first of the following
statements is permitted, but the second fails with an
Unknown column 'a' in 'order clause' error:
(SELECT a AS b FROM t) UNION (SELECT ...) ORDER BY b;
(SELECT a AS b FROM t) UNION (SELECT ...) ORDER BY a;
To cause rows in a UNION result
to consist of the sets of rows retrieved by each
SELECT one after the other,
select an additional column in each
SELECT to use as a sort column
and add an ORDER BY that sorts on that column
following the last SELECT:
(SELECT 1 AS sort_col, col1a, col1b, ... FROM t1)
UNION
(SELECT 2, col2a, col2b, ... FROM t2) ORDER BY sort_col;
To additionally maintain sort order within individual
SELECT results, add a secondary
column to the ORDER BY clause:
(SELECT 1 AS sort_col, col1a, col1b, ... FROM t1)
UNION
(SELECT 2, col2a, col2b, ... FROM t2) ORDER BY sort_col, col1a;
Use of an additional column also enables you to determine which
SELECT each row comes from. Extra
columns can provide other identifying information as well, such
as a string that indicates a table name.
In a UNION, the
SELECT statements are normal
select statements, but with the following restrictions:
HIGH_PRIORITY in the first
SELECT has no effect.
HIGH_PRIORITY in any subsequent
SELECT produces a syntax
error.
Only the last SELECT
statement can use an INTO clause.
However, the entire UNION
result is written to the INTO output
destination.
As of MySQL 8.0.20, these two
UNION variants containing
INTO are deprecated and support for them
will be removed in a future MySQL version:
In the trailing query block of a query expression, use
of INTO before
FROM produces a warning. Example:
... UNION SELECT * INTO OUTFILE 'file_name' FROM table_name;
In a parenthesized trailing block of a query expression,
use of INTO (regardless of its
position relative to FROM) produces a
warning. Example:
... UNION (SELECT * INTO OUTFILE 'file_name' FROM table_name);
Those variants are deprecated because they are confusing, as
if they collect information from the named table rather than
the entire query expression (the UNION).
UNION queries with an aggregate
function in an ORDER BY clause are rejected
with an
ER_AGGREGATE_ORDER_FOR_UNION
error. Example:
SELECT 1 AS foo UNION SELECT 2 ORDER BY MAX(1);
In MySQL 8.0, the parser rules for
SELECT and
UNION were refactored to be more
consistent (the same SELECT
syntax applies uniformly in each such context) and reduce
duplication. Compared to MySQL 5.7, several
user-visible effects resulted from this work, which may require
rewriting of certain statements:
NATURAL JOIN permits an optional
INNER keyword (NATURAL INNER
JOIN), in compliance with standard SQL.
Right-deep joins without parentheses are permitted (for
example, ... JOIN ... JOIN ... ON ...
ON), in compliance with standard SQL.
STRAIGHT_JOIN now permits a
USING clause, similar to other inner
joins.
The parser accepts parentheses around query expressions. For
example, (SELECT ... UNION SELECT ...) is
permitted.
The parser better conforms to the documented permitted
placement of the SQL_CACHE and
SQL_NO_CACHE query modifiers.
Left-hand nesting of unions, previously permitted only in subqueries, is now permitted in top-level statements. For example, this statement is now accepted as valid:
(SELECT 1 UNION SELECT 1) UNION SELECT 1;
Locking clauses (FOR UPDATE,
LOCK IN SHARE MODE) are allowed only in
non-UNION queries. This means that
parentheses must be used for SELECT
statements containing locking clauses. This statement is no
longer accepted as valid:
SELECT 1 FOR UPDATE UNION SELECT 1 FOR UPDATE;
Instead, write the statement like this:
(SELECT 1 FOR UPDATE) UNION (SELECT 1 FOR UPDATE);
A subquery is a SELECT statement
within another statement.
All subquery forms and operations that the SQL standard requires are supported, as well as a few features that are MySQL-specific.
Here is an example of a subquery:
SELECT * FROM t1 WHERE column1 = (SELECT column1 FROM t2);
In this example, SELECT * FROM t1 ... is the
outer query (or outer
statement), and (SELECT column1 FROM
t2) is the subquery. We say that
the subquery is nested within the outer
query, and in fact it is possible to nest subqueries within other
subqueries, to a considerable depth. A subquery must always appear
within parentheses.
The main advantages of subqueries are:
They allow queries that are structured so that it is possible to isolate each part of a statement.
They provide alternative ways to perform operations that would otherwise require complex joins and unions.
Many people find subqueries more readable than complex joins or unions. Indeed, it was the innovation of subqueries that gave people the original idea of calling the early SQL “Structured Query Language.”
Here is an example statement that shows the major points about subquery syntax as specified by the SQL standard and supported in MySQL:
DELETE FROM t1
WHERE s11 > ANY
(SELECT COUNT(*) /* no hint */ FROM t2
WHERE NOT EXISTS
(SELECT * FROM t3
WHERE ROW(5*t2.s1,77)=
(SELECT 50,11*s1 FROM t4 UNION SELECT 50,77 FROM
      (SELECT * FROM t5) AS t5)));

A subquery can return a scalar (a single value), a single row, a single column, or a table (one or more rows of one or more columns). These are called scalar, column, row, and table subqueries. Subqueries that return a particular kind of result often can be used only in certain contexts, as described in the following sections.
There are few restrictions on the type of statements in which
subqueries can be used. A subquery can contain many of the
keywords or clauses that an ordinary
SELECT can contain:
DISTINCT, GROUP BY,
ORDER BY, LIMIT, joins,
index hints, UNION constructs,
comments, functions, and so on.
Beginning with MySQL 8.0.19, TABLE
and VALUES statements can be used
in subqueries. Subqueries using
VALUES are generally more verbose
versions of subqueries that can be rewritten more compactly using
set notation, or with SELECT or
TABLE syntax; assuming that table
ts is created using the statement
CREATE TABLE
ts VALUES ROW(2), ROW(4), ROW(6), the statements shown
here are all equivalent:
SELECT * FROM tt
WHERE b > ANY (VALUES ROW(2), ROW(4), ROW(6));
SELECT * FROM tt
WHERE b > ANY (2, 4, 6);
SELECT * FROM tt
WHERE b > ANY (SELECT * FROM ts);
SELECT * FROM tt
WHERE b > ANY (TABLE ts);
Examples of TABLE subqueries are
shown in the sections that follow.
A subquery's outer statement can be any one of:
SELECT,
INSERT,
UPDATE,
DELETE,
SET, or
DO.
For information about how the optimizer handles subqueries, see Section 8.2.2, “Optimizing Subqueries, Derived Tables, View References, and Common Table Expressions”. For a discussion of restrictions on subquery use, including performance issues for certain forms of subquery syntax, see Section 13.2.11.12, “Restrictions on Subqueries”.
In its simplest form, a subquery is a scalar subquery that
returns a single value. A scalar subquery is a simple operand,
and you can use it almost anywhere a single column value or
literal is legal, and you can expect it to have those
characteristics that all operands have: a data type, a length,
an indication that it can be NULL, and so on.
For example:
CREATE TABLE t1 (s1 INT, s2 CHAR(5) NOT NULL);
INSERT INTO t1 VALUES(100, 'abcde');
SELECT (SELECT s2 FROM t1);
The subquery in this SELECT
returns a single value ('abcde') that has a
data type of CHAR, a length of 5,
a character set and collation equal to the defaults in effect at
CREATE TABLE time, and an
indication that the value in the column can be
NULL. Nullability of the value selected by a
scalar subquery is not copied because if the subquery result is
empty, the result is NULL. For the subquery
just shown, if t1 were empty, the result
would be NULL even though
s2 is NOT NULL.
There are a few contexts in which a scalar subquery cannot be
used. If a statement permits only a literal value, you cannot
use a subquery. For example, LIMIT requires
literal integer arguments, and LOAD
DATA requires a literal string file name. You cannot
use subqueries to supply these values.
When you see examples in the following sections that contain the
rather spartan construct (SELECT column1 FROM
t1), imagine that your own code contains much more
diverse and complex constructions.
Suppose that we make two tables:
CREATE TABLE t1 (s1 INT);
INSERT INTO t1 VALUES (1);
CREATE TABLE t2 (s1 INT);
INSERT INTO t2 VALUES (2);
Then perform a SELECT:
SELECT (SELECT s1 FROM t2) FROM t1;
The result is 2 because there is a row in
t2 containing a column s1
that has a value of 2.
In MySQL 8.0.19 and later, the preceding query can also be
written like this, using TABLE:
SELECT (TABLE t2) FROM t1;
A scalar subquery can be part of an expression, but remember the parentheses, even if the subquery is an operand that provides an argument for a function. For example:
SELECT UPPER((SELECT s1 FROM t1)) FROM t2;
The same result can be obtained in MySQL 8.0.19 and later using
SELECT UPPER((TABLE t1)) FROM t2.
The most common use of a subquery is in the form:
non_subquery_operand comparison_operator (subquery)
Where comparison_operator is one of
these operators:
= > < >= <= <> != <=>
For example:
... WHERE 'a' = (SELECT column1 FROM t1)
MySQL also permits this construct:
non_subquery_operand LIKE (subquery)
At one time the only legal place for a subquery was on the right side of a comparison, and you might still find some old DBMSs that insist on this.
Here is an example of a common-form subquery comparison that you
cannot do with a join. It finds all the rows in table
t1 for which the column1
value is equal to a maximum value in table
t2:
SELECT * FROM t1 WHERE column1 = (SELECT MAX(column2) FROM t2);
Here is another example, which again is impossible with a join
because it involves aggregating for one of the tables. It finds
all rows in table t1 containing a value that
occurs twice in a given column:
SELECT * FROM t1 AS t WHERE 2 = (SELECT COUNT(*) FROM t1 WHERE t1.id = t.id);
For a comparison of the subquery to a scalar, the subquery must return a scalar. For a comparison of the subquery to a row constructor, the subquery must be a row subquery that returns a row with the same number of values as the row constructor. See Section 13.2.11.5, “Row Subqueries”.
Syntax:
operand comparison_operator ANY (subquery)
operand IN (subquery)
operand comparison_operator SOME (subquery)
Where comparison_operator is one of
these operators:
= > < >= <= <> !=
The ANY keyword, which must follow a
comparison operator, means “return TRUE
if the comparison is TRUE for
ANY of the values in the column that the
subquery returns.” For example:
SELECT s1 FROM t1 WHERE s1 > ANY (SELECT s1 FROM t2);
Suppose that there is a row in table t1
containing (10). The expression is
TRUE if table t2 contains
(21,14,7) because there is a value
7 in t2 that is less than
10. The expression is
FALSE if table t2 contains
(20,10), or if table t2 is
empty. The expression is unknown (that is,
NULL) if table t2 contains
(NULL,NULL,NULL).
When used with a subquery, the word IN is an
alias for = ANY. Thus, these two statements
are the same:
SELECT s1 FROM t1 WHERE s1 = ANY (SELECT s1 FROM t2);
SELECT s1 FROM t1 WHERE s1 IN (SELECT s1 FROM t2);
IN and = ANY are not
synonyms when used with an expression list.
IN can take an expression list, but
= ANY cannot. See
Section 12.3.2, “Comparison Functions and Operators”.
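For example, the first of the following statements is valid, but the second is rejected with a syntax error because ANY requires a subquery rather than an expression list:
SELECT s1 FROM t1 WHERE s1 IN (1, 2, 3);
SELECT s1 FROM t1 WHERE s1 = ANY (1, 2, 3);   -- not permitted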
NOT IN is not an alias for <>
ANY, but for <> ALL. See
Section 13.2.11.4, “Subqueries with ALL”.
The word SOME is an alias for
ANY. Thus, these two statements are the same:
SELECT s1 FROM t1 WHERE s1 <> ANY (SELECT s1 FROM t2);
SELECT s1 FROM t1 WHERE s1 <> SOME (SELECT s1 FROM t2);
Use of the word SOME is rare, but this
example shows why it might be useful. To most people, the
English phrase “a is not equal to any b” means
“there is no b which is equal to a,” but that is
not what is meant by the SQL syntax. The syntax means
“there is some b to which a is not equal.” Using
<> SOME instead helps ensure that
everyone understands the true meaning of the query.
Beginning with MySQL 8.0.19, you can use
TABLE in a scalar
IN, ANY, or
SOME subquery provided the table contains
only a single column. If t2 has only one
column, the statements shown previously in this section can be
written as shown here, in each case substituting TABLE
t2 for SELECT s1 FROM t2:
SELECT s1 FROM t1 WHERE s1 > ANY (TABLE t2);
SELECT s1 FROM t1 WHERE s1 = ANY (TABLE t2);
SELECT s1 FROM t1 WHERE s1 IN (TABLE t2);
SELECT s1 FROM t1 WHERE s1 <> ANY (TABLE t2);
SELECT s1 FROM t1 WHERE s1 <> SOME (TABLE t2);
Syntax:
operand comparison_operator ALL (subquery)
The word ALL, which must follow a comparison
operator, means “return TRUE if the
comparison is TRUE for ALL
of the values in the column that the subquery returns.”
For example:
SELECT s1 FROM t1 WHERE s1 > ALL (SELECT s1 FROM t2);
Suppose that there is a row in table t1
containing (10). The expression is
TRUE if table t2 contains
(-5,0,+5) because 10 is
greater than all three values in t2. The
expression is FALSE if table
t2 contains
(12,6,NULL,-100) because there is a single
value 12 in table t2 that
is greater than 10. The expression is
unknown (that is, NULL)
if table t2 contains
(0,NULL,1).
Finally, the expression is TRUE if table
t2 is empty. So, the following expression is
TRUE when table t2 is
empty:
SELECT * FROM t1 WHERE 1 > ALL (SELECT s1 FROM t2);
But this expression is NULL when table
t2 is empty:
SELECT * FROM t1 WHERE 1 > (SELECT s1 FROM t2);
In addition, the following expression is NULL
when table t2 is empty:
SELECT * FROM t1 WHERE 1 > ALL (SELECT MAX(s1) FROM t2);
In general, tables containing NULL
values and empty tables are
“edge cases.” When writing subqueries, always
consider whether you have taken those two possibilities into
account.
NOT IN is an alias for <>
ALL. Thus, these two statements are the same:
SELECT s1 FROM t1 WHERE s1 <> ALL (SELECT s1 FROM t2);
SELECT s1 FROM t1 WHERE s1 NOT IN (SELECT s1 FROM t2);
MySQL 8.0.19 supports the TABLE
statement. As with IN,
ANY, and SOME, you can use
TABLE with ALL and
NOT IN provided that the following two
conditions are met:
The table in the subquery contains only one column
The subquery does not depend on a column expression
For example, assuming that table t2 consists
of a single column, the last two statements shown previously can
be written using TABLE t2 like this:
SELECT s1 FROM t1 WHERE s1 <> ALL (TABLE t2);
SELECT s1 FROM t1 WHERE s1 NOT IN (TABLE t2);
A query such as SELECT * FROM t1 WHERE 1 > ALL
(SELECT MAX(s1) FROM t2); cannot be written using
TABLE t2 because the subquery depends on a
column expression.
Scalar or column subqueries return a single value or a column of values. A row subquery is a subquery variant that returns a single row and can thus return more than one column value. Legal operators for row subquery comparisons are:
= > < >= <= <> != <=>
Here are two examples:
SELECT * FROM t1 WHERE (col1,col2) = (SELECT col3, col4 FROM t2 WHERE id = 10);
SELECT * FROM t1 WHERE ROW(col1,col2) = (SELECT col3, col4 FROM t2 WHERE id = 10);
For both queries, if the table t2 contains a
single row with id = 10, the subquery returns
a single row. If this row has col3 and
col4 values equal to the
col1 and col2 values of
any rows in t1, the WHERE
expression is TRUE and each query returns
those t1 rows. If the t2
row col3 and col4 values
are not equal the col1 and
col2 values of any t1 row,
the expression is FALSE and the query returns
an empty result set. The expression is
unknown (that is, NULL)
if the subquery produces no rows. An error occurs if the
subquery produces multiple rows because a row subquery can
return at most one row.
For information about how each operator works for row comparisons, see Section 12.3.2, “Comparison Functions and Operators”.
The expressions (1,2) and
ROW(1,2) are sometimes called
row constructors. The two
are equivalent. The row constructor and the row returned by the
subquery must contain the same number of values.
A row constructor is used for comparisons with subqueries that return two or more columns. When a subquery returns a single column, this is regarded as a scalar value and not as a row, so a row constructor cannot be used with a subquery that does not return at least two columns. Thus, the following query fails with a syntax error:
SELECT * FROM t1 WHERE ROW(1) = (SELECT column1 FROM t2)
Row constructors are legal in other contexts. For example, the following two statements are semantically equivalent (and are handled in the same way by the optimizer):
SELECT * FROM t1 WHERE (column1,column2) = (1,1);
SELECT * FROM t1 WHERE column1 = 1 AND column2 = 1;
The following query answers the request, “find all rows in
table t1 that also exist in table
t2”:
SELECT column1,column2,column3
FROM t1
WHERE (column1,column2,column3) IN
    (SELECT column1,column2,column3 FROM t2);

For more information about the optimizer and row constructors, see Section 8.2.1.22, “Row Constructor Expression Optimization”.
If a subquery returns any rows at all, EXISTS
subquery is TRUE, and NOT EXISTS
subquery is FALSE. For example:
SELECT column1 FROM t1 WHERE EXISTS (SELECT * FROM t2);
Traditionally, an EXISTS subquery starts with
SELECT *, but it could begin with
SELECT 5 or SELECT column1
or anything at all. MySQL ignores the
SELECT list in such a subquery,
so it makes no difference.
For the preceding example, if t2 contains any
rows, even rows with nothing but NULL values,
the EXISTS condition is
TRUE. This is actually an unlikely example
because a [NOT] EXISTS subquery almost always
contains correlations. Here are some more realistic examples:
What kind of store is present in one or more cities?
SELECT DISTINCT store_type FROM stores
WHERE EXISTS (SELECT * FROM cities_stores
    WHERE cities_stores.store_type = stores.store_type);

What kind of store is present in no cities?
SELECT DISTINCT store_type FROM stores
WHERE NOT EXISTS (SELECT * FROM cities_stores
    WHERE cities_stores.store_type = stores.store_type);

What kind of store is present in all cities?
SELECT DISTINCT store_type FROM stores s1
WHERE NOT EXISTS (
SELECT * FROM cities WHERE NOT EXISTS (
SELECT * FROM cities_stores
WHERE cities_stores.city = cities.city
AND cities_stores.store_type = stores.store_type));
The last example is a double-nested NOT
EXISTS query. That is, it has a NOT
EXISTS clause within a NOT EXISTS
clause. Formally, it answers the question “does a city
exist with a store that is not in
Stores”? But it is easier to say that
a nested NOT EXISTS answers the question
“is x TRUE
for all y?”
In MySQL 8.0.19 and later, you can also use
EXISTS or NOT EXISTS with
TABLE in the subquery, like this:
SELECT column1 FROM t1 WHERE EXISTS (TABLE t2);
The results are the same as when using SELECT
* with no WHERE clause in the
subquery.
A correlated subquery is a subquery that contains a reference to a table that also appears in the outer query. For example:
SELECT * FROM t1
WHERE column1 = ANY (SELECT column1 FROM t2
WHERE t2.column2 = t1.column2);
Notice that the subquery contains a reference to a column of
t1, even though the subquery's
FROM clause does not mention a table
t1. So, MySQL looks outside the subquery, and
finds t1 in the outer query.
Suppose that table t1 contains a row where
column1 = 5 and column2 =
6; meanwhile, table t2 contains a
row where column1 = 5 and column2 =
7. The simple expression ... WHERE column1 =
ANY (SELECT column1 FROM t2) would be
TRUE, but in this example, the
WHERE clause within the subquery is
FALSE (because (5,6) is
not equal to (5,7)), so the expression as a
whole is FALSE.
Scoping rule: MySQL evaluates from inside to outside. For example:
SELECT column1 FROM t1 AS x
WHERE x.column1 = (SELECT column1 FROM t2 AS x
WHERE x.column1 = (SELECT column1 FROM t3
WHERE x.column2 = t3.column1));
In this statement, x.column2 must be a column
in table t2 because SELECT column1
FROM t2 AS x ... renames t2. It is
not a column in table t1 because
SELECT column1 FROM t1 ... is an outer query
that is farther out.
This section discusses general characteristics of derived
tables. For information about lateral derived tables preceded by
the LATERAL keyword, see
Section 13.2.11.9, “Lateral Derived Tables”.
A derived table is an expression that generates a table within
the scope of a query FROM clause. For
example, a subquery in a SELECT
statement FROM clause is a derived table:
SELECT ... FROM (subquery) [AS] tbl_name ...
The JSON_TABLE() function
generates a table and provides another way to create a derived
table:
SELECT * FROM JSON_TABLE(arg_list) [AS] tbl_name ...
The [AS] tbl_name
clause is mandatory because every table in a
FROM clause must have a name. Any columns in
the derived table must have unique names. Alternatively,
tbl_name may be followed by a
parenthesized list of names for the derived table columns:
SELECT ... FROM (subquery) [AS] tbl_name (col_list) ...
The number of column names must be the same as the number of table columns.
For the sake of illustration, assume that you have this table:
CREATE TABLE t1 (s1 INT, s2 CHAR(5), s3 FLOAT);
Here is how to use a subquery in the FROM
clause, using the example table:
INSERT INTO t1 VALUES (1,'1',1.0);
INSERT INTO t1 VALUES (2,'2',2.0);
SELECT sb1,sb2,sb3
  FROM (SELECT s1 AS sb1, s2 AS sb2, s3*2 AS sb3 FROM t1) AS sb
  WHERE sb1 > 1;
Result:
+------+------+------+
| sb1  | sb2  | sb3  |
+------+------+------+
|    2 | 2    |    4 |
+------+------+------+
Here is another example: Suppose that you want to know the average of a set of sums for a grouped table. This does not work:
SELECT AVG(SUM(column1)) FROM t1 GROUP BY column1;
However, this query provides the desired information:
SELECT AVG(sum_column1)
FROM (SELECT SUM(column1) AS sum_column1
FROM t1 GROUP BY column1) AS t1;
Notice that the column name used within the subquery
(sum_column1) is recognized in the outer
query.
The column names for a derived table come from its select list:
mysql> SELECT * FROM (SELECT 1, 2, 3, 4) AS dt;
+---+---+---+---+
| 1 | 2 | 3 | 4 |
+---+---+---+---+
| 1 | 2 | 3 | 4 |
+---+---+---+---+
To provide column names explicitly, follow the derived table name with a parenthesized list of column names:
mysql> SELECT * FROM (SELECT 1, 2, 3, 4) AS dt (a, b, c, d);
+---+---+---+---+
| a | b | c | d |
+---+---+---+---+
| 1 | 2 | 3 | 4 |
+---+---+---+---+
A derived table can return a scalar, column, row, or table.
Derived tables are subject to these restrictions:
A derived table cannot contain references to other tables of
the same SELECT (use a
LATERAL derived table for that; see
Section 13.2.11.9, “Lateral Derived Tables”).
Prior to MySQL 8.0.14, a derived table cannot contain outer
references. This is a MySQL restriction that is lifted in
MySQL 8.0.14, not a restriction of the SQL standard. For
example, the derived table dt in the
following query contains a reference t1.b
to the table t1 in the outer query:
SELECT * FROM t1
WHERE t1.d > (SELECT AVG(dt.a)
FROM (SELECT SUM(t2.a) AS a
FROM t2
WHERE t2.b = t1.b GROUP BY t2.c) dt
WHERE dt.a > 10);
The query is valid in MySQL 8.0.14 and higher. Before
8.0.14, it produces an error: Unknown column 't1.b'
in 'where clause'
The optimizer determines information about derived tables in
such a way that EXPLAIN does not
need to materialize them. See
Section 8.2.2.4, “Optimizing Derived Tables, View References, and Common Table Expressions
with Merging or Materialization”.
It is possible under certain circumstances that using
EXPLAIN
SELECT will modify table data. This can occur if the
outer query accesses any tables and an inner query invokes a
stored function that changes one or more rows of a table.
Suppose that there are two tables t1 and
t2 in database d1, and a
stored function f1 that modifies
t2, created as shown here:
CREATE DATABASE d1;
USE d1;
CREATE TABLE t1 (c1 INT);
CREATE TABLE t2 (c1 INT);
CREATE FUNCTION f1(p1 INT) RETURNS INT
BEGIN
INSERT INTO t2 VALUES (p1);
RETURN p1;
END;
Referencing the function directly in an
EXPLAIN
SELECT has no effect on t2, as
shown here:
mysql> SELECT * FROM t2;
Empty set (0.02 sec)

mysql> EXPLAIN SELECT f1(5)\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: NULL
   partitions: NULL
         type: NULL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: NULL
     filtered: NULL
        Extra: No tables used
1 row in set (0.01 sec)

mysql> SELECT * FROM t2;
Empty set (0.01 sec)
This is because the SELECT
statement did not reference any tables, as can be seen in the
table and Extra columns of
the output. This is also true of the following nested
SELECT:
mysql>EXPLAIN SELECT NOW() AS a1, (SELECT f1(5)) AS a2\G*************************** 1. row *************************** id: 1 select_type: PRIMARY table: NULL type: NULL possible_keys: NULL key: NULL key_len: NULL ref: NULL rows: NULL filtered: NULL Extra: No tables used 1 row in set, 1 warning (0.00 sec) mysql>SHOW WARNINGS;+-------+------+------------------------------------------+ | Level | Code | Message | +-------+------+------------------------------------------+ | Note | 1249 | Select 2 was reduced during optimization | +-------+------+------------------------------------------+ 1 row in set (0.00 sec) mysql>SELECT * FROM t2;Empty set (0.00 sec)
However, if the outer SELECT
references any tables, the optimizer executes the statement in
the subquery as well, with the result that t2
is modified:
mysql>EXPLAIN SELECT * FROM t1 AS a1, (SELECT f1(5)) AS a2\G*************************** 1. row *************************** id: 1 select_type: PRIMARY table: <derived2> partitions: NULL type: system possible_keys: NULL key: NULL key_len: NULL ref: NULL rows: 1 filtered: 100.00 Extra: NULL *************************** 2. row *************************** id: 1 select_type: PRIMARY table: a1 partitions: NULL type: ALL possible_keys: NULL key: NULL key_len: NULL ref: NULL rows: 1 filtered: 100.00 Extra: NULL *************************** 3. row *************************** id: 2 select_type: DERIVED table: NULL partitions: NULL type: NULL possible_keys: NULL key: NULL key_len: NULL ref: NULL rows: NULL filtered: NULL Extra: No tables used 3 rows in set (0.00 sec) mysql>SELECT * FROM t2;+------+ | c1 | +------+ | 5 | +------+ 1 row in set (0.00 sec)
This also means that an
EXPLAIN
SELECT statement such as the one shown here may take a
long time to execute because the
BENCHMARK() function is executed
once for each row in t1:
EXPLAIN SELECT * FROM t1 AS a1, (SELECT BENCHMARK(1000000, MD5(NOW())));
A derived table cannot normally refer to (depend on) columns of
preceding tables in the same FROM clause. As
of MySQL 8.0.14, a derived table may be defined as a lateral
derived table to specify that such references are permitted.
Nonlateral derived tables are specified using the syntax
discussed in Section 13.2.11.8, “Derived Tables”. The syntax for a
lateral derived table is the same as for a nonlateral derived
table except that the keyword LATERAL is
specified before the derived table specification. The
LATERAL keyword must precede each table to be
used as a lateral derived table.
Lateral derived tables are subject to these restrictions:
A lateral derived table can occur only in a
FROM clause, either in a list of tables
separated with commas or in a join specification
(JOIN, INNER JOIN,
CROSS JOIN, LEFT [OUTER]
JOIN, or RIGHT [OUTER] JOIN).
If a lateral derived table is in the right operand of a join
clause and contains a reference to the left operand, the
join operation must be an INNER JOIN,
CROSS JOIN, or LEFT [OUTER]
JOIN.
If the table is in the left operand and contains a reference
to the right operand, the join operation must be an
INNER JOIN, CROSS
JOIN, or RIGHT [OUTER] JOIN.
If a lateral derived table references an aggregate function,
the function's aggregation query cannot be the one that owns
the FROM clause in which the lateral
derived table occurs.
Per the SQL standard, a table function has an implicit
LATERAL, so it behaves as in MySQL 8.0
versions prior to 8.0.14. However, per the standard, the
LATERAL word is not allowed before
JSON_TABLE(), even though it
is implicit.
The following discussion shows how lateral derived tables make possible certain SQL operations that cannot be done with nonlateral derived tables or that require less-efficient workarounds.
Suppose that we want to solve this problem: Given a table of people in a sales force (where each row describes a member of the sales force), and a table of all sales (where each row describes a sale: salesperson, customer, amount, date), determine the size and customer of the largest sale for each salesperson. This problem can be approached two ways.
First approach to solving the problem: For each salesperson, calculate the maximum sale size, and also find the customer who provided this maximum. In MySQL, that can be done like this:
SELECT
salesperson.name,
-- find maximum sale size for this salesperson
(SELECT MAX(amount) AS amount
FROM all_sales
WHERE all_sales.salesperson_id = salesperson.id)
AS amount,
-- find customer for this maximum size
(SELECT customer_name
FROM all_sales
WHERE all_sales.salesperson_id = salesperson.id
AND all_sales.amount =
-- find maximum size, again
(SELECT MAX(amount) AS amount
FROM all_sales
WHERE all_sales.salesperson_id = salesperson.id))
AS customer_name
FROM
    salesperson;

That query is inefficient because it calculates the maximum size twice per salesperson (once in the first subquery and once in the second).
We can try to achieve an efficiency gain by calculating the maximum once per salesperson and “caching” it in a derived table, as shown by this modified query:
SELECT
salesperson.name,
max_sale.amount,
max_sale_customer.customer_name
FROM
salesperson,
-- calculate maximum size, cache it in transient derived table max_sale
(SELECT MAX(amount) AS amount
FROM all_sales
WHERE all_sales.salesperson_id = salesperson.id)
AS max_sale,
-- find customer, reusing cached maximum size
(SELECT customer_name
FROM all_sales
WHERE all_sales.salesperson_id = salesperson.id
AND all_sales.amount =
-- the cached maximum size
max_sale.amount)
AS max_sale_customer;
However, the query is illegal in SQL-92 because derived tables
cannot depend on other tables in the same
FROM clause. Derived tables must be constant
over the query's duration, not contain references to columns of
other FROM clause tables. As written, the
query produces this error:
ERROR 1054 (42S22): Unknown column 'salesperson.id' in 'where clause'
In SQL:1999, the query becomes legal if the derived tables are
preceded by the LATERAL keyword (which means
“this derived table depends on previous tables on its left
side”):
SELECT
salesperson.name,
max_sale.amount,
max_sale_customer.customer_name
FROM
salesperson,
-- calculate maximum size, cache it in transient derived table max_sale
LATERAL
(SELECT MAX(amount) AS amount
FROM all_sales
WHERE all_sales.salesperson_id = salesperson.id)
AS max_sale,
-- find customer, reusing cached maximum size
LATERAL
(SELECT customer_name
FROM all_sales
WHERE all_sales.salesperson_id = salesperson.id
AND all_sales.amount =
-- the cached maximum size
max_sale.amount)
AS max_sale_customer;

A lateral derived table need not be constant and is brought up to date each time a new row from a preceding table on which it depends is processed by the top query.
Second approach to solving the problem: A different solution
could be used if a subquery in the
SELECT list could return multiple
columns:
SELECT
salesperson.name,
-- find maximum size and customer at same time
(SELECT amount, customer_name
FROM all_sales
WHERE all_sales.salesperson_id = salesperson.id
ORDER BY amount DESC LIMIT 1)
FROM
    salesperson;

That is efficient but illegal. It does not work because such subqueries can return only a single column:
ERROR 1241 (21000): Operand should contain 1 column(s)
One attempt at rewriting the query is to select multiple columns from a derived table:
SELECT
salesperson.name,
max_sale.amount,
max_sale.customer_name
FROM
salesperson,
-- find maximum size and customer at same time
(SELECT amount, customer_name
FROM all_sales
WHERE all_sales.salesperson_id = salesperson.id
ORDER BY amount DESC LIMIT 1)
AS max_sale;
However, that also does not work. The derived table is dependent
on the salesperson table and thus fails
without LATERAL:
ERROR 1054 (42S22): Unknown column 'salesperson.id' in 'where clause'
Adding the LATERAL keyword makes the query
legal:
SELECT
salesperson.name,
max_sale.amount,
max_sale.customer_name
FROM
salesperson,
-- find maximum size and customer at same time
LATERAL
(SELECT amount, customer_name
FROM all_sales
WHERE all_sales.salesperson_id = salesperson.id
ORDER BY amount DESC LIMIT 1)
AS max_sale;
In short, LATERAL provides an efficient solution that
avoids the drawbacks of both approaches just discussed.
There are some errors that apply only to subqueries. This section describes them.
Unsupported subquery syntax:
ERROR 1235 (ER_NOT_SUPPORTED_YET) SQLSTATE = 42000 Message = "This version of MySQL doesn't yet support 'LIMIT & IN/ALL/ANY/SOME subquery'"
This means that MySQL does not support statements like the following:
SELECT * FROM t1 WHERE s1 IN (SELECT s2 FROM t2 ORDER BY s1 LIMIT 1)
Incorrect number of columns from subquery:
ERROR 1241 (ER_OPERAND_COL) SQLSTATE = 21000 Message = "Operand should contain 1 column(s)"
This error occurs in cases like this:
SELECT (SELECT column1, column2 FROM t2) FROM t1;
You may use a subquery that returns multiple columns, if the purpose is row comparison. In other contexts, the subquery must be a scalar operand. See Section 13.2.11.5, “Row Subqueries”.
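For example, the following comparison is permitted because the subquery result is used as a row operand rather than a scalar (a minimal sketch, assuming that t1 and t2 each have columns named column1 and column2):

SELECT * FROM t1
  WHERE (column1, column2) IN (SELECT column1, column2 FROM t2);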
Incorrect number of rows from subquery:
ERROR 1242 (ER_SUBSELECT_NO_1_ROW) SQLSTATE = 21000 Message = "Subquery returns more than 1 row"
This error occurs for statements where the subquery must return at most one row but returns multiple rows. Consider the following example:
SELECT * FROM t1 WHERE column1 = (SELECT column1 FROM t2);
If SELECT column1 FROM t2 returns just
one row, the previous query will work. If the subquery
returns more than one row, error 1242 will occur. In that
case, the query should be rewritten as:
SELECT * FROM t1 WHERE column1 = ANY (SELECT column1 FROM t2);
Incorrectly used table in subquery:
Error 1093 (ER_UPDATE_TABLE_USED) SQLSTATE = HY000 Message = "You can't specify target table 'x' for update in FROM clause"
This error occurs in cases such as the following, which attempts to modify a table and select from the same table in the subquery:
UPDATE t1 SET column2 = (SELECT MAX(column1) FROM t1);
You can use a common table expression or derived table to work around this. See Section 13.2.11.12, “Restrictions on Subqueries”.
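For example, one possible workaround (a sketch, reusing the t1 table from the preceding statement) is to wrap the offending subquery in a derived table so that it is materialized before the update takes place:

UPDATE t1
  SET column2 = (SELECT max_c1
                 FROM (SELECT MAX(column1) AS max_c1 FROM t1) AS dt);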
In MySQL 8.0.19 and later, all of the errors described in this
section also apply when using
TABLE in subqueries.
For transactional storage engines, the failure of a subquery causes the entire statement to fail. For nontransactional storage engines, data modifications made before the error was encountered are preserved.
Development is ongoing, so no optimization tip is reliable for the long term. The following list provides some interesting tricks that you might want to play with. See also Section 8.2.2, “Optimizing Subqueries, Derived Tables, View References, and Common Table Expressions”.
Move clauses from outside to inside the subquery. For example, use this query:
SELECT * FROM t1 WHERE s1 IN (SELECT s1 FROM t1 UNION ALL SELECT s1 FROM t2);
Instead of this query:
SELECT * FROM t1 WHERE s1 IN (SELECT s1 FROM t1) OR s1 IN (SELECT s1 FROM t2);
For another example, use this query:
SELECT (SELECT column1 + 5 FROM t1) FROM t2;
Instead of this query:
SELECT (SELECT column1 FROM t1) + 5 FROM t2;
In general, you cannot modify a table and select from the same table in a subquery. For example, this limitation applies to statements of the following forms:
DELETE FROM t WHERE ... (SELECT ... FROM t ...);
UPDATE t ... WHERE col = (SELECT ... FROM t ...);
{INSERT|REPLACE} INTO t (SELECT ... FROM t ...);

Exception: The preceding prohibition does not apply if for the modified table you are using a derived table and that derived table is materialized rather than merged into the outer query. (See Section 8.2.2.4, “Optimizing Derived Tables, View References, and Common Table Expressions with Merging or Materialization”.) Example:
UPDATE t ... WHERE col = (SELECT * FROM (SELECT ... FROM t...) AS dt ...);
Here the result from the derived table is materialized as a
temporary table, so the relevant rows in
t have already been selected by the time
the update to t takes place.
In general, you may be able to influence the optimizer to
materialize a derived table by adding a
NO_MERGE optimizer hint.
See Section 8.9.3, “Optimizer Hints”.
Row comparison operations are only partially supported:
For expr [NOT] IN subquery,
expr can be an n-tuple (specified using row
constructor syntax) and the subquery can return rows of
n-tuples. The permitted syntax is therefore more
specifically expressed as row_constructor [NOT] IN
table_subquery.
For expr op {ALL|ANY|SOME} subquery,
expr must be a scalar value and the subquery must be a
column subquery; it cannot return multiple-column rows.
In other words, for a subquery that returns rows of
n-tuples, this is supported:
(expr_1, ..., expr_n) [NOT] IN table_subquery
But this is not supported:
(expr_1, ..., expr_n) op {ALL|ANY|SOME} subquery
The reason for supporting row comparisons for
IN but not for the others is that
IN is implemented by rewriting it as a
sequence of =
comparisons and AND operations.
This approach cannot be used for ALL,
ANY, or SOME.
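Informally, and ignoring NULL-handling subtleties, a row comparison such as the first of the following queries behaves like the second, which spells out the sequence of = comparisons combined with AND (a sketch, assuming t1 has columns a and b and t2 has columns x and y):

SELECT * FROM t1 WHERE (a, b) IN (SELECT x, y FROM t2);

SELECT * FROM t1
  WHERE EXISTS (SELECT 1 FROM t2 WHERE t2.x = t1.a AND t2.y = t1.b);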
Prior to MySQL 8.0.14, subqueries in the
FROM clause cannot be correlated
subqueries. They are materialized in whole (evaluated to
produce a result set) during query execution, so they cannot
be evaluated per row of the outer query. The optimizer
delays materialization until the result is needed, which may
permit materialization to be avoided. See
Section 8.2.2.4, “Optimizing Derived Tables, View References, and Common Table Expressions
with Merging or Materialization”.
MySQL does not support LIMIT in
subqueries for certain subquery operators:
mysql> SELECT * FROM t1
    ->   WHERE s1 IN (SELECT s2 FROM t2 ORDER BY s1 LIMIT 1);
ERROR 1235 (42000): This version of MySQL doesn't yet support 'LIMIT & IN/ALL/ANY/SOME subquery'
MySQL permits a subquery to refer to a stored function that
has data-modifying side effects such as inserting rows into
a table. For example, if f() inserts
rows, the following query can modify data:
SELECT ... WHERE x IN (SELECT f() ...);
This behavior is an extension to the SQL standard. In MySQL,
it can produce nondeterministic results because
f() might be executed a different number
of times for different executions of a given query depending
on how the optimizer chooses to handle it.
For statement-based or mixed-format replication, one implication of this indeterminism is that such a query can produce different results on the master and its slaves.
TABLE is a DML statement introduced
in MySQL 8.0.19 which returns rows and columns of the named table.
TABLE table_name [ORDER BY column_name] [LIMIT number [OFFSET number]]
The TABLE statement in some ways
acts like SELECT. Given the
existence of a table named t, the following two
statements produce identical output:
TABLE t; SELECT * FROM t;
You can order and limit the number of rows produced by
TABLE using ORDER
BY and LIMIT clauses, respectively.
These function identically to the same clauses when used with
SELECT (including an optional
OFFSET clause with LIMIT),
as you can see here:
mysql>TABLE t;+----+----+ | a | b | +----+----+ | 1 | 2 | | 6 | 7 | | 9 | 5 | | 10 | -4 | | 11 | -1 | | 13 | 3 | | 14 | 6 | +----+----+ 7 rows in set (0.00 sec) mysql>TABLE t ORDER BY b;+----+----+ | a | b | +----+----+ | 10 | -4 | | 11 | -1 | | 1 | 2 | | 13 | 3 | | 9 | 5 | | 14 | 6 | | 6 | 7 | +----+----+ 7 rows in set (0.00 sec) mysql>TABLE t LIMIT 3;+---+---+ | a | b | +---+---+ | 1 | 2 | | 6 | 7 | | 9 | 5 | +---+---+ 3 rows in set (0.00 sec) mysql>TABLE t ORDER BY b LIMIT 3;+----+----+ | a | b | +----+----+ | 10 | -4 | | 11 | -1 | | 1 | 2 | +----+----+ 3 rows in set (0.00 sec) mysql>TABLE t ORDER BY b LIMIT 3 OFFSET 2;+----+----+ | a | b | +----+----+ | 1 | 2 | | 13 | 3 | | 9 | 5 | +----+----+ 3 rows in set (0.00 sec)
TABLE differs from
SELECT in two key respects: it always displays all columns of
the table, and it does not permit arbitrary filtering of rows
(that is, it supports no WHERE clause). To limit which table
columns are returned, to filter rows beyond what can be
accomplished using ORDER BY and
LIMIT, or both, use
SELECT.
TABLE can be used with temporary
tables.
TABLE can also be used in place of
SELECT in a number of other
constructs, including those listed here:
With UNION, as shown here:
mysql> TABLE t1;
+---+----+
| a | b  |
+---+----+
| 2 | 10 |
| 5 |  3 |
| 7 |  8 |
+---+----+
3 rows in set (0.00 sec)

mysql> TABLE t2;
+---+---+
| a | b |
+---+---+
| 1 | 2 |
| 3 | 4 |
| 6 | 7 |
+---+---+
3 rows in set (0.00 sec)

mysql> TABLE t1 UNION TABLE t2;
+---+----+
| a | b  |
+---+----+
| 2 | 10 |
| 5 |  3 |
| 7 |  8 |
| 1 |  2 |
| 3 |  4 |
| 6 |  7 |
+---+----+
6 rows in set (0.00 sec)
The UNION just shown is
equivalent to the following statement:
mysql> SELECT * FROM t1 UNION SELECT * FROM t2;
+---+----+
| a | b |
+---+----+
| 2 | 10 |
| 5 | 3 |
| 7 | 8 |
| 1 | 2 |
| 3 | 4 |
| 6 | 7 |
+---+----+
6 rows in set (0.00 sec)
TABLE can also be used together in unions
with SELECT statements,
VALUES statements, or both. See
Section 13.2.10.3, “UNION Clause”.
With INTO to populate user variables, and
with INTO OUTFILE or INTO
DUMPFILE to write table data to a file. See
Section 13.2.10.1, “SELECT ... INTO Statement”, for more specific information
and examples.
In many cases where you can employ subqueries. Given any table
t1 with a column named
a, and a second table t2
having a single column, statements such as the following are
possible:
SELECT * FROM t1 WHERE a IN (TABLE t2);
Assuming that the single column of table t2
is named x, the preceding is equivalent to
each of the statements shown here (and produces exactly the
same result in either case):
SELECT * FROM t1 WHERE a IN (SELECT x FROM t2);

SELECT * FROM t1 WHERE a IN (SELECT * FROM t2);
See Section 13.2.11, “Subqueries”, for more information.
With INSERT and
REPLACE statements, where you
would otherwise use
SELECT *.
See Section 13.2.6.1, “INSERT ... SELECT Statement”, for more information and
examples.
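For example, assuming a hypothetical table t2 with the same number and order of columns as t1, the following two statements are interchangeable (a sketch):

INSERT INTO t2 TABLE t1;
INSERT INTO t2 SELECT * FROM t1;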
TABLE can also be used in many
cases in place of the SELECT in
CREATE
TABLE ... SELECT or
CREATE VIEW ...
SELECT. See the descriptions of these statements for
more information and examples.
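For example, assuming a hypothetical existing table t1, each of the following statements is a sketch of such usage:

CREATE TABLE t2 TABLE t1;
CREATE VIEW v1 AS TABLE t1;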
UPDATE is a DML statement that
modifies rows in a table.
An UPDATE statement can start with
a WITH clause to define common
table expressions accessible within the
UPDATE. See Section 13.2.15, “WITH (Common Table Expressions)”.
Single-table syntax:
UPDATE [LOW_PRIORITY] [IGNORE] table_reference
    SET assignment_list
    [WHERE where_condition]
    [ORDER BY ...]
    [LIMIT row_count]

value:
    {expr | DEFAULT}

assignment:
    col_name = value

assignment_list:
    assignment [, assignment] ...
Multiple-table syntax:
UPDATE [LOW_PRIORITY] [IGNORE] table_references
    SET assignment_list
    [WHERE where_condition]
For the single-table syntax, the
UPDATE statement updates columns of
existing rows in the named table with new values. The
SET clause indicates which columns to modify
and the values they should be given. Each value can be given as an
expression, or the keyword DEFAULT to set a
column explicitly to its default value. The
WHERE clause, if given, specifies the
conditions that identify which rows to update. With no
WHERE clause, all rows are updated. If the
ORDER BY clause is specified, the rows are
updated in the order that is specified. The
LIMIT clause places a limit on the number of
rows that can be updated.
For the multiple-table syntax,
UPDATE updates rows in each table
named in table_references that satisfy
the conditions. Each matching row is updated once, even if it
matches the conditions multiple times. For multiple-table syntax,
ORDER BY and LIMIT cannot be
used.
For partitioned tables, both the single-table and multiple-table
forms of this statement support the use of a
PARTITION option as part of a table reference.
This option takes a list of one or more partitions or
subpartitions (or both). Only the partitions (or subpartitions)
listed are checked for matches, and a row that is not in any of
these partitions or subpartitions is not updated, whether it
satisfies the where_condition or not.
Unlike the case when using PARTITION with an
INSERT or
REPLACE statement, an otherwise
valid UPDATE ... PARTITION statement is
considered successful even if no rows in the listed partitions
(or subpartitions) match the
where_condition.
For more information and examples, see Section 23.5, “Partition Selection”.
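As an illustration (a sketch, assuming a hypothetical table employees that is partitioned into partitions named p0 and p1), the following statement checks only rows stored in partition p0 for matches:

UPDATE employees PARTITION (p0)
  SET store_id = 2
  WHERE fname = 'Jill';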
where_condition is an expression that
evaluates to true for each row to be updated. For expression
syntax, see Section 9.5, “Expressions”.
table_references and
where_condition are specified as
described in Section 13.2.10, “SELECT Statement”.
You need the UPDATE privilege only
for columns referenced in an UPDATE
that are actually updated. You need only the
SELECT privilege for any columns
that are read but not modified.
The UPDATE statement supports the
following modifiers:
With the LOW_PRIORITY modifier, execution
of the UPDATE is delayed until
no other clients are reading from the table. This affects only
storage engines that use only table-level locking (such as
MyISAM, MEMORY, and
MERGE).
With the IGNORE modifier, the update
statement does not abort even if errors occur during the
update. Rows for which duplicate-key conflicts occur on a
unique key value are not updated. Rows updated to values that
would cause data conversion errors are updated to the closest
valid values instead. For more information, see
Comparison of the IGNORE Keyword and Strict SQL Mode.
UPDATE IGNORE
statements, including those having an ORDER BY
clause, are flagged as unsafe for statement-based replication.
(This is because the order in which the rows are updated
determines which rows are ignored.) Such statements produce a
warning in the error log when using statement-based mode and are
written to the binary log using the row-based format when using
MIXED mode. (Bug #11758262, Bug #50439) See
Section 17.2.1.3, “Determination of Safe and Unsafe Statements in Binary Logging”, for more
information.
If you access a column from the table to be updated in an
expression, UPDATE uses the current
value of the column. For example, the following statement sets
col1 to one more than its current value:
UPDATE t1 SET col1 = col1 + 1;
The second assignment in the following statement sets
col2 to the current (updated)
col1 value, not the original
col1 value. The result is that
col1 and col2 have the same
value. This behavior differs from standard SQL.
UPDATE t1 SET col1 = col1 + 1, col2 = col1;
Single-table UPDATE assignments are
generally evaluated from left to right. For multiple-table
updates, there is no guarantee that assignments are carried out in
any particular order.
If you set a column to the value it currently has, MySQL notices this and does not update it.
If you update a column that has been declared NOT
NULL by setting to NULL, an error
occurs if strict SQL mode is enabled; otherwise, the column is set
to the implicit default value for the column data type and the
warning count is incremented. The implicit default value is
0 for numeric types, the empty string
('') for string types, and the
“zero” value for date and time types. See
Section 11.6, “Data Type Default Values”.
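As an illustration (a sketch, assuming a hypothetical table t with a column c declared as INT NOT NULL):

UPDATE t SET c = NULL;
-- Strict SQL mode: the statement fails with an error such as
--   ERROR 1048 (23000): Column 'c' cannot be null
-- Nonstrict mode: c is set to 0, the implicit default for INT, and a warning is issued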
If a generated column is updated explicitly, the only permitted
value is DEFAULT. For information about
generated columns, see
Section 13.1.20.8, “CREATE TABLE and Generated Columns”.
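As an illustration (a sketch, assuming a hypothetical table t with a generated column gc):

UPDATE t SET gc = DEFAULT;   -- permitted
-- UPDATE t SET gc = 10;     -- rejected; only DEFAULT may be assigned to a generated column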
UPDATE returns the number of rows
that were actually changed. The
mysql_info() C API function
returns the number of rows that were matched and updated and the
number of warnings that occurred during the
UPDATE.
You can use LIMIT row_count to restrict the
scope of the UPDATE. A
LIMIT clause is a rows-matched restriction. The
statement stops as soon as it has found
row_count rows that satisfy the
WHERE clause, whether or not they actually were
changed.
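For example, the following statement (a sketch, assuming a hypothetical table t with id and status columns) examines at most ten rows that satisfy the WHERE clause, whether or not each of those rows is actually changed:

UPDATE t SET status = 'processed'
  WHERE status = 'pending'
  ORDER BY id
  LIMIT 10;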
If an UPDATE statement includes an
ORDER BY clause, the rows are updated in the
order specified by the clause. This can be useful in certain
situations that might otherwise result in an error. Suppose that a
table t contains a column id
that has a unique index. The following statement could fail with a
duplicate-key error, depending on the order in which rows are
updated:
UPDATE t SET id = id + 1;
For example, if the table contains 1 and 2 in the
id column and 1 is updated to 2 before 2 is
updated to 3, an error occurs. To avoid this problem, add an
ORDER BY clause to cause the rows with larger
id values to be updated before those with
smaller values:
UPDATE t SET id = id + 1 ORDER BY id DESC;
You can also perform UPDATE
operations covering multiple tables. However, you cannot use
ORDER BY or LIMIT with a
multiple-table UPDATE. The
table_references clause lists the
tables involved in the join. Its syntax is described in
Section 13.2.10.2, “JOIN Clause”. Here is an example:
UPDATE items,month SET items.price=month.price WHERE items.id=month.id;
The preceding example shows an inner join that uses the comma
operator, but multiple-table UPDATE
statements can use any type of join permitted in
SELECT statements, such as
LEFT JOIN.
If you use a multiple-table UPDATE
statement involving InnoDB tables for which
there are foreign key constraints, the MySQL optimizer might
process tables in an order that differs from that of their
parent/child relationship. In this case, the statement fails and
rolls back. Instead, update a single table and rely on the
ON UPDATE capabilities that
InnoDB provides to cause the other tables to be
modified accordingly. See
Section 13.1.20.5, “FOREIGN KEY Constraints”.
You cannot update a table and select directly from the same table
in a subquery. You can work around this by using a multi-table
update in which one of the tables is derived from the table that
you actually wish to update, and referring to the derived table
using an alias. Suppose you wish to update a table named
items which is defined using the statement
shown here:
CREATE TABLE items (
id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
wholesale DECIMAL(6,2) NOT NULL DEFAULT 0.00,
retail DECIMAL(6,2) NOT NULL DEFAULT 0.00,
quantity BIGINT NOT NULL DEFAULT 0
);
To reduce the retail price of any items for which the markup is
30% or greater and of which you have fewer than one hundred in
stock, you might try to use an UPDATE statement
such as the one following, which uses a subquery in the
WHERE clause. As shown here, this statement
does not work:
mysql> UPDATE items
    -> SET retail = retail * 0.9
    -> WHERE id IN
    ->     (SELECT id FROM items
    ->         WHERE retail / wholesale >= 1.3 AND quantity < 100);
ERROR 1093 (HY000): You can't specify target table 'items' for update in FROM clause
Instead, you can employ a multi-table update in which the subquery
is moved into the list of tables to be updated, using an alias to
reference it in the outermost WHERE clause,
like this:
UPDATE items,
(SELECT id FROM items
WHERE id IN
(SELECT id FROM items
WHERE retail / wholesale >= 1.3 AND quantity < 100))
AS discounted
SET items.retail = items.retail * 0.9
WHERE items.id = discounted.id;
Because the optimizer tries by default to merge the derived table
discounted into the outermost query block, this
works only if you force materialization of the derived table. You
can do this by setting the derived_merge flag
of the optimizer_switch system
variable to off before running the update, or
by using the NO_MERGE optimizer
hint, as shown here:
UPDATE /*+ NO_MERGE(discounted) */ items,
(SELECT id FROM items
WHERE retail / wholesale >= 1.3 AND quantity < 100)
AS discounted
SET items.retail = items.retail * 0.9
WHERE items.id = discounted.id;
The advantage of using the optimizer hint in such a case is that
it applies only within the query block where it is used, so that
it is not necessary to change the value of
optimizer_switch again after executing the
UPDATE.
Another possibility is to rewrite the subquery so that it does not
use IN or EXISTS, like this:
UPDATE items,
(SELECT id, retail / wholesale AS markup, quantity FROM items)
AS discounted
SET items.retail = items.retail * 0.9
WHERE discounted.markup >= 1.3
AND discounted.quantity < 100
AND items.id = discounted.id;

In this case, the subquery is materialized by default rather than merged, so it is not necessary to disable merging of the derived table.
VALUES is a DML statement
introduced in MySQL 8.0.19 which returns a set of one or more rows
as a table. In other words, it is a table value constructor which
also functions as a standalone SQL statement.
VALUES row_constructor_list [ORDER BY column_designator] [LIMIT BY number]

row_constructor_list:
    ROW(value_list) [, ROW(value_list)] [, ...]

value_list:
    value [, value] [, ...]

column_designator:
    column_index
The VALUES statement consists of
the VALUES keyword followed by a list of one or
more row constructors, separated by commas. A row constructor
consists of the ROW() row constructor clause
with a value list of one or more scalar values enclosed in the
parentheses. A value can be a literal of any MySQL data type or an
expression that resolves to a scalar value.
ROW() cannot be empty (but each of the supplied
scalar values can be NULL). Each
ROW() in the same
VALUES statement must have the same
number of values in its value list.
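For example, each ROW() here supplies two values, and NULL is permitted for any of them:

VALUES ROW(1, 'a'), ROW(2, NULL), ROW(NULL, 'c');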
The DEFAULT keyword is not supported by
VALUES and causes a syntax error, except when
it is used to supply values in an
INSERT statement.
The output of VALUES is a table:
mysql> VALUES ROW(1,-2,3), ROW(5,7,9), ROW(4,6,8);
+----------+----------+----------+
| column_0 | column_1 | column_2 |
+----------+----------+----------+
| 1 | -2 | 3 |
| 5 | 7 | 9 |
| 4 | 6 | 8 |
+----------+----------+----------+
3 rows in set (0.00 sec)
The columns of the table output from
VALUES have the implicitly named
columns column_0, column_1,
column_2, and so on, always beginning with
0. This fact can be used to order the rows by
column using an optional ORDER BY clause in the
same way that this clause works with a
SELECT statement, as shown here:
mysql> VALUES ROW(1,-2,3), ROW(5,7,9), ROW(4,6,8) ORDER BY column_1;
+----------+----------+----------+
| column_0 | column_1 | column_2 |
+----------+----------+----------+
| 1 | -2 | 3 |
| 4 | 6 | 8 |
| 5 | 7 | 9 |
+----------+----------+----------+
3 rows in set (0.00 sec)
The VALUES statement also supports
a LIMIT clause for limiting the number of rows
in the output.
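For example, this statement returns only the first two constructed rows:

VALUES ROW(1), ROW(2), ROW(3) LIMIT 2;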
The VALUES statement is permissive regarding
data types of column values; you can mix types within the same
column, as shown here:
mysql> VALUES ROW("q", 42, '2019-12-18'),
    ->     ROW(23, "abc", 98.6),
    ->     ROW(27.0002, "Mary Smith", '{"a": 10, "b": 25}');
+----------+------------+--------------------+
| column_0 | column_1   | column_2           |
+----------+------------+--------------------+
| q        | 42         | 2019-12-18         |
| 23       | abc        | 98.6               |
| 27.0002  | Mary Smith | {"a": 10, "b": 25} |
+----------+------------+--------------------+
3 rows in set (0.00 sec)
VALUES with one or more instances of
ROW() acts as a table value constructor;
although it can be used to supply values in an
INSERT or
REPLACE statement, do not confuse
it with the VALUES keyword that is also used
for this purpose. You should also not confuse it with the
VALUES() function that refers to
column values in
INSERT ...
ON DUPLICATE KEY UPDATE.
You should also bear in mind that ROW() is a
row value constructor (see Section 13.2.11.5, “Row Subqueries”),
whereas VALUES ROW() is a table value
constructor; the two cannot be used interchangeably.
VALUES can be used in many cases
where you could employ SELECT,
including those listed here:
With UNION, as shown here:
mysql> SELECT 1,2 UNION SELECT 10,15;
+----+----+
| 1  | 2  |
+----+----+
|  1 |  2 |
| 10 | 15 |
+----+----+
2 rows in set (0.00 sec)

mysql> VALUES ROW(1,2) UNION VALUES ROW(10,15);
+----------+----------+
| column_0 | column_1 |
+----------+----------+
|        1 |        2 |
|       10 |       15 |
+----------+----------+
2 rows in set (0.00 sec)
It is also possible in this fashion to union together constructed tables having more than one row, like this:
mysql> VALUES ROW(1,2), ROW(3,4), ROW(5,6)
    -> UNION VALUES ROW(10,15),ROW(20,25);
+----------+----------+
| column_0 | column_1 |
+----------+----------+
|        1 |        2 |
|        3 |        4 |
|        5 |        6 |
|       10 |       15 |
|       20 |       25 |
+----------+----------+
5 rows in set (0.00 sec)
You can also (and it is usually preferable to) omit
UNION altogether in such cases
and use a single VALUES statement, like
this:
mysql> VALUES ROW(1,2), ROW(3,4), ROW(5,6), ROW(10,15), ROW(20,25);
+----------+----------+
| column_0 | column_1 |
+----------+----------+
| 1 | 2 |
| 3 | 4 |
| 5 | 6 |
| 10 | 15 |
| 20 | 25 |
+----------+----------+
VALUES can also be used in unions with
SELECT statements,
TABLE statements, or both.
The constructed tables in the
UNION must contain the same
number of columns, just as if you were using
SELECT. See
Section 13.2.10.3, “UNION Clause”, for further examples.
In joins. See Section 13.2.10.2, “JOIN Clause”, for more information and examples.
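For example (a sketch, assuming a hypothetical table t1 with an integer id column), a constructed table can be joined to a base table using its implicit column names:

SELECT t1.*, v.column_1
  FROM t1
  JOIN (VALUES ROW(1, 'low'), ROW(2, 'high')) AS v
    ON t1.id = v.column_0;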
In place of VALUES() in an
INSERT or
REPLACE statement, in which
case its semantics differ slightly from what is described
here. See Section 13.2.6, “INSERT Statement”, for details.
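For example, as of MySQL 8.0.19 the row-constructor form can supply the rows for an INSERT (a sketch, assuming a hypothetical two-column table t1):

INSERT INTO t1 VALUES ROW(1, 2), ROW(3, 4);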
In place of the source table in
CREATE
TABLE ... SELECT and
CREATE VIEW ...
SELECT. See the descriptions of these statements for
more information and examples.
A common table expression (CTE) is a named temporary result set that exists within the scope of a single statement and that can be referred to later within that statement, possibly multiple times. The following discussion describes how to write statements that use CTEs.
For information about CTE optimization, see Section 8.2.2.4, “Optimizing Derived Tables, View References, and Common Table Expressions with Merging or Materialization”.
These articles contain additional information about using CTEs in MySQL, including many examples:
To specify common table expressions, use a
WITH clause that has one or more
comma-separated subclauses. Each subclause provides a subquery
that produces a result set, and associates a name with the
subquery. The following example defines CTEs named
cte1 and cte2 in the
WITH clause, and refers to them
in the top-level SELECT that
follows the WITH clause:
WITH
  cte1 AS (SELECT a, b FROM table1),
  cte2 AS (SELECT c, d FROM table2)
SELECT b, d FROM cte1 JOIN cte2
WHERE cte1.a = cte2.c;
In the statement containing the
WITH clause, each CTE name can be
referenced to access the corresponding CTE result set.
A CTE name can be referenced in other CTEs, enabling CTEs to be defined based on other CTEs.
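For example, cte2 here is defined in terms of cte1:

WITH
  cte1 AS (SELECT 1 AS n),
  cte2 AS (SELECT n + 10 AS m FROM cte1)
SELECT * FROM cte2;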
A CTE can refer to itself to define a recursive CTE. Common applications of recursive CTEs include series generation and traversal of hierarchical or tree-structured data.
Common table expressions are an optional part of the syntax for
DML statements. They are defined using a
WITH clause:
with_clause:
    WITH [RECURSIVE]
        cte_name [(col_name [, col_name] ...)] AS (subquery)
        [, cte_name [(col_name [, col_name] ...)] AS (subquery)] ...
cte_name names a single common table
expression and can be used as a table reference in the statement
containing the WITH clause.
The subquery part of AS
(subquery) is called the
“subquery of the CTE” and is what produces the CTE
result set. The parentheses following AS are
required.
A common table expression is recursive if its subquery refers to
its own name. The RECURSIVE keyword must be
included if any CTE in the WITH
clause is recursive. For more information, see
Recursive Common Table Expressions.
Determination of column names for a given CTE occurs as follows:
If a parenthesized list of names follows the CTE name, those names are the column names:
WITH cte (col1, col2) AS ( SELECT 1, 2 UNION ALL SELECT 3, 4 ) SELECT col1, col2 FROM cte;
The number of names in the list must be the same as the number of columns in the result set.
Otherwise, the column names come from the select list of the
first SELECT within the
AS (subquery) part:
WITH cte AS ( SELECT 1 AS col1, 2 AS col2 UNION ALL SELECT 3, 4 ) SELECT col1, col2 FROM cte;
A WITH clause is permitted in
these contexts:
At the beginning of SELECT,
UPDATE, and
DELETE statements.
WITH ... SELECT ...
WITH ... UPDATE ...
WITH ... DELETE ...
At the beginning of subqueries (including derived table subqueries):
SELECT ... WHERE id IN (WITH ... SELECT ...) ...

SELECT * FROM (WITH ... SELECT ...) AS dt ...
Immediately preceding SELECT
for statements that include a
SELECT statement:
INSERT ... WITH ... SELECT ...
REPLACE ... WITH ... SELECT ...
CREATE TABLE ... WITH ... SELECT ...
CREATE VIEW ... WITH ... SELECT ...
DECLARE CURSOR ... WITH ... SELECT ...
EXPLAIN ... WITH ... SELECT ...
Only one WITH clause is permitted
at the same level. WITH followed
by WITH at the same level is not
permitted, so this is illegal:
WITH cte1 AS (...) WITH cte2 AS (...) SELECT ...
To make the statement legal, use a single
WITH clause that separates the
subclauses by a comma:
WITH cte1 AS (...), cte2 AS (...) SELECT ...
However, a statement can contain multiple
WITH clauses if they occur at
different levels:
WITH cte1 AS (SELECT 1) SELECT * FROM (WITH cte2 AS (SELECT 2) SELECT * FROM cte2 JOIN cte1) AS dt;
A WITH clause can define one or
more common table expressions, but each CTE name must be unique
to the clause. This is illegal:
WITH cte1 AS (...), cte1 AS (...) SELECT ...
To make the statement legal, define the CTEs with unique names:
WITH cte1 AS (...), cte2 AS (...) SELECT ...
A CTE can refer to itself or to other CTEs:
A self-referencing CTE is recursive.
A CTE can refer to CTEs defined earlier in the same
WITH clause, but not those
defined later.
This constraint rules out mutually-recursive CTEs, where
cte1 references cte2
and cte2 references
cte1. One of those references must be to
a CTE defined later, which is not permitted.
A CTE in a given query block can refer to CTEs defined in query blocks at a more outer level, but not CTEs defined in query blocks at a more inner level.
For resolving references to objects with the same names, derived
tables hide CTEs; and CTEs hide base tables,
TEMPORARY tables, and views. Name resolution
occurs by searching for objects in the same query block, then
proceeding to outer blocks in turn while no object with the name
is found.
Like derived tables, a CTE cannot contain outer references prior to MySQL 8.0.14. This is a MySQL restriction that is lifted in MySQL 8.0.14, not a restriction of the SQL standard. For additional syntax considerations specific to recursive CTEs, see Recursive Common Table Expressions.
A recursive common table expression is one having a subquery that refers to its own name. For example:
WITH RECURSIVE cte (n) AS
(
  SELECT 1
  UNION ALL
  SELECT n + 1 FROM cte WHERE n < 5
)
SELECT * FROM cte;
When executed, the statement produces this result, a single column containing a simple linear sequence:
+------+
| n    |
+------+
|    1 |
|    2 |
|    3 |
|    4 |
|    5 |
+------+
A recursive CTE has this structure:
The WITH clause must begin with
WITH RECURSIVE if any CTE in the
WITH clause refers to itself. (If no CTE
refers to itself, RECURSIVE is permitted
but not required.)
If you forget RECURSIVE for a recursive
CTE, this error is a likely result:
ERROR 1146 (42S02): Table 'cte_name' doesn't exist
The recursive CTE subquery has two parts, separated by
UNION [ALL]
or UNION
DISTINCT:
SELECT ...      -- return initial row set
UNION ALL
SELECT ...      -- return additional row sets
The first SELECT produces the
initial row or rows for the CTE and does not refer to the
CTE name. The second SELECT
produces additional rows and recurses by referring to the
CTE name in its FROM clause. Recursion
ends when this part produces no new rows. Thus, a recursive
CTE consists of a nonrecursive
SELECT part followed by a
recursive SELECT part.
Each SELECT part can itself
be a union of multiple SELECT
statements.
The types of the CTE result columns are inferred from the
column types of the nonrecursive
SELECT part only, and the
columns are all nullable. For type determination, the
recursive SELECT part is
ignored.
If the nonrecursive and recursive parts are separated by
UNION
DISTINCT, duplicate rows are eliminated. This is
useful for queries that perform transitive closures, to
avoid infinite loops.
Each iteration of the recursive part operates only on the rows produced by the previous iteration. If the recursive part has multiple query blocks, iterations of each query block are scheduled in unspecified order, and each query block operates on rows that have been produced either by its previous iteration or by other query blocks since that previous iteration's end.
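As an illustration of the UNION DISTINCT point above, the following sketch computes a transitive closure over a hypothetical edges (from_id, to_id) table that may contain cycles; because duplicate rows are eliminated, a cycle cannot cause the recursion to loop forever:

WITH RECURSIVE reachable (id) AS
(
  SELECT to_id FROM edges WHERE from_id = 1
  UNION DISTINCT
  SELECT e.to_id FROM edges AS e JOIN reachable AS r ON e.from_id = r.id
)
SELECT * FROM reachable;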
The recursive CTE subquery shown earlier has this nonrecursive part that retrieves a single row to produce the initial row set:
SELECT 1
The CTE subquery also has this recursive part:
SELECT n + 1 FROM cte WHERE n < 5
At each iteration, that SELECT
produces a row with a new value one greater than the value of
n from the previous row set. The first
iteration operates on the initial row set (1)
and produces 1+1=2; the second iteration
operates on the first iteration's row set (2)
and produces 2+1=3; and so forth. This
continues until recursion ends, which occurs when
n is no longer less than 5.
If the recursive part of a CTE produces wider values for a column than the nonrecursive part, it may be necessary to widen the column in the nonrecursive part to avoid data truncation. Consider this statement:
WITH RECURSIVE cte AS
(
  SELECT 1 AS n, 'abc' AS str
  UNION ALL
  SELECT n + 1, CONCAT(str, str) FROM cte WHERE n < 3
)
SELECT * FROM cte;
In nonstrict SQL mode, the statement produces this output:
+------+------+
| n    | str  |
+------+------+
|    1 | abc  |
|    2 | abc  |
|    3 | abc  |
+------+------+
The str column values are all
'abc' because the nonrecursive
SELECT determines the column
widths. Consequently, the wider str values
produced by the recursive SELECT
are truncated.
In strict SQL mode, the statement produces an error:
ERROR 1406 (22001): Data too long for column 'str' at row 1
To address this issue, so that the statement does not produce
truncation or errors, use CAST()
in the nonrecursive SELECT to
make the str column wider:
WITH RECURSIVE cte AS
(
SELECT 1 AS n, CAST('abc' AS CHAR(20)) AS str
UNION ALL
SELECT n + 1, CONCAT(str, str) FROM cte WHERE n < 3
)
SELECT * FROM cte;

Now the statement produces this result, without truncation:
+------+--------------+
| n    | str          |
+------+--------------+
|    1 | abc          |
|    2 | abcabc       |
|    3 | abcabcabcabc |
+------+--------------+
Columns are accessed by name, not position, which means that columns in the recursive part can access columns in the nonrecursive part that have a different position, as this CTE illustrates:
WITH RECURSIVE cte AS ( SELECT 1 AS n, 1 AS p, -1 AS q UNION ALL SELECT n + 1, q * 2, p * 2 FROM cte WHERE n < 5 ) SELECT * FROM cte;
Because p in one row is derived from
q in the previous row, and vice versa, the
positive and negative values swap positions in each successive
row of the output:
+------+------+------+
| n    | p    | q    |
+------+------+------+
|    1 |    1 |   -1 |
|    2 |   -2 |    2 |
|    3 |    4 |   -4 |
|    4 |   -8 |    8 |
|    5 |   16 |  -16 |
+------+------+------+
Some syntax constraints apply within recursive CTE subqueries:
The recursive SELECT part
must not contain these constructs:
Aggregate functions such as SUM()
Window functions
GROUP BY
ORDER BY
DISTINCT
Prior to MySQL 8.0.19, the recursive
SELECT part of a recursive
CTE also could not use a LIMIT clause.
This restriction is lifted in MySQL 8.0.19, and
LIMIT is now supported in such cases,
along with an optional OFFSET clause. The
effect on the result set is the same as when using
LIMIT in the outermost
SELECT, but is also more efficient, since
using it with the recursive SELECT stops
the generation of rows as soon as the requested number of
them has been produced.
These constraints do not apply to the nonrecursive
SELECT part of a recursive
CTE. The prohibition on DISTINCT applies
only to UNION members;
UNION DISTINCT is permitted.
The recursive SELECT part
must reference the CTE only once and only in its
FROM clause, not in any subquery. It can
reference tables other than the CTE and join them with the
CTE. If used in a join like this, the CTE must not be on the
right side of a LEFT JOIN.
These constraints come from the SQL standard, other than the
MySQL-specific exclusions of ORDER BY,
LIMIT (MySQL 8.0.18 and earlier), and
DISTINCT.
For recursive CTEs, EXPLAIN
output rows for recursive SELECT
parts display Recursive in the
Extra column.
Cost estimates displayed by
EXPLAIN represent cost per
iteration, which might differ considerably from total cost. The
optimizer cannot predict the number of iterations because it
cannot predict when the WHERE clause will
become false.
CTE actual cost may also be affected by result set size. A CTE that produces many rows may require an internal temporary table large enough to be converted from in-memory to on-disk format and may suffer a performance penalty. If so, increasing the permitted in-memory temporary table size may improve performance; see Section 8.4.4, “Internal Temporary Table Use in MySQL”.
It is important for recursive CTEs that the recursive
SELECT part include a condition
to terminate recursion. As a development technique to guard
against a runaway recursive CTE, you can force termination by
placing a limit on execution time:
The cte_max_recursion_depth
system variable enforces a limit on the number of recursion
levels for CTEs. The server terminates execution of any CTE
that recurses more levels than the value of this variable.
The max_execution_time
system variable enforces an execution timeout for
SELECT statements executed
within the current session.
The MAX_EXECUTION_TIME
optimizer hint enforces a per-query execution timeout for
the SELECT statement in which
it appears.
Suppose that a recursive CTE is mistakenly written with no recursion execution termination condition:
WITH RECURSIVE cte (n) AS ( SELECT 1 UNION ALL SELECT n + 1 FROM cte ) SELECT * FROM cte;
By default,
cte_max_recursion_depth has a
value of 1000, causing the CTE to terminate when it recurses
past 1000 levels. Applications can change the session value to
adjust for their requirements:
SET SESSION cte_max_recursion_depth = 10;      -- permit only shallow recursion
SET SESSION cte_max_recursion_depth = 1000000; -- permit deeper recursion
You can also set the global
cte_max_recursion_depth value
to affect all sessions that begin subsequently.
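For example (assuming you have the privileges required to set global system variables):

SET GLOBAL cte_max_recursion_depth = 10000;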
For queries that execute and thus recurse slowly or in contexts
for which there is reason to set the
cte_max_recursion_depth value
very high, another way to guard against deep recursion is to set
a per-session timeout. To do so, execute a statement like this
prior to executing the CTE statement:
SET max_execution_time = 1000; -- impose one second timeout
Alternatively, include an optimizer hint within the CTE statement itself:
WITH RECURSIVE cte (n) AS
(
  SELECT 1
  UNION ALL
  SELECT n + 1 FROM cte
)
SELECT /*+ SET_VAR(cte_max_recursion_depth = 1M) */ * FROM cte;

WITH RECURSIVE cte (n) AS
(
  SELECT 1
  UNION ALL
  SELECT n + 1 FROM cte
)
SELECT /*+ MAX_EXECUTION_TIME(1000) */ * FROM cte;
Beginning with MySQL 8.0.19, you can also use
LIMIT within the recursive query to impose a
maximum number of rows to be returned to the outermost
SELECT, for example:
WITH RECURSIVE cte (n) AS ( SELECT 1 UNION ALL SELECT n + 1 FROM cte LIMIT 10000 ) SELECT * FROM cte;
You can do this in addition to or instead of setting a time limit. Thus, the following CTE terminates after returning ten thousand rows or running for one thousand seconds, whichever occurs first:
WITH RECURSIVE cte (n) AS ( SELECT 1 UNION ALL SELECT n + 1 FROM cte LIMIT 10000 ) SELECT /*+ MAX_EXECUTION_TIME(1000) */ * FROM cte;
If a recursive query without an execution time limit enters an
infinite loop, you can terminate it from another session using
KILL QUERY.
Within the session itself, the client program used to run the
query might provide a way to kill the query. For example, in
mysql, typing Control+C
interrupts the current statement.
As mentioned previously, recursive common table expressions (CTEs) are frequently used for series generation and traversing hierarchical or tree-structured data. This section shows some simple examples of these techniques.
A Fibonacci series begins with the two numbers 0 and 1 (or 1 and
1) and each number after that is the sum of the previous two
numbers. A recursive common table expression can generate a
Fibonacci series if each row produced by the recursive
SELECT has access to the two
previous numbers from the series. The following CTE generates a
10-number series using 0 and 1 as the first two numbers:
WITH RECURSIVE fibonacci (n, fib_n, next_fib_n) AS
(
SELECT 1, 0, 1
UNION ALL
SELECT n + 1, next_fib_n, fib_n + next_fib_n
FROM fibonacci WHERE n < 10
)
SELECT * FROM fibonacci;

The CTE produces this result:
+------+-------+------------+
| n    | fib_n | next_fib_n |
+------+-------+------------+
|    1 |     0 |          1 |
|    2 |     1 |          1 |
|    3 |     1 |          2 |
|    4 |     2 |          3 |
|    5 |     3 |          5 |
|    6 |     5 |          8 |
|    7 |     8 |         13 |
|    8 |    13 |         21 |
|    9 |    21 |         34 |
|   10 |    34 |         55 |
+------+-------+------------+
How the CTE works:
n is a display column to indicate that
the row contains the n-th Fibonacci
number. For example, the 8th Fibonacci number is 13.
The fib_n column displays Fibonacci
number n.
The next_fib_n column displays the next
Fibonacci number after number n. This
column provides the next series value to the next row, so
that row can produce the sum of the two previous series
values in its fib_n column.
Recursion ends when n reaches 10. This is
an arbitrary choice, to limit the output to a small set of
rows.
The preceding output shows the entire CTE result. To select just
part of it, add an appropriate WHERE clause
to the top-level SELECT. For
example, to select the 8th Fibonacci number, do this:
mysql> WITH RECURSIVE fibonacci ...
    -> ...
    -> SELECT fib_n FROM fibonacci WHERE n = 8;
+-------+
| fib_n |
+-------+
|    13 |
+-------+
A common table expression can generate a series of successive dates, which is useful for generating summaries that include a row for all dates in the series, including dates not represented in the summarized data.
Suppose that a table of sales numbers contains these rows:
mysql> SELECT * FROM sales ORDER BY date, price;
+------------+--------+
| date | price |
+------------+--------+
| 2017-01-03 | 100.00 |
| 2017-01-03 | 200.00 |
| 2017-01-06 | 50.00 |
| 2017-01-08 | 10.00 |
| 2017-01-08 | 20.00 |
| 2017-01-08 | 150.00 |
| 2017-01-10 | 5.00 |
+------------+--------+
This query summarizes the sales per day:
mysql> SELECT date, SUM(price) AS sum_price
    -> FROM sales
    -> GROUP BY date
    -> ORDER BY date;
+------------+-----------+
| date       | sum_price |
+------------+-----------+
| 2017-01-03 |    300.00 |
| 2017-01-06 |     50.00 |
| 2017-01-08 |    180.00 |
| 2017-01-10 |      5.00 |
+------------+-----------+
However, that result contains “holes” for dates not
represented in the range of dates spanned by the table. A result
that represents all dates in the range can be produced using a
recursive CTE to generate that set of dates, joined with a
LEFT JOIN to the sales data.
Here is the CTE to generate the date range series:
WITH RECURSIVE dates (date) AS
(
  SELECT MIN(date) FROM sales
  UNION ALL
  SELECT date + INTERVAL 1 DAY FROM dates
  WHERE date + INTERVAL 1 DAY <= (SELECT MAX(date) FROM sales)
)
SELECT * FROM dates;
The CTE produces this result:
+------------+
| date       |
+------------+
| 2017-01-03 |
| 2017-01-04 |
| 2017-01-05 |
| 2017-01-06 |
| 2017-01-07 |
| 2017-01-08 |
| 2017-01-09 |
| 2017-01-10 |
+------------+
How the CTE works:
Joining the CTE with a LEFT JOIN against the
sales table produces the sales summary with a
row for each date in the range:
WITH RECURSIVE dates (date) AS
(
  SELECT MIN(date) FROM sales
  UNION ALL
  SELECT date + INTERVAL 1 DAY FROM dates
  WHERE date + INTERVAL 1 DAY <= (SELECT MAX(date) FROM sales)
)
SELECT dates.date, COALESCE(SUM(price), 0) AS sum_price
FROM dates LEFT JOIN sales ON dates.date = sales.date
GROUP BY dates.date
ORDER BY dates.date;
The output looks like this:
+------------+-----------+
| date       | sum_price |
+------------+-----------+
| 2017-01-03 |    300.00 |
| 2017-01-04 |      0.00 |
| 2017-01-05 |      0.00 |
| 2017-01-06 |     50.00 |
| 2017-01-07 |      0.00 |
| 2017-01-08 |    180.00 |
| 2017-01-09 |      0.00 |
| 2017-01-10 |      5.00 |
+------------+-----------+
Some points to note:
Are the queries inefficient, particularly the one with the
MAX() subquery executed for
each row in the recursive
SELECT?
EXPLAIN shows that the
subquery containing MAX() is
evaluated only once and the result is cached.
The use of COALESCE() avoids
displaying NULL in the
sum_price column on days for which no
sales data occur in the sales table.
Recursive common table expressions are useful for traversing
data that forms a hierarchy. Consider these statements that
create a small data set that shows, for each employee in a
company, the employee name and ID number, and the ID of the
employee's manager. The top-level employee (the CEO), has a
manager ID of NULL (no manager).
CREATE TABLE employees (
  id         INT PRIMARY KEY NOT NULL,
  name       VARCHAR(100) NOT NULL,
  manager_id INT NULL,
  INDEX (manager_id),
  FOREIGN KEY (manager_id) REFERENCES employees (id)
);

INSERT INTO employees VALUES
(333, "Yasmina", NULL),  # Yasmina is the CEO (manager_id is NULL)
(198, "John", 333),      # John has ID 198 and reports to 333 (Yasmina)
(692, "Tarek", 333),
(29, "Pedro", 198),
(4610, "Sarah", 29),
(72, "Pierre", 29),
(123, "Adil", 692);
The resulting data set looks like this:
mysql> SELECT * FROM employees ORDER BY id;
+------+---------+------------+
| id | name | manager_id |
+------+---------+------------+
| 29 | Pedro | 198 |
| 72 | Pierre | 29 |
| 123 | Adil | 692 |
| 198 | John | 333 |
| 333 | Yasmina | NULL |
| 692 | Tarek | 333 |
| 4610 | Sarah | 29 |
+------+---------+------------+
To produce the organizational chart with the management chain for each employee (that is, the path from CEO to employee), use a recursive CTE:
WITH RECURSIVE employee_paths (id, name, path) AS
(
SELECT id, name, CAST(id AS CHAR(200))
FROM employees
WHERE manager_id IS NULL
UNION ALL
SELECT e.id, e.name, CONCAT(ep.path, ',', e.id)
FROM employee_paths AS ep JOIN employees AS e
ON ep.id = e.manager_id
)
SELECT * FROM employee_paths ORDER BY path;

The CTE produces this output:
+------+---------+-----------------+
| id   | name    | path            |
+------+---------+-----------------+
|  333 | Yasmina | 333             |
|  198 | John    | 333,198         |
|   29 | Pedro   | 333,198,29      |
| 4610 | Sarah   | 333,198,29,4610 |
|   72 | Pierre  | 333,198,29,72   |
|  692 | Tarek   | 333,692         |
|  123 | Adil    | 333,692,123     |
+------+---------+-----------------+
How the CTE works:
The nonrecursive SELECT
produces the row for the CEO (the row with a
NULL manager ID).
The path column is widened to
CHAR(200) to ensure that there is room
for the longer path values produced by
the recursive SELECT.
Each row produced by the recursive
SELECT finds all employees
who report directly to an employee produced by a previous
row. For each such employee, the row includes the employee
ID and name, and the employee management chain. The chain is
the manager's chain, with the employee ID added to the end.
Recursion ends when employees have no others who report to them.
To find the path for a specific employee or employees, add a
WHERE clause to the top-level
SELECT. For example, to display
the results for Tarek and Sarah, modify that
SELECT like this:
mysql> WITH RECURSIVE ...
    -> ...
    -> SELECT * FROM employee_paths
    -> WHERE id IN (692, 4610)
    -> ORDER BY path;
+------+-------+-----------------+
| id   | name  | path            |
+------+-------+-----------------+
| 4610 | Sarah | 333,198,29,4610 |
|  692 | Tarek | 333,692         |
+------+-------+-----------------+
Common table expressions (CTEs) are similar to derived tables in some ways:
Both constructs are named.
Both constructs exist for the scope of a single statement.
Because of these similarities, CTEs and derived tables often can be used interchangeably. As a trivial example, these statements are equivalent:
WITH cte AS (SELECT 1) SELECT * FROM cte;

SELECT * FROM (SELECT 1) AS dt;
However, CTEs have some advantages over derived tables:
A derived table can be referenced only a single time within a query. A CTE can be referenced multiple times. To use multiple instances of a derived table result, you must derive the result multiple times.
A CTE can be self-referencing (recursive).
One CTE can refer to another.
A CTE may be easier to read when its definition appears at the beginning of the statement rather than embedded within it.
CTEs are similar to tables created with
CREATE [TEMPORARY]
TABLE but need not be defined or dropped explicitly.
For a CTE, you need no privileges to create tables.
MySQL supports local transactions (within a given client session)
through statements such as
SET autocommit,
START TRANSACTION,
COMMIT, and
ROLLBACK. See
Section 13.3.1, “START TRANSACTION, COMMIT, and ROLLBACK Statements”. XA transaction support enables MySQL to
participate in distributed transactions as well. See
Section 13.3.8, “XA Transactions”.
START TRANSACTION
[transaction_characteristic [, transaction_characteristic] ...]
transaction_characteristic: {
WITH CONSISTENT SNAPSHOT
| READ WRITE
| READ ONLY
}
BEGIN [WORK]
COMMIT [WORK] [AND [NO] CHAIN] [[NO] RELEASE]
ROLLBACK [WORK] [AND [NO] CHAIN] [[NO] RELEASE]
SET autocommit = {0 | 1}
These statements provide control over use of transactions:
START TRANSACTION or
BEGIN start a new transaction.
COMMIT commits the current transaction,
making its changes permanent.
ROLLBACK rolls back the current
transaction, canceling its changes.
SET autocommit disables or enables the
default autocommit mode for the current session.
By default, MySQL runs with
autocommit mode enabled.
This means that, when not otherwise inside a transaction, each
statement is atomic, as if it were surrounded by START
TRANSACTION and COMMIT. You cannot
use ROLLBACK to undo the effect; however, if an
error occurs during statement execution, the statement is rolled
back.
To disable autocommit mode implicitly for a single series of
statements, use the START TRANSACTION
statement:
START TRANSACTION;
SELECT @A:=SUM(salary) FROM table1 WHERE type=1;
UPDATE table2 SET summary=@A WHERE type=1;
COMMIT;
With START TRANSACTION, autocommit remains
disabled until you end the transaction with
COMMIT or ROLLBACK. The
autocommit mode then reverts to its previous state.
START TRANSACTION permits several modifiers
that control transaction characteristics. To specify multiple
modifiers, separate them by commas.
The WITH CONSISTENT SNAPSHOT modifier
starts a consistent
read for storage engines that are capable of it. This
applies only to InnoDB. The effect is the
same as issuing a START TRANSACTION
followed by a SELECT from any
InnoDB table. See
Section 15.7.2.3, “Consistent Nonlocking Reads”. The WITH
CONSISTENT SNAPSHOT modifier does not change the
current transaction
isolation level,
so it provides a consistent snapshot only if the current
isolation level is one that permits a consistent read. The
only isolation level that permits a consistent read is
REPEATABLE READ. For all
other isolation levels, the WITH CONSISTENT
SNAPSHOT clause is ignored. A warning is generated
when the WITH CONSISTENT SNAPSHOT clause is
ignored.
The READ WRITE and READ
ONLY modifiers set the transaction access mode. They
permit or prohibit changes to tables used in the transaction.
The READ ONLY restriction prevents the
transaction from modifying or locking both transactional and
nontransactional tables that are visible to other
transactions; the transaction can still modify or lock
temporary tables.
MySQL enables extra optimizations for queries on
InnoDB tables when the transaction is known
to be read-only. Specifying READ ONLY
ensures these optimizations are applied in cases where the
read-only status cannot be determined automatically. See
Section 8.5.3, “Optimizing InnoDB Read-Only Transactions” for more
information.
If no access mode is specified, the default mode applies.
Unless the default has been changed, it is read/write. It is
not permitted to specify both READ WRITE
and READ ONLY in the same statement.
In read-only mode, it remains possible to change tables
created with the TEMPORARY keyword using
DML statements. Changes made with DDL statements are not
permitted, just as with permanent tables.
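As a minimal sketch of this distinction (the tables t_perm, a permanent InnoDB table, and t_tmp, a TEMPORARY table, are hypothetical, and the exact error text can vary):

CREATE TEMPORARY TABLE t_tmp (c INT);
START TRANSACTION READ ONLY;
INSERT INTO t_tmp VALUES (1);   -- permitted: DML on a TEMPORARY table
UPDATE t_perm SET c = c + 1;    -- rejected: statement cannot execute in a READ ONLY transaction
COMMIT;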
For additional information about transaction access mode, including ways to change the default mode, see Section 13.3.7, “SET TRANSACTION Statement”.
If the read_only system
variable is enabled, explicitly starting a transaction with
START TRANSACTION READ WRITE requires the
CONNECTION_ADMIN privilege (or
the deprecated SUPER
privilege).
Many APIs used for writing MySQL client applications (such as
JDBC) provide their own methods for starting transactions that
can (and sometimes should) be used instead of sending a
START TRANSACTION statement from the client.
See Chapter 28, Connectors and APIs, or the documentation for
your API, for more information.
To disable autocommit mode explicitly, use the following statement:
SET autocommit=0;
After disabling autocommit mode by setting the
autocommit variable to zero,
changes to transaction-safe tables (such as those for
InnoDB or
NDB) are not made permanent
immediately. You must use COMMIT to
store your changes to disk or ROLLBACK to
ignore the changes.
autocommit is a session variable
and must be set for each session. To disable autocommit mode for
each new connection, see the description of the
autocommit system variable at
Section 5.1.8, “Server System Variables”.
BEGIN and BEGIN WORK are
supported as aliases of START TRANSACTION for
initiating a transaction. START TRANSACTION is
standard SQL syntax, is the recommended way to start an ad-hoc
transaction, and permits modifiers that BEGIN
does not.
The BEGIN statement differs from the use of the
BEGIN keyword that starts a
BEGIN ... END
compound statement. The latter does not begin a transaction. See
Section 13.6.1, “BEGIN ... END Compound Statement”.
Within all stored programs (stored procedures and functions,
triggers, and events), the parser treats BEGIN
[WORK] as the beginning of a
BEGIN ...
END block. Begin a transaction in this context with
START
TRANSACTION instead.
The optional WORK keyword is supported for
COMMIT and ROLLBACK, as are
the CHAIN and RELEASE
clauses. CHAIN and RELEASE
can be used for additional control over transaction completion.
The value of the completion_type
system variable determines the default completion behavior. See
Section 5.1.8, “Server System Variables”.
The AND CHAIN clause causes a new transaction
to begin as soon as the current one ends, and the new transaction
has the same isolation level as the just-terminated transaction.
The new transaction also uses the same access mode (READ
WRITE or READ ONLY) as the
just-terminated transaction. The RELEASE clause
causes the server to disconnect the current client session after
terminating the current transaction. Including the
NO keyword suppresses CHAIN
or RELEASE completion, which can be useful if
the completion_type system
variable is set to cause chaining or release completion by
default.
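The following sequence is an illustrative sketch (the table name t1 is hypothetical): it commits, immediately starts a chained transaction with the same characteristics, and finally commits and disconnects the session.

START TRANSACTION;
INSERT INTO t1 VALUES (1);
COMMIT AND CHAIN;   -- a new transaction with the same isolation level and access mode begins at once
INSERT INTO t1 VALUES (2);
COMMIT RELEASE;     -- commits, then the server disconnects the client session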
Beginning a transaction causes any pending transaction to be committed. See Section 13.3.3, “Statements That Cause an Implicit Commit”, for more information.
Beginning a transaction also causes table locks acquired with
LOCK TABLES to be released, as
though you had executed
UNLOCK
TABLES. Beginning a transaction does not release a
global read lock acquired with FLUSH TABLES
WITH READ LOCK.
For best results, transactions should be performed using only tables managed by a single transaction-safe storage engine. Otherwise, the following problems can occur:
If you use tables from more than one transaction-safe storage
engine (such as InnoDB), and the
transaction isolation level is not
SERIALIZABLE, it is
possible that when one transaction commits, another ongoing
transaction that uses the same tables will see only some of
the changes made by the first transaction. That is, the
atomicity of transactions is not guaranteed with mixed engines
and inconsistencies can result. (If mixed-engine transactions
are infrequent, you can use
SET
TRANSACTION ISOLATION LEVEL to set the isolation
level to SERIALIZABLE on a
per-transaction basis as necessary.)
If you use tables that are not transaction-safe within a transaction, changes to those tables are stored at once, regardless of the status of autocommit mode.
If you issue a
ROLLBACK
statement after updating a nontransactional table within a
transaction, an
ER_WARNING_NOT_COMPLETE_ROLLBACK
warning occurs. Changes to transaction-safe tables are rolled
back, but not changes to nontransaction-safe tables.
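For example, assuming hypothetical tables t_innodb (InnoDB) and t_myisam (MyISAM), a sequence such as the following produces that warning because only the InnoDB change can be undone:

START TRANSACTION;
INSERT INTO t_innodb VALUES (1);
INSERT INTO t_myisam VALUES (1);   -- stored immediately; MyISAM is not transaction-safe
ROLLBACK;                          -- the InnoDB insert is undone; a not-complete-rollback warning is issued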
Each transaction is stored in the binary log in one chunk, upon
COMMIT. Transactions that are
rolled back are not logged.
(Exception: Modifications to
nontransactional tables cannot be rolled back. If a transaction
that is rolled back includes modifications to nontransactional
tables, the entire transaction is logged with a
ROLLBACK
statement at the end to ensure that modifications to the
nontransactional tables are replicated.) See
Section 5.4.4, “The Binary Log”.
You can change the isolation level or access mode for transactions
with the SET TRANSACTION statement.
See Section 13.3.7, “SET TRANSACTION Statement”.
Rolling back can be a slow operation that may occur implicitly
without the user having explicitly asked for it (for example, when
an error occurs). Because of this, SHOW
PROCESSLIST displays Rolling back in
the State column for the session, not only for
explicit rollbacks performed with the
ROLLBACK
statement but also for implicit rollbacks.
In MySQL 8.0, BEGIN,
COMMIT, and ROLLBACK are
not affected by --replicate-do-db
or --replicate-ignore-db rules.
Some statements cannot be rolled back. In general, these include data definition language (DDL) statements, such as those that create or drop databases, and those that create, drop, or alter tables or stored routines.
You should design your transactions not to include such
statements. If you issue a statement early in a transaction that
cannot be rolled back, and then another statement later fails, the
full effect of the transaction cannot be rolled back in such cases
by issuing a
ROLLBACK
statement.
The statements listed in this section (and any synonyms for them)
implicitly end any transaction active in the current session, as
if you had done a COMMIT before
executing the statement.
Most of these statements also cause an implicit commit after executing. The intent is to handle each such statement in its own special transaction. Transaction-control and locking statements are exceptions: If an implicit commit occurs before execution, another does not occur after.
Data definition language (DDL)
statements that define or modify database objects.
ALTER EVENT,
ALTER FUNCTION,
ALTER PROCEDURE,
ALTER SERVER,
ALTER TABLE,
ALTER VIEW,
CREATE DATABASE,
CREATE EVENT,
CREATE FUNCTION,
CREATE INDEX,
CREATE PROCEDURE,
CREATE ROLE,
CREATE SERVER,
CREATE SPATIAL REFERENCE
SYSTEM, CREATE TABLE,
CREATE TRIGGER,
CREATE VIEW,
DROP DATABASE,
DROP EVENT,
DROP FUNCTION,
DROP INDEX,
DROP PROCEDURE,
DROP ROLE,
DROP SERVER,
DROP SPATIAL REFERENCE SYSTEM,
DROP TABLE,
DROP TRIGGER,
DROP VIEW,
INSTALL PLUGIN,
RENAME TABLE,
TRUNCATE TABLE,
UNINSTALL PLUGIN.
CREATE TABLE and
DROP TABLE statements do not
commit a transaction if the TEMPORARY
keyword is used. (This does not apply to other operations on
temporary tables such as ALTER
TABLE and CREATE
INDEX, which do cause a commit.) However, although
no implicit commit occurs, neither can the statement be rolled
back, which means that the use of such statements causes
transactional atomicity to be violated. For example, if you
use CREATE
TEMPORARY TABLE and then roll back the transaction,
the table remains in existence.
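A minimal sketch of this behavior, assuming a hypothetical temporary table tmp_demo:

START TRANSACTION;
CREATE TEMPORARY TABLE tmp_demo (c INT);   -- no implicit commit, but also cannot be rolled back
ROLLBACK;
SELECT * FROM tmp_demo;                    -- the table still exists; the query returns an empty result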
The CREATE TABLE statement in
InnoDB is processed as a single
transaction. This means that a
ROLLBACK
from the user does not undo CREATE
TABLE statements the user made during that
transaction.
CREATE TABLE ...
SELECT causes an implicit commit before and after
the statement is executed when you are creating nontemporary
tables. (No commit occurs for CREATE TEMPORARY TABLE
... SELECT.)
Statements that implicitly use or modify
tables in the mysql database.
ALTER USER,
CREATE USER,
DROP USER,
GRANT,
RENAME USER,
REVOKE,
SET PASSWORD.
Transaction-control and locking
statements.
BEGIN,
LOCK TABLES, SET
autocommit = 1 (if the value is not already 1),
START
TRANSACTION,
UNLOCK
TABLES.
UNLOCK
TABLES commits a transaction only if any tables
currently have been locked with LOCK
TABLES to acquire nontransactional table locks. A
commit does not occur for
UNLOCK
TABLES following FLUSH TABLES
WITH READ LOCK because the latter statement does not
acquire table-level locks.
Transactions cannot be nested. This is a consequence of the
implicit commit performed for any current transaction when you
issue a START
TRANSACTION statement or one of its synonyms.
Statements that cause an implicit commit cannot be used in an
XA transaction while the transaction is in an
ACTIVE state.
The BEGIN
statement differs from the use of the BEGIN
keyword that starts a
BEGIN ...
END compound statement. The latter does not cause an
implicit commit. See Section 13.6.1, “BEGIN ... END Compound Statement”.
Data loading statements.
LOAD DATA.
LOAD DATA causes an implicit
commit only for tables using the
NDB storage engine.
Administrative statements.
ANALYZE TABLE,
CACHE INDEX,
CHECK TABLE,
FLUSH,
LOAD INDEX INTO
CACHE, OPTIMIZE
TABLE, REPAIR TABLE,
RESET (but not
RESET PERSIST).
Replication control
statements. START
SLAVE, STOP SLAVE,
RESET SLAVE,
CHANGE MASTER TO.
SAVEPOINT identifier
ROLLBACK [WORK] TO [SAVEPOINT] identifier
RELEASE SAVEPOINT identifier
InnoDB supports the SQL statements
SAVEPOINT,
ROLLBACK TO
SAVEPOINT,
RELEASE
SAVEPOINT and the optional WORK
keyword for
ROLLBACK.
The SAVEPOINT statement sets a
named transaction savepoint with a name of
identifier. If the current transaction
has a savepoint with the same name, the old savepoint is deleted
and a new one is set.
The ROLLBACK TO
SAVEPOINT statement rolls back a transaction to the
named savepoint without terminating the transaction. Modifications
that the current transaction made to rows after the savepoint was
set are undone in the rollback, but InnoDB does
not release the row locks that were stored in
memory after the savepoint. (For a new inserted row, the lock
information is carried by the transaction ID stored in the row;
the lock is not separately stored in memory. In this case, the row
lock is released in the undo.) Savepoints that were set at a later
time than the named savepoint are deleted.
If the ROLLBACK TO
SAVEPOINT statement returns the following error, it
means that no savepoint with the specified name exists:
ERROR 1305 (42000): SAVEPOINT identifier does not exist
The RELEASE
SAVEPOINT statement removes the named savepoint from the
set of savepoints of the current transaction. No commit or
rollback occurs. It is an error if the savepoint does not exist.
All savepoints of the current transaction are deleted if you
execute a COMMIT, or a
ROLLBACK that
does not name a savepoint.
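The following sketch (table t1 is hypothetical) shows a savepoint being set, rolled back to, and released within a single transaction:

START TRANSACTION;
INSERT INTO t1 VALUES (1);
SAVEPOINT sp1;
INSERT INTO t1 VALUES (2);
ROLLBACK TO SAVEPOINT sp1;   -- undoes the second insert; the transaction remains open
RELEASE SAVEPOINT sp1;       -- removes the savepoint; no commit or rollback occurs
COMMIT;                      -- only the first row is made permanent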
A new savepoint level is created when a stored function is invoked or a trigger is activated. The savepoints on previous levels become unavailable and thus do not conflict with savepoints on the new level. When the function or trigger terminates, any savepoints it created are released and the previous savepoint level is restored.
LOCK INSTANCE FOR BACKUP
UNLOCK INSTANCE
LOCK INSTANCE FOR BACKUP acquires an
instance-level backup lock that permits DML
during an online backup while preventing operations that could
result in an inconsistent snapshot.
Executing the LOCK INSTANCE FOR BACKUP
statement requires the BACKUP_ADMIN
privilege. The BACKUP_ADMIN
privilege is automatically granted to users with the
RELOAD privilege when performing an
in-place upgrade to MySQL 8.0 from an earlier
version.
Multiple sessions can hold a backup lock simultaneously.
UNLOCK INSTANCE releases a backup lock held by
the current session. A backup lock held by a session is also
released if the session is terminated.
LOCK INSTANCE FOR BACKUP prevents files from
being created, renamed, or removed. REPAIR
TABLE, TRUNCATE TABLE,
OPTIMIZE TABLE, and account
management statements are blocked. See
Section 13.7.1, “Account Management Statements”. Operations that
modify InnoDB files that are not recorded in
the InnoDB redo log are also blocked.
LOCK INSTANCE FOR BACKUP permits DDL operations
that only affect user-created temporary tables. In effect, files
that belong to user-created temporary tables can be created,
renamed, or removed while a backup lock is held. Creation of
binary log files is also permitted.
A backup lock acquired by LOCK INSTANCE FOR
BACKUP is independent of transactional locks and locks
taken by FLUSH TABLES tbl_name
[, tbl_name] ... WITH READ LOCK, and the
following sequences of statements are permitted:

LOCK INSTANCE FOR BACKUP;
FLUSH TABLES tbl_name [, tbl_name] ... WITH READ LOCK;
UNLOCK TABLES;
UNLOCK INSTANCE;

FLUSH TABLES tbl_name [, tbl_name] ... WITH READ LOCK;
LOCK INSTANCE FOR BACKUP;
UNLOCK INSTANCE;
UNLOCK TABLES;
The lock_wait_timeout setting
defines the amount of time that a LOCK INSTANCE FOR
BACKUP statement waits to acquire a lock before giving
up.
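A typical usage sketch for a backup session is shown here; the timeout value of 300 seconds is only an example:

SET SESSION lock_wait_timeout = 300;   -- wait up to 300 seconds to acquire the backup lock
LOCK INSTANCE FOR BACKUP;
-- ... copy the data files or run the backup tool here ...
UNLOCK INSTANCE;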
LOCK TABLES
tbl_name [[AS] alias] lock_type
[, tbl_name [[AS] alias] lock_type] ...
lock_type: {
READ [LOCAL]
| [LOW_PRIORITY] WRITE
}
UNLOCK TABLES
MySQL enables client sessions to acquire table locks explicitly for the purpose of cooperating with other sessions for access to tables, or to prevent other sessions from modifying tables during periods when a session requires exclusive access to them. A session can acquire or release locks only for itself. One session cannot acquire locks for another session or release locks held by another session.
Locks may be used to emulate transactions or to get more speed when updating tables. This is explained in more detail in Table-Locking Restrictions and Conditions.
LOCK TABLES explicitly acquires
table locks for the current client session. Table locks can be
acquired for base tables or views. You must have the
LOCK TABLES privilege, and the
SELECT privilege for each object to
be locked.
For view locking, LOCK TABLES adds
all base tables used in the view to the set of tables to be locked
and locks them automatically. If you lock a table explicitly with
LOCK TABLES, any tables used in
triggers are also locked implicitly, as described in
LOCK TABLES and Triggers.
If you lock a table explicitly with LOCK
TABLES, any tables related by a foreign key constraint
are opened and locked implicitly. For foreign key checks, a shared
read-only lock (LOCK
TABLES READ) is taken on related tables. For cascading
updates, a shared-nothing write lock
(LOCK TABLES
WRITE) is taken on related tables that are involved in
the operation.
UNLOCK
TABLES explicitly releases any table locks held by the
current session. LOCK TABLES
implicitly releases any table locks held by the current session
before acquiring new locks.
Another use for
UNLOCK
TABLES is to release the global read lock acquired with
the FLUSH TABLES WITH READ LOCK
statement, which enables you to lock all tables in all databases.
See Section 13.7.8.3, “FLUSH Statement”. (This is a very convenient way to get
backups if you have a file system such as Veritas that can take
snapshots in time.)
A table lock protects only against inappropriate reads or writes
by other sessions. A session holding a WRITE
lock can perform table-level operations such as
DROP TABLE or
TRUNCATE TABLE. For sessions
holding a READ lock, DROP
TABLE and TRUNCATE TABLE
operations are not permitted.
The following discussion applies only to
non-TEMPORARY tables. LOCK
TABLES is permitted (but ignored) for a
TEMPORARY table. The table can be accessed
freely by the session within which it was created, regardless of
what other locking may be in effect. No lock is necessary because
no other session can see the table.
To acquire table locks within the current session, use the
LOCK TABLES statement, which
acquires metadata locks (see
Section 8.11.4, “Metadata Locking”).
The following lock types are available:
READ [LOCAL] lock:
The session that holds the lock can read the table (but not write it).
Multiple sessions can acquire a READ lock
for the table at the same time.
Other sessions can read the table without explicitly
acquiring a READ lock.
The LOCAL modifier enables nonconflicting
INSERT statements (concurrent
inserts) by other sessions to execute while the lock is
held. (See Section 8.11.3, “Concurrent Inserts”.) However,
READ LOCAL cannot be used if you are
going to manipulate the database using processes external to
the server while you hold the lock. For
InnoDB tables, READ
LOCAL is the same as READ.
[LOW_PRIORITY] WRITE lock:
The session that holds the lock can read and write the table.
Only the session that holds the lock can access the table. No other session can access it until the lock is released.
Lock requests for the table by other sessions block while
the WRITE lock is held.
The LOW_PRIORITY modifier has no effect.
In previous versions of MySQL, it affected locking behavior,
but this is no longer true. It is now deprecated and its use
produces a warning. Use WRITE without
LOW_PRIORITY instead.
WRITE locks normally have higher priority
than READ locks to ensure that updates are
processed as soon as possible. This means that if one session
obtains a READ lock and then another session
requests a WRITE lock, subsequent
READ lock requests wait until the session
that requested the WRITE lock has obtained
the lock and released it. (An exception to this policy can occur
for small values of the
max_write_lock_count system
variable; see Section 8.11.4, “Metadata Locking”.)
If the LOCK TABLES statement must
wait due to locks held by other sessions on any of the tables,
it blocks until all locks can be acquired.
A session that requires locks must acquire all the locks that it
needs in a single LOCK TABLES
statement. While the locks thus obtained are held, the session
can access only the locked tables. For example, in the following
sequence of statements, an error occurs for the attempt to
access t2 because it was not locked in the
LOCK TABLES statement:
mysql> LOCK TABLES t1 READ;
mysql> SELECT COUNT(*) FROM t1;
+----------+
| COUNT(*) |
+----------+
|        3 |
+----------+
mysql> SELECT COUNT(*) FROM t2;
ERROR 1100 (HY000): Table 't2' was not locked with LOCK TABLES
Tables in the INFORMATION_SCHEMA database are
an exception. They can be accessed without being locked
explicitly even while a session holds table locks obtained with
LOCK TABLES.
You cannot refer to a locked table multiple times in a single query using the same name. Use aliases instead, and obtain a separate lock for the table and each alias:
mysql> LOCK TABLE t WRITE, t AS t1 READ;
mysql> INSERT INTO t SELECT * FROM t;
ERROR 1100: Table 't' was not locked with LOCK TABLES
mysql> INSERT INTO t SELECT * FROM t AS t1;
The error occurs for the first
INSERT because there are two
references to the same name for a locked table. The second
INSERT succeeds because the
references to the table use different names.
If your statements refer to a table by means of an alias, you must lock the table using that same alias. It does not work to lock the table without specifying the alias:
mysql> LOCK TABLE t READ;
mysql> SELECT * FROM t AS myalias;
ERROR 1100: Table 'myalias' was not locked with LOCK TABLES
Conversely, if you lock a table using an alias, you must refer to it in your statements using that alias:
mysql> LOCK TABLE t AS myalias READ;
mysql> SELECT * FROM t;
ERROR 1100: Table 't' was not locked with LOCK TABLES
mysql> SELECT * FROM t AS myalias;
When the table locks held by a session are released, they are all released at the same time. A session can release its locks explicitly, or locks may be released implicitly under certain conditions.
A session can release its locks explicitly with
UNLOCK
TABLES.
If a session issues a LOCK
TABLES statement to acquire a lock while already
holding locks, its existing locks are released implicitly
before the new locks are granted.
If a session begins a transaction (for example, with
START
TRANSACTION), an implicit
UNLOCK
TABLES is performed, which causes existing locks
to be released. (For additional information about the
interaction between table locking and transactions, see
Interaction of Table Locking and Transactions.)
If the connection for a client session terminates, whether normally or abnormally, the server implicitly releases all table locks held by the session (transactional and nontransactional). If the client reconnects, the locks will no longer be in effect. In addition, if the client had an active transaction, the server rolls back the transaction upon disconnect, and if reconnect occurs, the new session begins with autocommit enabled. For this reason, clients may wish to disable auto-reconnect. With auto-reconnect in effect, the client is not notified if reconnect occurs but any table locks or current transaction will have been lost. With auto-reconnect disabled, if the connection drops, an error occurs for the next statement issued. The client can detect the error and take appropriate action such as reacquiring the locks or redoing the transaction. See Section 28.7.27, “C API Automatic Reconnection Control”.
If you use ALTER TABLE on a
locked table, it may become unlocked. For example, if you
attempt a second ALTER TABLE
operation, the result may be an error Table
'tbl_name' was not locked with LOCK
TABLES. To handle this, lock the table again prior
to the second alteration. See also
Section B.4.6.1, “Problems with ALTER TABLE”.
LOCK TABLES and
UNLOCK
TABLES interact with the use of transactions as
follows:
LOCK TABLES is not
transaction-safe and implicitly commits any active
transaction before attempting to lock the tables.
UNLOCK
TABLES implicitly commits any active transaction,
but only if LOCK TABLES has
been used to acquire table locks. For example, in the
following set of statements,
UNLOCK
TABLES releases the global read lock but does not
commit the transaction because no table locks are in effect:
FLUSH TABLES WITH READ LOCK;
START TRANSACTION;
SELECT ... ;
UNLOCK TABLES;
Beginning a transaction (for example, with
START
TRANSACTION) implicitly commits any current
transaction and releases existing table locks.
FLUSH TABLES WITH READ LOCK
acquires a global read lock and not table locks, so it is
not subject to the same behavior as
LOCK TABLES and
UNLOCK
TABLES with respect to table locking and implicit
commits. For example,
START
TRANSACTION does not release the global read lock.
See Section 13.7.8.3, “FLUSH Statement”.
Other statements that implicitly cause transactions to be committed do not release existing table locks. For a list of such statements, see Section 13.3.3, “Statements That Cause an Implicit Commit”.
The correct way to use LOCK
TABLES and
UNLOCK
TABLES with transactional tables, such as
InnoDB tables, is to begin a transaction
with SET autocommit = 0 (not
START
TRANSACTION) followed by LOCK
TABLES, and to not call
UNLOCK
TABLES until you commit the transaction
explicitly. For example, if you need to write to table
t1 and read from table
t2, you can do this:
SET autocommit=0;
LOCK TABLES t1 WRITE, t2 READ, ...;
... do something with tables t1 and t2 here ...
COMMIT;
UNLOCK TABLES;
When you call LOCK TABLES,
InnoDB internally takes its own table
lock, and MySQL takes its own table lock.
InnoDB releases its internal table lock
at the next commit, but for MySQL to release its table lock,
you have to call
UNLOCK
TABLES. You should not have
autocommit = 1, because
then InnoDB releases its internal table
lock immediately after the call of LOCK
TABLES, and deadlocks can very easily happen.
InnoDB does not acquire the internal
table lock at all if autocommit =
1, to help old applications avoid unnecessary
deadlocks.
ROLLBACK
does not release table locks.
If you lock a table explicitly with LOCK
TABLES, any tables used in triggers are also locked
implicitly:
The locks are taken at the same time as those acquired
explicitly with the LOCK
TABLES statement.
The lock on a table used in a trigger depends on whether the table is used only for reading. If so, a read lock suffices. Otherwise, a write lock is used.
If a table is locked explicitly for reading with
LOCK TABLES, but needs to be
locked for writing because it might be modified within a
trigger, a write lock is taken rather than a read lock.
(That is, an implicit write lock needed due to the table's
appearance within a trigger causes an explicit read lock
request for the table to be converted to a write lock
request.)
Suppose that you lock two tables, t1 and
t2, using this statement:
LOCK TABLES t1 WRITE, t2 READ;
If t1 or t2 have any
triggers, tables used within the triggers will also be locked.
Suppose that t1 has a trigger defined like
this:
CREATE TRIGGER t1_a_ins AFTER INSERT ON t1 FOR EACH ROW
BEGIN
UPDATE t4 SET count = count+1
WHERE id = NEW.id AND EXISTS (SELECT a FROM t3);
INSERT INTO t2 VALUES(1, 2);
END;
The result of the LOCK TABLES
statement is that t1 and
t2 are locked because they appear in the
statement, and t3 and t4
are locked because they are used within the trigger:
t1 is locked for writing per the
WRITE lock request.
t2 is locked for writing, even though the
request is for a READ lock. This occurs
because t2 is inserted into within the
trigger, so the READ request is converted
to a WRITE request.
t3 is locked for reading because it is
only read from within the trigger.
t4 is locked for writing because it might
be updated within the trigger.
You can safely use KILL to
terminate a session that is waiting for a table lock. See
Section 13.7.8.4, “KILL Statement”.
LOCK TABLES and
UNLOCK
TABLES cannot be used within stored programs.
Tables in the performance_schema database
cannot be locked with LOCK
TABLES, except the
setup_xxx tables.
The following statements are prohibited while a
LOCK TABLES statement is in
effect: CREATE TABLE,
CREATE TABLE ...
LIKE, CREATE VIEW,
DROP VIEW, and DDL statements on
stored functions and procedures and events.
For some operations, system tables in the
mysql database must be accessed. For example,
the HELP statement requires the
contents of the server-side help tables, and
CONVERT_TZ() might need to read
the time zone tables. The server implicitly locks the system
tables for reading as necessary so that you need not lock them
explicitly. These tables are treated as just described:
mysql.help_category
mysql.help_keyword
mysql.help_relation
mysql.help_topic
mysql.time_zone
mysql.time_zone_leap_second
mysql.time_zone_name
mysql.time_zone_transition
mysql.time_zone_transition_type
If you want to explicitly place a WRITE lock
on any of those tables with a LOCK
TABLES statement, the table must be the only one
locked; no other table can be locked with the same statement.
Normally, you do not need to lock tables, because all single
UPDATE statements are atomic; no
other session can interfere with any other currently executing
SQL statement. However, there are a few cases when locking
tables may provide an advantage:
If you are going to run many operations on a set of
MyISAM tables, it is much faster to lock
the tables you are going to use. Locking
MyISAM tables speeds up inserting,
updating, or deleting on them because MySQL does not flush
the key cache for the locked tables until
UNLOCK
TABLES is called. Normally, the key cache is
flushed after each SQL statement.
The downside to locking the tables is that no session can
update a READ-locked table (including the
one holding the lock) and no session can access a
WRITE-locked table other than the one
holding the lock.
If you are using tables for a nontransactional storage
engine, you must use LOCK
TABLES if you want to ensure that no other session
modifies the tables between a
SELECT and an
UPDATE. The example shown
here requires LOCK TABLES to
execute safely:
LOCK TABLES trans READ, customer WRITE;
SELECT SUM(value) FROM trans WHERE customer_id=some_id;
UPDATE customer
  SET total_value=sum_from_previous_statement
  WHERE customer_id=some_id;
UNLOCK TABLES;
Without LOCK TABLES, it is
possible that another session might insert a new row in the
trans table between execution of the
SELECT and
UPDATE statements.
You can avoid using LOCK TABLES
in many cases by using relative updates (UPDATE
customer SET value=value+new_value)
or the LAST_INSERT_ID() function.
You can also avoid locking tables in some cases by using the
user-level advisory lock functions
GET_LOCK() and
RELEASE_LOCK(). These locks are
saved in a hash table in the server and implemented with
pthread_mutex_lock() and
pthread_mutex_unlock() for high speed. See
Section 12.14, “Locking Functions”.
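For example, two cooperating sessions might serialize a critical section with an advisory lock; the lock name and timeout shown here are arbitrary:

SELECT GET_LOCK('app_critical_section', 10);   -- returns 1 if the lock is acquired within 10 seconds
-- ... perform work that must not run concurrently in another session ...
SELECT RELEASE_LOCK('app_critical_section');   -- returns 1 if this session held the lock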
See Section 8.11.1, “Internal Locking Methods”, for more information on locking policy.
SET [GLOBAL | SESSION] TRANSACTION
transaction_characteristic [, transaction_characteristic] ...
transaction_characteristic: {
ISOLATION LEVEL level
| access_mode
}
level: {
REPEATABLE READ
| READ COMMITTED
| READ UNCOMMITTED
| SERIALIZABLE
}
access_mode: {
READ WRITE
| READ ONLY
}
This statement specifies
transaction
characteristics. It takes a list of one or more characteristic
values separated by commas. Each characteristic value sets the
transaction isolation
level or access mode. The isolation level is used for
operations on InnoDB tables. The
access mode specifies whether transactions operate in read/write
or read-only mode.
In addition, SET TRANSACTION can
include an optional GLOBAL or
SESSION keyword to indicate the scope of the
statement.
To set the transaction isolation level, use an
ISOLATION LEVEL level
clause. It is not
permitted to specify multiple ISOLATION LEVEL
clauses in the same SET
TRANSACTION statement.
The default isolation level is
REPEATABLE READ. Other
permitted values are READ
COMMITTED, READ
UNCOMMITTED, and
SERIALIZABLE. For information
about these isolation levels, see
Section 15.7.2.1, “Transaction Isolation Levels”.
To set the transaction access mode, use a READ
WRITE or READ ONLY clause. It is
not permitted to specify multiple access-mode clauses in the
same SET TRANSACTION statement.
By default, a transaction takes place in read/write mode, with
both reads and writes permitted to tables used in the
transaction. This mode may be specified explicitly using
SET TRANSACTION with an access
mode of READ WRITE.
If the transaction access mode is set to READ
ONLY, changes to tables are prohibited. This may
enable storage engines to make performance improvements that are
possible when writes are not permitted.
In read-only mode, it remains possible to change tables created
with the TEMPORARY keyword using DML
statements. Changes made with DDL statements are not permitted,
just as with permanent tables.
The READ WRITE and READ
ONLY access modes also may be specified for an
individual transaction using the
START
TRANSACTION statement.
You can set transaction characteristics globally, for the current session, or for the next transaction only:
With the GLOBAL keyword:
The statement applies globally for all subsequent sessions.
Existing sessions are unaffected.
With the SESSION keyword:
The statement applies to all subsequent transactions performed within the current session.
The statement is permitted within transactions, but does not affect the current ongoing transaction.
If executed between transactions, the statement overrides any preceding statement that sets the next-transaction value of the named characteristics.
Without any SESSION or
GLOBAL keyword:
The statement applies only to the next single transaction performed within the session.
Subsequent transactions revert to using the session value of the named characteristics.
The statement is not permitted within transactions:
mysql> START TRANSACTION;
Query OK, 0 rows affected (0.02 sec)

mysql> SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
ERROR 1568 (25001): Transaction characteristics can't be changed
while a transaction is in progress
A change to global transaction characteristics requires the
CONNECTION_ADMIN privilege (or
the deprecated SUPER privilege).
Any session is free to change its session characteristics (even
in the middle of a transaction), or the characteristics for its
next transaction (prior to the start of that transaction).
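To illustrate the three scopes, consider the following statements; they are a sketch only, and the isolation levels chosen are arbitrary examples:

SET GLOBAL TRANSACTION ISOLATION LEVEL READ COMMITTED;    -- affects all subsequent new sessions
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;  -- affects subsequent transactions in this session
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;             -- affects only the next transaction in this session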
To set the global isolation level at server startup, use the
--transaction-isolation=level
option on the command line or in an option file. Values of
level for this option use dashes
rather than spaces, so the permissible values are
READ-UNCOMMITTED,
READ-COMMITTED,
REPEATABLE-READ, or
SERIALIZABLE.
Similarly, to set the global transaction access mode at server
startup, use the
--transaction-read-only option.
The default is OFF (read/write mode) but the
value can be set to ON for a mode of read
only.
For example, to set the isolation level to
REPEATABLE READ and the
access mode to READ WRITE, use these lines in
the [mysqld] section of an option file:
[mysqld]
transaction-isolation = REPEATABLE-READ
transaction-read-only = OFF
At runtime, characteristics at the global, session, and
next-transaction scope levels can be set indirectly using the
SET TRANSACTION statement, as
described previously. They can also be set directly using the
SET
statement to assign values to the
transaction_isolation and
transaction_read_only system
variables:
SET TRANSACTION permits
optional GLOBAL and
SESSION keywords for setting transaction
characteristics at different scope levels.
The
SET
statement for assigning values to the
transaction_isolation and
transaction_read_only
system variables has syntaxes for setting these variables at
different scope levels.
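For example, the system variables can be assigned directly; values for transaction_isolation use dashes, and the particular values shown are arbitrary:

SET GLOBAL transaction_isolation = 'READ-COMMITTED';
SET SESSION transaction_read_only = ON;
SET @@SESSION.transaction_isolation = 'REPEATABLE-READ';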
The following tables show the characteristic scope level set by
each SET TRANSACTION and
variable-assignment syntax.
Table 13.9 SET TRANSACTION Syntax for Transaction Characteristics

| Syntax | Affected Characteristic Scope |
|---|---|
| SET GLOBAL TRANSACTION transaction_characteristic | Global |
| SET SESSION TRANSACTION transaction_characteristic | Session |
| SET TRANSACTION transaction_characteristic | Next transaction only |

Table 13.10 SET Syntax for Transaction Characteristics

| Syntax | Affected Characteristic Scope |
|---|---|
| SET GLOBAL var_name = value | Global |
| SET @@GLOBAL.var_name = value | Global |
| SET PERSIST var_name = value | Global |
| SET @@PERSIST.var_name = value | Global |
| SET PERSIST_ONLY var_name = value | No runtime effect |
| SET @@PERSIST_ONLY.var_name = value | No runtime effect |
| SET SESSION var_name = value | Session |
| SET @@SESSION.var_name = value | Session |
| SET var_name = value | Session |
| SET @@var_name = value | Next transaction only |
It is possible to check the global and session values of transaction characteristics at runtime:
SELECT @@GLOBAL.transaction_isolation, @@GLOBAL.transaction_read_only;
SELECT @@SESSION.transaction_isolation, @@SESSION.transaction_read_only;
Support for XA transactions is
available for the InnoDB storage
engine. The MySQL XA implementation is based on the X/Open CAE
document Distributed Transaction Processing: The XA
Specification. This document is published by The Open
Group and available at
http://www.opengroup.org/public/pubs/catalog/c193.htm.
Limitations of the current XA implementation are described in
Section 13.3.8.3, “Restrictions on XA Transactions”.
On the client side, there are no special requirements. The XA
interface to a MySQL server consists of SQL statements that begin
with the XA keyword. MySQL client programs must
be able to send SQL statements and to understand the semantics of
the XA statement interface. They do not need to be linked against a
recent client library. Older client libraries also will work.
Among the MySQL Connectors, MySQL Connector/J 5.0.0 and higher supports XA directly, by means of a class interface that handles the XA SQL statement interface for you.
XA supports distributed transactions, that is, the ability to permit multiple separate transactional resources to participate in a global transaction. Transactional resources often are RDBMSs but may be other kinds of resources.
A global transaction involves several actions that are
transactional in themselves, but that all must either complete
successfully as a group, or all be rolled back as a group. In
essence, this extends ACID properties “up a level” so
that multiple ACID transactions can be executed in concert as
components of a global operation that also has ACID properties.
(As with nondistributed transactions,
SERIALIZABLE may be preferred
if your applications are sensitive to read phenomena.
REPEATABLE READ may not be
sufficient for distributed transactions.)
Some examples of distributed transactions:
An application may act as an integration tool that combines a messaging service with an RDBMS. The application makes sure that transactions dealing with message sending, retrieval, and processing that also involve a transactional database all happen in a global transaction. You can think of this as “transactional email.”
An application performs actions that involve different database servers, such as a MySQL server and an Oracle server (or multiple MySQL servers), where actions that involve multiple servers must happen as part of a global transaction, rather than as separate transactions local to each server.
A bank keeps account information in an RDBMS and distributes and receives money through automated teller machines (ATMs). It is necessary to ensure that ATM actions are correctly reflected in the accounts, but this cannot be done with the RDBMS alone. A global transaction manager integrates the ATM and database resources to ensure overall consistency of financial transactions.
Applications that use global transactions involve one or more Resource Managers and a Transaction Manager:
A Resource Manager (RM) provides access to transactional resources. A database server is one kind of resource manager. It must be possible to either commit or roll back transactions managed by the RM.
A Transaction Manager (TM) coordinates the transactions that are part of a global transaction. It communicates with the RMs that handle each of these transactions. The individual transactions within a global transaction are “branches” of the global transaction. Global transactions and their branches are identified by a naming scheme described later.
The MySQL implementation of XA enables a MySQL server to act as a Resource Manager that handles XA transactions within a global transaction. A client program that connects to the MySQL server acts as the Transaction Manager.
To carry out a global transaction, it is necessary to know which components are involved, and bring each component to a point when it can be committed or rolled back. Depending on what each component reports about its ability to succeed, they must all commit or roll back as an atomic group. That is, either all components must commit, or all components must roll back. To manage a global transaction, it is necessary to take into account that any component or the connecting network might fail.
The process for executing a global transaction uses two-phase commit (2PC). This takes place after the actions performed by the branches of the global transaction have been executed.
In the first phase, all branches are prepared. That is, they are told by the TM to get ready to commit. Typically, this means each RM that manages a branch records the actions for the branch in stable storage. The branches indicate whether they are able to do this, and these results are used for the second phase.
In the second phase, the TM tells the RMs whether to commit or roll back. If all branches indicated when they were prepared that they will be able to commit, all branches are told to commit. If any branch indicated when it was prepared that it will not be able to commit, all branches are told to roll back.
In some cases, a global transaction might use one-phase commit (1PC). For example, when a Transaction Manager finds that a global transaction consists of only one transactional resource (that is, a single branch), that resource can be told to prepare and commit at the same time.
To perform XA transactions in MySQL, use the following statements:
XA {START|BEGIN} xid [JOIN|RESUME]
XA END xid [SUSPEND [FOR MIGRATE]]
XA PREPARE xid
XA COMMIT xid [ONE PHASE]
XA ROLLBACK xid
XA RECOVER [CONVERT XID]
For XA
START, the JOIN and
RESUME clauses are recognized but have no
effect.
For XA
END the SUSPEND [FOR MIGRATE]
clause is recognized but has no effect.
Each XA statement begins with the XA keyword,
and most of them require an xid
value. An xid is an XA transaction
identifier. It indicates which transaction the statement applies
to. xid values are supplied by the
client, or generated by the MySQL server. An
xid value has from one to three
parts:
xid: gtrid [, bqual [, formatID ]]
gtrid is a global transaction
identifier, bqual is a branch
qualifier, and formatID is a number
that identifies the format used by the
gtrid and
bqual values. As indicated by the
syntax, bqual and
formatID are optional. The default
bqual value is ''
if not given. The default formatID
value is 1 if not given.
gtrid and
bqual must be string literals, each
up to 64 bytes (not characters) long.
gtrid and
bqual can be specified in several
ways. You can use a quoted string ('ab'), hex
string (X'6162', 0x6162),
or bit value
(b'nnnn').
formatID is an unsigned integer.
The gtrid and
bqual values are interpreted in bytes
by the MySQL server's underlying XA support routines. However,
while an SQL statement containing an XA statement is being
parsed, the server works with some specific character set. To be
safe, write gtrid and
bqual as hex strings.
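For example, an xid whose gtrid is 'ab' and whose bqual is 'cd' can be written entirely in hexadecimal, as in this sketch:

XA START X'6162', X'6364', 1;   -- gtrid = 'ab', bqual = 'cd', formatID = 1
XA END X'6162', X'6364', 1;
XA PREPARE X'6162', X'6364', 1;
XA COMMIT X'6162', X'6364', 1;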
xid values typically are generated by
the Transaction Manager. Values generated by one TM must be
different from values generated by other TMs. A given TM must be
able to recognize its own xid values
in a list of values returned by the
XA
RECOVER statement.
XA START
starts an XA
transaction with the given xid value.
Each XA transaction must have a unique
xid value, so the value must not
currently be used by another XA transaction. Uniqueness is
assessed using the gtrid and
bqual values. All following XA
statements for the XA transaction must be specified using the
same xid value as that given in the
XA
START statement. If you use any of those statements
but specify an xid value that does
not correspond to some existing XA transaction, an error occurs.
One or more XA transactions can be part of the same global
transaction. All XA transactions within a given global
transaction must use the same gtrid
value in the xid value. For this
reason, gtrid values must be globally
unique so that there is no ambiguity about which global
transaction a given XA transaction is part of. The
bqual part of the
xid value must be different for each
XA transaction within a global transaction. (The requirement
that bqual values be different is a
limitation of the current MySQL XA implementation. It is not
part of the XA specification.)
The XA
RECOVER statement returns information for those XA
transactions on the MySQL server that are in the
PREPARED state. (See
Section 13.3.8.2, “XA Transaction States”.) The output includes a row for each
such XA transaction on the server, regardless of which client
started it.
XA
RECOVER requires the
XA_RECOVER_ADMIN privilege. This
privilege requirement prevents users from discovering the XID
values for outstanding prepared XA transactions other than their
own. It does not affect normal commit or rollback of an XA
transaction because the user who started it knows its XID.
XA
RECOVER output rows look like this (for an example
xid value consisting of the parts
'abc', 'def', and
7):
mysql> XA RECOVER;
+----------+--------------+--------------+--------+
| formatID | gtrid_length | bqual_length | data   |
+----------+--------------+--------------+--------+
|        7 |            3 |            3 | abcdef |
+----------+--------------+--------------+--------+
The output columns have the following meanings:
formatID is the
formatID part of the transaction
xid
gtrid_length is the length in bytes of
the gtrid part of the
xid
bqual_length is the length in bytes of
the bqual part of the
xid
data is the concatenation of the
gtrid and
bqual parts of the
xid
XID values may contain nonprintable characters.
XA
RECOVER permits an optional CONVERT
XID clause so that clients can request XID values in
hexadecimal.
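For example, to request the XID data in hexadecimal form:

XA RECOVER CONVERT XID;   -- the data column is returned as a hexadecimal value such as 0x616263646566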
An XA transaction progresses through the following states:
Use XA
START to start an XA transaction and put it in the
ACTIVE state.
For an ACTIVE XA transaction, issue the
SQL statements that make up the transaction, and then issue
an XA
END statement.
XA
END puts the transaction in the
IDLE state.
For an IDLE XA transaction, you can issue
either an XA
PREPARE statement or an XA COMMIT ... ONE
PHASE statement:
XA
PREPARE puts the transaction in the
PREPARED state. An
XA
RECOVER statement at this point will include
the transaction's xid value
in its output, because
XA
RECOVER lists all XA transactions that are in
the PREPARED state.
XA COMMIT ... ONE PHASE prepares and
commits the transaction. The
xid value will not be listed
by XA
RECOVER because the transaction terminates.
For a PREPARED XA transaction, you can
issue an XA
COMMIT statement to commit and terminate the
transaction, or
XA
ROLLBACK to roll back and terminate the
transaction.
Here is a simple XA transaction that inserts a row into a table as part of a global transaction:
mysql> XA START 'xatest';
Query OK, 0 rows affected (0.00 sec)

mysql> INSERT INTO mytable (i) VALUES(10);
Query OK, 1 row affected (0.04 sec)

mysql> XA END 'xatest';
Query OK, 0 rows affected (0.00 sec)

mysql> XA PREPARE 'xatest';
Query OK, 0 rows affected (0.00 sec)

mysql> XA COMMIT 'xatest';
Query OK, 0 rows affected (0.00 sec)
Within the context of a given client connection, XA transactions
and local (non-XA) transactions are mutually exclusive. For
example, if XA
START has been issued to begin an XA transaction, a
local transaction cannot be started until the XA transaction has
been committed or rolled back. Conversely, if a local
transaction has been started with
START
TRANSACTION, no XA statements can be used until the
transaction has been committed or rolled back.
If an XA transaction is in the ACTIVE state,
you cannot issue any statements that cause an implicit commit.
That would violate the XA contract because you could not roll
back the XA transaction. You will receive the following error if
you try to execute such a statement:
ERROR 1399 (XAE07): XAER_RMFAIL: The command cannot be executed when global transaction is in the ACTIVE state
Statements to which the preceding remark applies are listed at Section 13.3.3, “Statements That Cause an Implicit Commit”.
XA transaction support is limited to the
InnoDB storage engine.
For “external XA,” a MySQL server acts as a
Resource Manager and client programs act as Transaction
Managers. For “Internal XA”, storage engines within
a MySQL server act as RMs, and the server itself acts as a TM.
Internal XA support is limited by the capabilities of individual
storage engines. Internal XA is required for handling XA
transactions that involve more than one storage engine. The
implementation of internal XA requires that a storage engine
support two-phase commit at the table handler level, and
currently this is true only for InnoDB.
For XA
START, the JOIN and
RESUME clauses are recognized but have no
effect.
For XA
END the SUSPEND [FOR MIGRATE]
clause is recognized but has no effect.
The requirement that the bqual part
of the xid value be different for
each XA transaction within a global transaction is a limitation
of the current MySQL XA implementation. It is not part of the XA
specification.
An XA transaction is written to the binary log in two parts.
When XA PREPARE is issued, the first part of
the transaction up to XA PREPARE is written
using an initial GTID. An XA_prepare_log_event
is used to identify such transactions in the binary log. When
XA COMMIT or XA ROLLBACK
is issued, a second part of the transaction containing only the
XA COMMIT or XA ROLLBACK
statement is written using a second GTID. Note that the initial
part of the transaction, identified by
XA_prepare_log_event, is not necessarily
followed by its XA COMMIT or XA
ROLLBACK, which can cause interleaved binary logging
of any two XA transactions. The two parts of the XA transaction
can even appear in different binary log files. This means that
an XA transaction in PREPARED state is now
persistent until an explicit XA COMMIT or
XA ROLLBACK statement is issued, ensuring
that XA transactions are compatible with replication.
On a replication slave, immediately after the XA transaction is
prepared, it is detached from the slave applier thread, and can
be committed or rolled back by any thread on the slave. This
means that the same XA transaction can appear in the
events_transactions_current table
with different states on different threads. The
events_transactions_current table
displays the current status of the most recent monitored
transaction event on the thread, and does not update this status
when the thread is idle. So the XA transaction can still be
displayed in the PREPARED state for the
original applier thread, after it has been processed by another
thread. To positively identify XA transactions that are still in
the PREPARED state and need to be recovered,
use the XA
RECOVER statement rather than the Performance Schema
transaction tables.
The following restrictions exist for using XA transactions:
XA transactions are not fully resilient to an unexpected
halt with respect to the binary log. If there is an
unexpected halt while the server is in the middle of
executing an XA PREPARE, XA
COMMIT, XA ROLLBACK, or
XA COMMIT ... ONE PHASE statement, the
server might not be able to recover to a correct state,
leaving the server and the binary log in an inconsistent
state. In this situation, the binary log might either
contain extra XA transactions that are not applied, or miss
XA transactions that are applied. Also, if GTIDs are
enabled, after recovery
@@GLOBAL.GTID_EXECUTED might not
correctly describe the transactions that have been applied.
Note that if an unexpected halt occurs before XA
PREPARE, between XA PREPARE and
XA COMMIT (or XA
ROLLBACK), or after XA COMMIT
(or XA ROLLBACK), the server and binary
log are correctly recovered and taken to a consistent state.
The use of replication filters or binary log filters in
combination with XA transactions is not supported. Filtering
of tables could cause an XA transaction to be empty on a
replication slave, and empty XA transactions are not
supported. Also, with the settings
master_info_repository=TABLE
and
relay_log_info_repository=TABLE
on a replication slave, which became the defaults in MySQL
8.0, the internal state of the data engine transaction is
changed following a filtered XA transaction, and can become
inconsistent with the replication transaction context state.
The error
ER_XA_REPLICATION_FILTERS is
logged whenever an XA transaction is impacted by a
replication filter, whether or not the transaction was empty
as a result. If the transaction is not empty, the
replication slave is able to continue running, but you
should take steps to discontinue the use of replication
filters with XA transactions in order to avoid potential
issues. If the transaction is empty, the replication slave
stops. In that event, the replication slave might be in an
undetermined state in which the consistency of the
replication process might be compromised. In particular, the
gtid_executed set on a slave of the slave
might be inconsistent with that on the master. To resolve
this situation, isolate the master and stop all replication,
then check GTID consistency across the replication topology.
Undo the XA transaction that generated the error message,
then restart replication.
XA transactions are considered unsafe for statement-based
replication. If two XA transactions committed in parallel on
the master are being prepared on the slave in the inverse
order, locking dependencies can occur that cannot be safely
resolved, and it is possible for replication to fail with
deadlock on the slave. This situation can occur for a
single-threaded or multithreaded replication slave. When
binlog_format=STATEMENT is
set, a warning is issued for DML statements inside XA
transactions. When
binlog_format=MIXED or
binlog_format=ROW is set,
DML statements inside XA transactions are logged using
row-based replication, and the potential issue is not
present.
Prior to MySQL 5.7.7, XA transactions were not compatible with
replication at all. This was because an XA transaction that
was in PREPARED state would be rolled back
on clean server shutdown or client disconnect. Similarly, an
XA transaction that was in PREPARED state
would still exist in PREPARED state in case
the server was shut down abnormally and then started again, but
the contents of the transaction could not be written to the
binary log. In both of these situations the XA transaction
could not be replicated correctly.
Replication can be controlled through the SQL interface using the statements described in this section. Statements are split into a group which controls master servers, a group which controls slave servers, and a group which can be applied to any replication servers.
This section discusses statements for managing master replication servers. Section 13.4.2, “SQL Statements for Controlling Slave Servers”, discusses statements for managing slave servers.
In addition to the statements described here, the following
SHOW statements are used with
master servers in replication. For information about these
statements, see Section 13.7.7, “SHOW Statements”.
PURGE { BINARY | MASTER } LOGS {
TO 'log_name'
| BEFORE datetime_expr
}
The binary log is a set of files that contain information about data modifications made by the MySQL server. The log consists of a set of binary log files, plus an index file (see Section 5.4.4, “The Binary Log”).
The PURGE BINARY LOGS statement
deletes all the binary log files listed in the log index file
prior to the specified log file name or date.
BINARY and MASTER are
synonyms. Deleted log files also are removed from the list
recorded in the index file, so that the given log file becomes
the first in the list.
PURGE BINARY LOGS requires the
BINLOG_ADMIN privilege. This
statement has no effect if the server was not started with the
--log-bin option to enable binary
logging.
Examples:
PURGE BINARY LOGS TO 'mysql-bin.010';
PURGE BINARY LOGS BEFORE '2019-04-02 22:46:26';
The BEFORE variant's datetime_expr argument should evaluate to a DATETIME value (a value in 'YYYY-MM-DD hh:mm:ss' format).
This statement is safe to run while slaves are replicating. You need not stop them. If you have an active slave that currently is reading one of the log files you are trying to delete, this statement does not delete the log file that is in use or any log files later than that one, but it deletes any earlier log files. A warning message is issued in this situation. However, if a slave is not connected and you happen to purge one of the log files it has yet to read, the slave will be unable to replicate after it reconnects.
To safely purge binary log files, follow this procedure:
On each slave server, use SHOW SLAVE
STATUS to check which log file it is reading.
Obtain a listing of the binary log files on the master
server with SHOW BINARY LOGS.
Determine the earliest log file among all the slaves. This is the target file. If all the slaves are up to date, this is the last log file on the list.
Make a backup of all the log files you are about to delete. (This step is optional, but always advisable.)
Purge all log files up to but not including the target file.
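For illustration, assuming hypothetical file names, the procedure might reduce to the following statements, where 'mysql-bin.024' stands for the earliest log file still needed by any slave:

SHOW SLAVE STATUS;                     -- on each slave; note the Relay_Master_Log_File value
SHOW BINARY LOGS;                      -- on the master; list all binary log files
PURGE BINARY LOGS TO 'mysql-bin.024';  -- on the master; removes all files earlier than the target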
PURGE BINARY LOGS TO and PURGE BINARY LOGS BEFORE both fail with an error when binary log files listed in the .index file have been removed from the system by some other means (such as using rm on Linux). (Bug #18199, Bug #18453) To handle such errors, edit the .index file (which is a simple text file) manually to ensure that it lists only the binary log files that are actually present, then run the failed PURGE BINARY LOGS statement again.
Binary log files are automatically removed after the server's
binary log expiration period. Removal of the files can take
place at startup and when the binary log is flushed. The default
binary log expiration period is 30 days. You can specify an
alternative expiration period using the
binlog_expire_logs_seconds
system variable. If you are using replication, you should
specify an expiration period that is no lower than the maximum
amount of time your slaves might lag behind the master.
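For example, to shorten the expiration period to 7 days (604800 seconds), you might set the variable as shown below; the value is illustrative, and SET PERSIST could be used instead of SET GLOBAL if the setting should survive a server restart:

SET GLOBAL binlog_expire_logs_seconds = 604800;  -- expire binary log files after 7 days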
RESET MASTER [TO binary_log_file_index_number]
Use this statement with caution to ensure you do not lose any wanted binary log file data and GTID execution history.
RESET MASTER requires the
RELOAD privilege.
For a server where binary logging is enabled
(log_bin is
ON), RESET MASTER deletes
all existing binary log files and resets the binary log index
file, resetting the server to its state before binary logging
was started. A new empty binary log file is created so that
binary logging can be restarted.
For a server where GTIDs are in use
(gtid_mode is
ON), issuing RESET MASTER
resets the GTID execution history. The value of the
gtid_purged system variable is
set to an empty string (''), the global value
(but not the session value) of the
gtid_executed system variable
is set to an empty string, and the
mysql.gtid_executed table is cleared (see
mysql.gtid_executed Table). If the
GTID-enabled server has binary logging enabled,
RESET MASTER also resets the
binary log as described above. Note that
RESET MASTER is the method to
reset the GTID execution history even if the GTID-enabled server
is a replication slave where binary logging is disabled;
RESET SLAVE has no effect on the
GTID execution history. For more information on resetting the
GTID execution history, see
Resetting the GTID Execution History.
Issuing RESET MASTER without the optional
TO clause deletes all binary log files listed
in the index file, resets the binary log index file to be empty,
and creates a new binary log file starting at
1. Use the optional TO
clause to start the binary log file index from a number other
than 1 after the reset.
Using RESET MASTER with the
TO clause to specify a binary log file index
number to start from simplifies failover by providing a single
statement alternative to the FLUSH BINARY
LOGS and
PURGE BINARY
LOGS TO statements. Check that you are using a
reasonable value for the index number. If you enter an incorrect
value, you can correct this by issuing another
RESET MASTER statement with or
without the TO clause. If you do not correct
a value that is out of range, the server cannot be restarted.
The following example demonstrates TO clause
usage:
RESET MASTER TO 1234;
SHOW BINARY LOGS;
+-------------------+-----------+-----------+
| Log_name          | File_size | Encrypted |
+-------------------+-----------+-----------+
| master-bin.001234 |       154 | No        |
+-------------------+-----------+-----------+
The effects of RESET MASTER
without the TO clause differ from those of
PURGE BINARY LOGS in 2 key
ways:
RESET MASTER removes
all binary log files that are listed
in the index file, leaving only a single, empty binary log
file with a numeric suffix of .000001,
whereas the numbering is not reset by
PURGE BINARY LOGS.
RESET MASTER is
not intended to be used while any
replication slaves are running. The behavior of
RESET MASTER when used
while slaves are running is undefined (and thus
unsupported), whereas PURGE BINARY
LOGS may be safely used while replication slaves
are running.
RESET MASTER without the
TO clause can prove useful when you first set
up a master and slave, so that you can verify the setup as
follows:
Start the master and slave, and start replication (see Section 17.1.2, “Setting Up Binary Log File Position Based Replication”).
Execute a few test queries on the master.
Check that the queries were replicated to the slave.
When replication is running correctly, issue
STOP SLAVE followed by
RESET SLAVE on the slave,
then verify that no unwanted data from the test queries
exists on the slave.
Issue RESET MASTER on the
master to clean up the test queries.
After verifying the setup, resetting the master and slave and ensuring that no unwanted data or binary log files generated by testing remain on the master or slave, you can start the slave and begin replicating.
SET sql_log_bin = {OFF|ON}
The sql_log_bin variable
controls whether logging to the binary log is enabled for the
current session (assuming that the binary log itself is
enabled). The default value is ON. To disable
or enable binary logging for the current session, set the
session sql_log_bin variable to
OFF or ON.
Set this variable to OFF for a session to
temporarily disable binary logging while making changes to the
master you do not want replicated to the slave.
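For illustration, a maintenance change that should not be replicated might be wrapped as follows; the table name and statement are placeholders:

SET sql_log_bin = OFF;   -- suppress binary logging for this session only
DELETE FROM test.local_audit_tmp WHERE created < NOW() - INTERVAL 30 DAY;
SET sql_log_bin = ON;    -- re-enable binary logging for the session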
Setting the session value of this system variable is a restricted operation. The session user must have privileges sufficient to set restricted session variables. See Section 5.1.9.1, “System Variable Privileges”.
It is not possible to set the session value of
sql_log_bin within a
transaction or subquery.
Setting this variable to OFF
prevents new GTIDs from being assigned to transactions in the
binary log. If you are using GTIDs for replication,
this means that even when binary logging is later enabled again,
the GTIDs written into the log from this point do not account
for any transactions that occurred in the meantime, so in effect
those transactions are lost.
mysqldump adds a SET
@@SESSION.sql_log_bin=0 statement to a dump file from
a server where GTIDs are in use, which disables binary logging
while the dump file is being reloaded. The statement prevents
new GTIDs from being generated and assigned to the transactions
in the dump file as they are executed, so that the original
GTIDs for the transactions are used.
This section discusses statements for managing slave replication servers. Section 13.4.1, “SQL Statements for Controlling Master Servers”, discusses statements for managing master servers.
In addition to the statements described here,
SHOW SLAVE STATUS and
SHOW RELAYLOG EVENTS are also used
with replication slaves. For information about these statements,
see Section 13.7.7.34, “SHOW SLAVE STATUS Statement”, and
Section 13.7.7.32, “SHOW RELAYLOG EVENTS Statement”.
CHANGE MASTER TO option [, option] ... [ channel_option ]

option: {
    MASTER_BIND = 'interface_name'
  | MASTER_HOST = 'host_name'
  | MASTER_USER = 'user_name'
  | MASTER_PASSWORD = 'password'
  | MASTER_PORT = port_num
  | PRIVILEGE_CHECKS_USER = {'account' | NULL}
  | REQUIRE_ROW_FORMAT = {0|1}
  | REQUIRE_TABLE_PRIMARY_KEY_CHECK = {STREAM | ON | OFF}
  | MASTER_CONNECT_RETRY = interval
  | MASTER_RETRY_COUNT = count
  | MASTER_DELAY = interval
  | MASTER_HEARTBEAT_PERIOD = interval
  | MASTER_LOG_FILE = 'master_log_name'
  | MASTER_LOG_POS = master_log_pos
  | MASTER_AUTO_POSITION = {0|1}
  | RELAY_LOG_FILE = 'relay_log_name'
  | RELAY_LOG_POS = relay_log_pos
  | MASTER_COMPRESSION_ALGORITHMS = 'value'
  | MASTER_ZSTD_COMPRESSION_LEVEL = level
  | MASTER_SSL = {0|1}
  | MASTER_SSL_CA = 'ca_file_name'
  | MASTER_SSL_CAPATH = 'ca_directory_name'
  | MASTER_SSL_CERT = 'cert_file_name'
  | MASTER_SSL_CRL = 'crl_file_name'
  | MASTER_SSL_CRLPATH = 'crl_directory_name'
  | MASTER_SSL_KEY = 'key_file_name'
  | MASTER_SSL_CIPHER = 'cipher_list'
  | MASTER_SSL_VERIFY_SERVER_CERT = {0|1}
  | MASTER_TLS_VERSION = 'protocol_list'
  | MASTER_TLS_CIPHERSUITES = 'ciphersuite_list'
  | MASTER_PUBLIC_KEY_PATH = 'key_file_name'
  | GET_MASTER_PUBLIC_KEY = {0|1}
  | IGNORE_SERVER_IDS = (server_id_list)
}

channel_option:
    FOR CHANNEL channel

server_id_list:
    [server_id [, server_id] ... ]
CHANGE MASTER TO changes the
parameters that the slave server uses for connecting to the
master server, for reading the master binary log, and reading
the slave relay log. It also updates the contents of the master
info and relay log info repositories (see
Section 17.2.4, “Replication Relay and Status Logs”). CHANGE
MASTER TO requires the
REPLICATION_SLAVE_ADMIN privilege
(or the deprecated SUPER
privilege).
You can issue CHANGE MASTER TO statements on
a running slave without first stopping it, depending on the
states of the slave SQL thread and slave I/O thread. The rules
governing such use are provided later in this section.
When using a multithreaded slave (in other words
slave_parallel_workers is
greater than 0), stopping the slave can cause
“gaps” in the sequence of transactions that have
been executed from the relay log, regardless of whether the
slave was stopped intentionally or otherwise. When such gaps
exist, issuing CHANGE MASTER TO
fails. The solution in this situation is to issue
START SLAVE UNTIL
SQL_AFTER_MTS_GAPS which ensures that the gaps are
closed.
The optional FOR CHANNEL channel clause enables you to name which replication channel the statement applies to. Providing a FOR CHANNEL channel clause applies the CHANGE MASTER TO statement to a specific replication channel, and is used to add a new channel or modify an existing channel. For example, to add a new channel called channel2:

CHANGE MASTER TO MASTER_HOST='host1', MASTER_PORT=3002 FOR CHANNEL 'channel2';

If no clause is named and no extra channels exist, the statement applies to the default channel.

When using multiple replication channels, if a CHANGE MASTER TO statement does not name a channel using a FOR CHANNEL channel clause, an error occurs. See Section 17.2.3, “Replication Channels” for more information.
Options not specified retain their value, except as indicated in the following discussion. Thus, in most cases, there is no need to specify options that do not change.
MASTER_HOST, MASTER_USER,
MASTER_PASSWORD, and
MASTER_PORT provide information to the slave
about how to connect to its master:
MASTER_HOST and
MASTER_PORT are the host name (or IP
address) of the master host and its TCP/IP port.
Replication cannot use Unix socket files. You must be able to connect to the master MySQL server using TCP/IP.
If you specify the MASTER_HOST or
MASTER_PORT option, the slave assumes
that the master server is different from before (even if the
option value is the same as its current value.) In this
case, the old values for the master binary log file name and
position are considered no longer applicable, so if you do
not specify MASTER_LOG_FILE and
MASTER_LOG_POS in the statement,
MASTER_LOG_FILE='' and
MASTER_LOG_POS=4 are silently appended to
it.
Setting MASTER_HOST='' (that is, setting
its value explicitly to an empty string) is
not the same as not setting
MASTER_HOST at all. Trying to set
MASTER_HOST to an empty string fails with
an error.
Values used for MASTER_HOST and other
CHANGE MASTER TO options are checked for
linefeed (\n or 0x0A)
characters; the presence of such characters in these values
causes the statement to fail with
ER_MASTER_INFO. (Bug
#11758581, Bug #50801)
MASTER_USER and
MASTER_PASSWORD are the user name and
password of the account to use for connecting to the master.
If you specify MASTER_PASSWORD,
MASTER_USER is also required. The
password used for a MySQL Replication slave account in a
CHANGE MASTER TO statement is limited to
32 characters in length; trying to use a password of more
than 32 characters causes CHANGE MASTER
TO to fail.
It is possible to set an empty user name by specifying
MASTER_USER='', but the replication
channel cannot be started with an empty user name. In
releases before MySQL 8.0.21, only set an empty
MASTER_USER user name if you need to
clear previously used credentials from the replication
slave's repositories for security purposes. Do not use the
channel afterwards, due to a bug in these releases that can
substitute a default user name if an empty user name is read
from the repositories (for example, during an automatic
restart of a Group Replication channel). From MySQL 8.0.21,
it is valid to set an empty MASTER_USER
user name if you always provide user credentials using the
START SLAVE statement or
START GROUP_REPLICATION
statement that starts the replication channel. This approach
means that the replication channel always needs operator
intervention to restart, but the user credentials are not
recorded in the replication slave's repositories.
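As an illustration of the approach available from MySQL 8.0.21, the following sketch configures the channel without storing credentials and then supplies them each time the channel is started; the host, channel, and account details are placeholders:

CHANGE MASTER TO MASTER_HOST='master1.example.com', MASTER_USER='' FOR CHANNEL 'channel_1';
START SLAVE USER='repl_user' PASSWORD='password' FOR CHANNEL 'channel_1';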
The text of a running CHANGE MASTER
TO statement, including values for
MASTER_USER and
MASTER_PASSWORD, can be seen in the
output of a concurrent SHOW
PROCESSLIST statement. (The complete text of a
START SLAVE statement is also
visible to SHOW PROCESSLIST.)
REQUIRE_ROW_FORMAT (available as of MySQL
8.0.19) permits only row-based replication events to be
processed by the replication channel. This option prevents the
replication applier from taking actions such as creating
temporary tables and executing LOAD DATA
INFILE requests, which increases the security of the
channel. Group Replication channels are automatically created
with REQUIRE_ROW_FORMAT set, and you cannot
change the option for those channels. For more information, see
Section 17.3.3, “Replication Privilege Checks”.
PRIVILEGE_CHECKS_USER (available as of MySQL
8.0.18) names a user account that supplies a security context
for the specified channel. NULL, which is the
default, means no security context is used. The use of row-based
binary logging is strongly recommended when
PRIVILEGE_CHECKS_USER is set, and you can set
REQUIRE_ROW_FORMAT to enforce this. For
example, to start privilege checks on the channel
channel_1 on a running replication slave,
issue the following statements:
mysql> STOP SLAVE FOR CHANNEL 'channel_1';
mysql> CHANGE MASTER TO
         PRIVILEGE_CHECKS_USER = 'priv_repl'@'%.example.com',
         REQUIRE_ROW_FORMAT = 1
       FOR CHANNEL 'channel_1';
mysql> START SLAVE FOR CHANNEL 'channel_1';
The user name and host name for the user account must follow the
syntax described in Section 6.2.4, “Specifying Account Names”, and the
user must not be an anonymous user (with a blank user name) or
the CURRENT_USER. The account must have the
REPLICATION_APPLIER privilege,
plus the required privileges to execute the transactions
replicated on the channel. For details of the privileges
required by the account, see
Section 17.3.3, “Replication Privilege Checks”. When you restart
the replication channel, the privilege checks are applied from
that point on. If you do not specify a channel and no other
channels exist, the statement is applied to the default channel.
REQUIRE_TABLE_PRIMARY_KEY_CHECK (available as
of MySQL 8.0.20) enables a replication slave to select its own
policy for primary key checks. When the option is set to
ON for a replication channel, the slave
always uses the value ON for the
sql_require_primary_key system
variable in replication operations, requiring a primary key.
When the option is set to OFF, the slave
always uses the value OFF for the
sql_require_primary_key system
variable in replication operations, so that a primary key is
never required, even if the master required one. When the
REQUIRE_TABLE_PRIMARY_KEY_CHECK option is set
to STREAM, which is the default, the slave
uses whatever value is replicated from the master for each
transaction.
For multisource replication, setting
REQUIRE_TABLE_PRIMARY_KEY_CHECK to
ON or OFF enables a
slave to normalize behavior across the replication channels
for different masters, and keep a consistent setting for the
sql_require_primary_key
system variable. Using ON safeguards
against the accidental loss of primary keys when multiple
masters update the same set of tables. Using
OFF allows masters that can manipulate
primary keys to work alongside masters that cannot.
When PRIVILEGE_CHECKS_USER is set,
setting REQUIRE_TABLE_PRIMARY_KEY_CHECK
to ON or OFF means
that the user account does not need session administration
level privileges to set restricted session variables, which
are required to change the value of
sql_require_primary_key to
match the master's setting for each transaction. For more
information, see
Section 17.3.3, “Replication Privilege Checks”.
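For illustration, to make a replication channel always require primary keys regardless of what the master replicates, you might issue statements such as the following on a stopped channel; the channel name is a placeholder:

STOP SLAVE FOR CHANNEL 'channel_1';
CHANGE MASTER TO REQUIRE_TABLE_PRIMARY_KEY_CHECK = ON FOR CHANNEL 'channel_1';
START SLAVE FOR CHANNEL 'channel_1';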
MASTER_COMPRESSION_ALGORITHMS and
MASTER_ZSTD_COMPRESSION_LEVEL (available as
of MySQL 8.0.18) enable control over the use of compression for
connections to the master:
MASTER_COMPRESSION_ALGORITHMS specifies
the permitted compression algorithms. The available
algorithms are the same as for the
protocol_compression_algorithms
system variable. The default value is
uncompressed.
The value of
MASTER_COMPRESSION_ALGORITHMS applies
only if the
slave_compressed_protocol
system variable is disabled. If
slave_compressed_protocol
is enabled, it takes precedence over
MASTER_COMPRESSION_ALGORITHMS and
connections to the master use zlib
compression if both master and slave support that algorithm.
MASTER_ZSTD_COMPRESSION_LEVEL is the
compression level to use for connections that use the
zstd compression algorithm. The permitted
levels are from 1 to 22, with larger values indicating
increasing levels of compression. The default
zstd compression level is 3. The
compression level setting has no effect on connections that
do not use zstd compression.
For more information, see Section 4.2.7, “Connection Compression Control”.
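For illustration, to permit zstd compression at a higher compression level for the connection to the master (assuming slave_compressed_protocol is disabled), a statement like the following might be used; the level shown is arbitrary:

CHANGE MASTER TO MASTER_COMPRESSION_ALGORITHMS = 'zstd', MASTER_ZSTD_COMPRESSION_LEVEL = 7;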
Binary log transaction compression (available as of MySQL
8.0.20), which is activated by the
binlog_transaction_compression
system variable, can also be used to save bandwidth. If you do
this in combination with connection compression, connection
compression has less opportunity to act on the data, but can
still compress headers and those events and transaction payloads
that are uncompressed. For more information on binary log
transaction compression, see
Section 5.4.4.5, “Binary Log Transaction Compression”.
The MASTER_SSL_xxx options and the MASTER_TLS_xxx options specify how the slave uses encryption and ciphers to secure the replication connection. These options can be changed even on slaves that are compiled without SSL support. They are saved to the master info repository, but are ignored if the slave does not have SSL support enabled. The MASTER_SSL_xxx and MASTER_TLS_xxx options perform the same functions as the --ssl-xxx and --tls-xxx client options described in Command Options for Encrypted Connections. The correspondence between the two sets of options, and the use of the MASTER_SSL_xxx and MASTER_TLS_xxx options to set up a secure connection, is explained in Section 17.3.1, “Setting Up Replication to Use Encrypted Connections”.
To connect to the replication master using a user account that
authenticates with the
caching_sha2_password plugin, you must
either set up a secure connection as described in
Section 17.3.1, “Setting Up Replication to Use Encrypted Connections”,
or enable the unencrypted connection to support password
exchange using an RSA key pair. The
caching_sha2_password authentication plugin
is the default for new users created from MySQL 8.0 (for
details, see
Section 6.4.1.2, “Caching SHA-2 Pluggable Authentication”). If
the user account that you create or use for replication (as
specified by the MASTER_USER option) uses
this authentication plugin, and you are not using a secure
connection, you must enable RSA key pair-based password
exchange for a successful connection.
To enable RSA key pair-based password exchange, specify either
the MASTER_PUBLIC_KEY_PATH or the
GET_MASTER_PUBLIC_KEY=1 option. Either of
these options provides the RSA public key to the slave:
MASTER_PUBLIC_KEY_PATH indicates the path
name to a file containing a slave-side copy of the public
key required by the master for RSA key pair-based password
exchange. The file must be in PEM format. This option
applies to slaves that authenticate with the
sha256_password or
caching_sha2_password authentication
plugin. (For sha256_password,
MASTER_PUBLIC_KEY_PATH can be used only
if MySQL was built using OpenSSL.)
GET_MASTER_PUBLIC_KEY indicates whether
to request from the master the public key required for RSA
key pair-based password exchange. This option applies to
slaves that authenticate with the
caching_sha2_password authentication
plugin. For connections by accounts that authenticate using
this plugin, the master does not send the public key unless
requested, so it must be requested or specified in the
client. If MASTER_PUBLIC_KEY_PATH is
given and specifies a valid public key file, it takes
precedence over GET_MASTER_PUBLIC_KEY.
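For illustration, with a replication account that authenticates using caching_sha2_password over an unencrypted connection, either of the following enables RSA key pair-based password exchange; the file path shown is a placeholder:

CHANGE MASTER TO GET_MASTER_PUBLIC_KEY = 1;
CHANGE MASTER TO MASTER_PUBLIC_KEY_PATH = '/var/lib/mysql-keys/master_public_key.pem';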
The MASTER_HEARTBEAT_PERIOD,
MASTER_CONNECT_RETRY, and
MASTER_RETRY_COUNT options control how the
slave recognizes that the connection to the master has been lost
and makes attempts to reconnect.
The slave_net_timeout
system variable specifies the number of seconds that the
slave waits for either more data or a heartbeat signal from
the master, before the slave considers the connection
broken, aborts the read, and tries to reconnect. The default
value is 60 seconds (one minute).
The heartbeat interval, which stops the connection timeout
occurring in the absence of data if the connection is still
good, is controlled by the
MASTER_HEARTBEAT_PERIOD option. A
heartbeat signal is sent to the slave after that number of
seconds, and the waiting period is reset whenever the
master's binary log is updated with an event. Heartbeats are
therefore sent by the master only if there are no unsent
events in the binary log file for a period longer than this.
The heartbeat interval value, interval, is a decimal value having the range 0 to 4294967 seconds and
a resolution in milliseconds; the smallest nonzero value is
0.001. Setting interval to 0
disables heartbeats altogether. The heartbeat interval
defaults to half the value of the
slave_net_timeout system
variable. It is recorded in the master info log and shown in
the
replication_connection_configuration
Performance Schema table. Issuing RESET
SLAVE resets the heartbeat interval to the default
value.
Note that a change to the value or default setting of
slave_net_timeout does not
automatically change the heartbeat interval, whether that
has been set explicitly or is using a previously calculated
default. A warning is issued if you set
@@GLOBAL.slave_net_timeout to a value
less than that of the current heartbeat interval. If
slave_net_timeout is
changed, you must also issue CHANGE
MASTER TO to adjust the heartbeat interval to an
appropriate value so that the heartbeat signal occurs before
the connection timeout. If you do not do this, the heartbeat
signal has no effect, and if no data is received from the
master, the slave can make repeated reconnection attempts,
creating zombie dump threads.
If the slave does need to reconnect, the first retry occurs
immediately after the timeout.
MASTER_CONNECT_RETRY specifies the
interval between reconnection attempts, and
MASTER_RETRY_COUNT limits the number of
reconnection attempts. If both the default settings are
used, the slave waits 60 seconds between reconnection
attempts (MASTER_CONNECT_RETRY=60), and
keeps attempting to reconnect at this rate for 60 days
(MASTER_RETRY_COUNT=86400). These values
are recorded in the master info log and shown in the
replication_connection_configuration
Performance Schema table.
MASTER_RETRY_COUNT supersedes the
--master-retry-count server
startup option.
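For illustration, the following sketch halves the connection timeout and adjusts the heartbeat and retry settings to match; all values are arbitrary examples:

STOP SLAVE;
SET GLOBAL slave_net_timeout = 30;              -- consider the connection broken after 30 seconds
CHANGE MASTER TO MASTER_HEARTBEAT_PERIOD = 15,  -- heartbeat at half the timeout
                 MASTER_CONNECT_RETRY = 30,     -- wait 30 seconds between reconnection attempts
                 MASTER_RETRY_COUNT = 172800;   -- keep retrying at this rate for up to 60 days
START SLAVE;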
MASTER_DELAY specifies how many seconds
behind the master the slave must lag. An event received from the
master is not executed until at least
interval seconds later than its
execution on the master. The default is 0. An error occurs if
interval is not a nonnegative integer
in the range from 0 to 2^31−1.
For more information, see Section 17.4.10, “Delayed Replication”.
A CHANGE MASTER TO statement employing the
MASTER_DELAY option can be executed on a
running slave when the slave SQL thread is stopped.
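For illustration, the following sketch delays the slave by one hour; the delay value is arbitrary:

STOP SLAVE SQL_THREAD;
CHANGE MASTER TO MASTER_DELAY = 3600;   -- apply events no sooner than 3600 seconds after the master
START SLAVE SQL_THREAD;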
MASTER_BIND is for use on replication slaves
having multiple network interfaces, and determines which of the
slave's network interfaces is chosen for connecting to the
master.
The address configured with this option, if any, can be seen in
the Master_Bind column of the output from
SHOW SLAVE STATUS. In the master
info repository table
mysql.slave_master_info, the value can be
seen as the Master_bind column.
The ability to bind a replication slave to a specific network interface is also supported by NDB Cluster.
MASTER_LOG_FILE and
MASTER_LOG_POS are the coordinates at which
the slave I/O thread should begin reading from the master the
next time the thread starts. RELAY_LOG_FILE
and RELAY_LOG_POS are the coordinates at
which the slave SQL thread should begin reading from the relay
log the next time the thread starts. If you specify either of
MASTER_LOG_FILE or
MASTER_LOG_POS, you cannot specify
RELAY_LOG_FILE or
RELAY_LOG_POS. If you specify either of
MASTER_LOG_FILE or
MASTER_LOG_POS, you also cannot specify
MASTER_AUTO_POSITION = 1 (described later in
this section). If neither of MASTER_LOG_FILE
or MASTER_LOG_POS is specified, the slave
uses the last coordinates of the slave SQL
thread before CHANGE MASTER
TO was issued. This ensures that there is no
discontinuity in replication, even if the slave SQL thread was
late compared to the slave I/O thread, when you merely want to
change, say, the password to use.
RELAY_LOG_FILE can use either an absolute or
relative path, and uses the same base name as
MASTER_LOG_FILE. A CHANGE MASTER
TO statement employing
RELAY_LOG_FILE,
RELAY_LOG_POS, or both options can be
executed on a running slave when the slave SQL thread is
stopped. Relay logs are preserved if at least one of the slave
SQL thread and the slave I/O thread is running. If both threads
are stopped, all relay log files are deleted unless at least one
of RELAY_LOG_FILE or
RELAY_LOG_POS is specified. Note that the
Group Replication applier channel
(group_replication_applier) has no I/O
thread, only a SQL thread. For this channel, the relay logs are
not preserved when the SQL thread is stopped.
When MASTER_AUTO_POSITION = 1 is used with
CHANGE MASTER TO, the slave attempts to
connect to the master using the GTID-based replication protocol.
This option can be used with CHANGE MASTER TO
only if both the slave SQL and slave I/O threads are stopped.
Both the slave and the master must have GTIDs enabled
(GTID_MODE=ON,
ON_PERMISSIVE, or
OFF_PERMISSIVE on the slave, and
GTID_MODE=ON on the master).
Auto-positioning is used for the connection, so the coordinates
represented by MASTER_LOG_FILE and
MASTER_LOG_POS are not used, and the use of
either or both of these options together with
MASTER_AUTO_POSITION = 1 causes an error. If
multi-source replication is enabled on the slave, you need to
set the MASTER_AUTO_POSITION = 1 option for
each applicable replication channel.
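For illustration, switching an existing slave to GTID auto-positioning might be sketched as follows, assuming GTIDs are already enabled on both servers:

STOP SLAVE;                                  -- both the SQL and I/O threads must be stopped
CHANGE MASTER TO MASTER_AUTO_POSITION = 1;
START SLAVE;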
With MASTER_AUTO_POSITION = 1 set, in the
initial connection handshake, the slave sends a GTID set
containing the transactions that it has already received,
committed, or both. The master responds by sending all
transactions recorded in its binary log whose GTID is not
included in the GTID set sent by the slave. This exchange
ensures that the master only sends the transactions with a GTID
that the slave has not already recorded or committed. If the
slave receives transactions from more than one master, as in the
case of a diamond topology, the auto-skip function ensures that
the transactions are not applied twice. For details of how the
GTID set sent by the slave is computed, see
Section 17.1.3.3, “GTID Auto-Positioning”.
If any of the transactions that should be sent by the master
have been purged from the master's binary log, or added to the
set of GTIDs in the gtid_purged
system variable by another method, the master sends the error
ER_MASTER_HAS_PURGED_REQUIRED_GTIDS to
the slave, and replication does not start. The GTIDs of the
missing purged transactions are identified and listed in the
master's error log in the warning message
ER_FOUND_MISSING_GTIDS. Also, if during
the exchange of transactions it is found that the slave has
recorded or committed transactions with the master's UUID in the
GTID, but the master itself has not committed them, the master
sends the error
ER_SLAVE_HAS_MORE_GTIDS_THAN_MASTER to
the slave and replication does not start. For information on how
to handle these situations, see
Section 17.1.3.3, “GTID Auto-Positioning”.
You can see whether replication is running with auto-positioning
enabled by checking the Performance Schema
replication_connection_status table
or the output of SHOW SLAVE
STATUS. Disabling the
MASTER_AUTO_POSITION option again makes the
slave revert to file-based replication, in which case you must
also specify one or both of the
MASTER_LOG_FILE or
MASTER_LOG_POS options.
IGNORE_SERVER_IDS takes a comma-separated
list of 0 or more server IDs. Events originating from the
corresponding servers are ignored, with the exception of log
rotation and deletion events, which are still recorded in the
relay log.
In circular replication, the originating server normally acts as
the terminator of its own events, so that they are not applied
more than once. Thus, this option is useful in circular
replication when one of the servers in the circle is removed.
Suppose that you have a circular replication setup with 4
servers, having server IDs 1, 2, 3, and 4, and server 3 fails.
When bridging the gap by starting replication from server 2 to
server 4, you can include IGNORE_SERVER_IDS =
(3) in the CHANGE MASTER
TO statement that you issue on server 4 to tell it to
use server 2 as its master instead of server 3. Doing so causes
it to ignore and not to propagate any statements that originated
with the server that is no longer in use.
If IGNORE_SERVER_IDS contains the
server's own ID and the server was started with the
--replicate-same-server-id option
enabled, an error results.
When global transaction identifiers (GTIDs) are used for
replication, transactions that have already been applied are
automatically ignored, so the
IGNORE_SERVER_IDS function is not required
and is deprecated. If
gtid_mode=ON is set for the
server, a deprecation warning is issued if you include the
IGNORE_SERVER_IDS option in a
CHANGE MASTER TO statement.
The master info repository and the output of
SHOW SLAVE STATUS provide the
list of servers that are currently ignored. For more
information, see Section 17.2.4.2, “Slave Status Logs”, and
Section 13.7.7.34, “SHOW SLAVE STATUS Statement”.
If a CHANGE MASTER TO statement
is issued without any IGNORE_SERVER_IDS
option, any existing list is preserved. To clear the list of
ignored servers, it is necessary to use the option with an empty
list:
CHANGE MASTER TO IGNORE_SERVER_IDS = ();
RESET SLAVE ALL clears
IGNORE_SERVER_IDS.
A deprecation warning is issued if SET
GTID_MODE=ON is issued when any channel has existing
server IDs set with IGNORE_SERVER_IDS.
Before starting GTID-based replication, check for and clear
all ignored server ID lists on the servers involved. The
SHOW SLAVE STATUS statement displays the list of ignored server IDs, if there is one. If you do
receive the deprecation warning, you can still clear a list
after gtid_mode=ON is set by
issuing a CHANGE MASTER TO
statement containing the IGNORE_SERVER_IDS
option with an empty list.
Invoking CHANGE MASTER TO causes
the previous values for MASTER_HOST,
MASTER_PORT,
MASTER_LOG_FILE, and
MASTER_LOG_POS to be written to the error
log, along with other information about the slave's state
prior to execution.
CHANGE MASTER TO causes an implicit commit of
an ongoing transaction. See Section 13.3.3, “Statements That Cause an Implicit Commit”.
From MySQL 5.7, the strict requirement to execute
STOP SLAVE prior to issuing any
CHANGE MASTER TO statement (and
START SLAVE afterward) is
removed. Instead of depending on whether the slave is stopped,
the behavior of CHANGE MASTER TO depends on
the states of the slave SQL thread and slave I/O threads; which
of these threads is stopped or running now determines the
options that can or cannot be used with a CHANGE MASTER
TO statement at a given point in time. The rules for
making this determination are listed here:
If the SQL thread is stopped, you can execute
CHANGE MASTER TO using any combination
that is otherwise allowed of
RELAY_LOG_FILE,
RELAY_LOG_POS, and
MASTER_DELAY options, even if the slave
I/O thread is running. No other options may be used with
this statement when the I/O thread is running.
If the I/O thread is stopped, you can execute
CHANGE MASTER TO using any of the options
for this statement (in any allowed combination)
except
RELAY_LOG_FILE,
RELAY_LOG_POS,
MASTER_DELAY, or
MASTER_AUTO_POSITION = 1 even when the
SQL thread is running.
Both the SQL thread and the I/O thread must be stopped
before issuing a CHANGE MASTER TO
statement that employs MASTER_AUTO_POSITION =
1.
You can check the current state of the slave SQL and I/O threads
using SHOW SLAVE STATUS. Note
that the Group Replication applier channel
(group_replication_applier) has no I/O
thread, only a SQL thread.
For more information, see Section 17.4.8, “Switching Masters During Failover”.
If you are using statement-based replication and temporary
tables, it is possible for a CHANGE MASTER TO
statement following a STOP SLAVE statement to
leave behind temporary tables on the slave. A warning
(ER_WARN_OPEN_TEMP_TABLES_MUST_BE_ZERO)
is now issued whenever this occurs. You can avoid this in such
cases by making sure that the value of the
Slave_open_temp_tables system
status variable is equal to 0 prior to executing such a
CHANGE MASTER TO statement.
CHANGE MASTER TO is useful for
setting up a slave when you have the snapshot of the master and
have recorded the master binary log coordinates corresponding to
the time of the snapshot. After loading the snapshot into the
slave to synchronize it with the master, you can run
CHANGE MASTER TO MASTER_LOG_FILE='log_name', MASTER_LOG_POS=log_pos on the slave to specify the coordinates at which the slave should begin reading the master binary log.
The following example changes the master server the slave uses and establishes the master binary log coordinates from which the slave begins reading. This is used when you want to set up the slave to replicate the master:
CHANGE MASTER TO
MASTER_HOST='master2.example.com',
MASTER_USER='replication',
MASTER_PASSWORD='password',
MASTER_PORT=3306,
MASTER_LOG_FILE='master2-bin.001',
MASTER_LOG_POS=4,
MASTER_CONNECT_RETRY=10;
The next example shows an operation that is less frequently
employed. It is used when the slave has relay log files that you
want it to execute again for some reason. To do this, the master
need not be reachable. You need only use
CHANGE MASTER TO and start the
SQL thread (START SLAVE SQL_THREAD):
CHANGE MASTER TO RELAY_LOG_FILE='slave-relay-bin.006', RELAY_LOG_POS=4025;
The following table shows the maximum permissible length for the string-valued options.
| Option | Maximum Length |
|---|---|
| MASTER_HOST | 255 (60 prior to MySQL 8.0.17) |
| MASTER_USER | 96 |
| MASTER_PASSWORD | 32 |
| MASTER_LOG_FILE | 511 |
| RELAY_LOG_FILE | 511 |
| MASTER_SSL_CA | 511 |
| MASTER_SSL_CAPATH | 511 |
| MASTER_SSL_CERT | 511 |
| MASTER_SSL_CRL | 511 |
| MASTER_SSL_CRLPATH | 511 |
| MASTER_SSL_KEY | 511 |
| MASTER_SSL_CIPHER | 511 |
| MASTER_TLS_VERSION | 511 |
| MASTER_TLS_CIPHERSUITES | 4000 |
| MASTER_PUBLIC_KEY_PATH | 511 |
| MASTER_COMPRESSION_ALGORITHMS | 99 |
CHANGE REPLICATION FILTER filter[, filter] [, ...] [FOR CHANNEL channel]

filter: {
    REPLICATE_DO_DB = (db_list)
  | REPLICATE_IGNORE_DB = (db_list)
  | REPLICATE_DO_TABLE = (tbl_list)
  | REPLICATE_IGNORE_TABLE = (tbl_list)
  | REPLICATE_WILD_DO_TABLE = (wild_tbl_list)
  | REPLICATE_WILD_IGNORE_TABLE = (wild_tbl_list)
  | REPLICATE_REWRITE_DB = (db_pair_list)
}

db_list:
    db_name[, db_name][, ...]

tbl_list:
    db_name.table_name[, db_name.table_name][, ...]

wild_tbl_list:
    'db_pattern.table_pattern'[, 'db_pattern.table_pattern'][, ...]

db_pair_list:
    (db_pair)[, (db_pair)][, ...]

db_pair:
    from_db, to_db
CHANGE REPLICATION FILTER sets one or more
replication filtering rules on the slave in the same way as
starting the slave mysqld with replication
filtering options such as
--replicate-do-db or
--replicate-wild-ignore-table.
Unlike the case with the server options, this statement does not
require restarting the server to take effect, only that the
slave SQL thread be stopped using
STOP SLAVE
SQL_THREAD first (and restarted with
START SLAVE
SQL_THREAD afterwards). CHANGE
REPLICATION FILTER requires the
REPLICATION_SLAVE_ADMIN privilege
(or the deprecated SUPER
privilege). Use the FOR CHANNEL channel clause to make a replication filter specific to a replication channel, for example on a multi-source replication slave. Filters applied without a specific FOR CHANNEL clause are considered global filters, meaning that they are applied to all replication channels.
Global replication filters cannot be set on a MySQL server
instance that is configured for Group Replication, because
filtering transactions on some servers would make the group
unable to reach agreement on a consistent state. Channel
specific replication filters can be set on replication
channels that are not directly involved with Group
Replication, such as where a group member also acts as a
replication slave to a master that is outside the group.
They cannot be set on the
group_replication_applier or
group_replication_recovery channels.
The following list shows the CHANGE REPLICATION
FILTER options and how they relate to
--replicate-* server options:
REPLICATE_DO_DB: Include updates based on
database name. Equivalent to
--replicate-do-db.
REPLICATE_IGNORE_DB: Exclude updates
based on database name. Equivalent to
--replicate-ignore-db.
REPLICATE_DO_TABLE: Include updates based
on table name. Equivalent to
--replicate-do-table.
REPLICATE_IGNORE_TABLE: Exclude updates
based on table name. Equivalent to
--replicate-ignore-table.
REPLICATE_WILD_DO_TABLE: Include updates
based on wildcard pattern matching table name. Equivalent to
--replicate-wild-do-table.
REPLICATE_WILD_IGNORE_TABLE: Exclude
updates based on wildcard pattern matching table name.
Equivalent to
--replicate-wild-ignore-table.
REPLICATE_REWRITE_DB: Perform updates on
slave after substituting new name on slave for specified
database on master. Equivalent to
--replicate-rewrite-db.
The precise effects of REPLICATE_DO_DB and
REPLICATE_IGNORE_DB filters are dependent on
whether statement-based or row-based replication is in effect.
See Section 17.2.5, “How Servers Evaluate Replication Filtering Rules”, for more information.
Multiple replication filtering rules can be created in a single
CHANGE REPLICATION FILTER statement by
separating the rules with commas, as shown here:
CHANGE REPLICATION FILTER
REPLICATE_DO_DB = (d1), REPLICATE_IGNORE_DB = (d2);
Issuing the statement just shown is equivalent to starting the
slave mysqld with the options
--replicate-do-db=d1
--replicate-ignore-db=d2.
On a multi-source replication slave, which uses multiple replication channels to process transactions from different sources, use the FOR CHANNEL channel clause to set a replication filter on a replication channel:
CHANGE REPLICATION FILTER REPLICATE_DO_DB = (d1) FOR CHANNEL channel_1;
This enables you to create a channel specific replication filter
to filter out selected data from a source. When a FOR
CHANNEL clause is provided, the replication filter
statement acts on that slave replication channel removing any
existing replication filter which has the same filter type as
the specified replication filters, and replacing them with the
specified filter. Filter types not explicitly listed in the
statement are not modified. If issued against a slave
replication channel which is not configured, the statement fails
with an ER_SLAVE_CONFIGURATION error. If
issued against Group Replication channels, the statement fails
with an
ER_SLAVE_CHANNEL_OPERATION_NOT_ALLOWED
error.
On a replication slave with multiple replication channels
configured, issuing CHANGE REPLICATION
FILTER with no FOR CHANNEL clause
configures the replication filter for every configured slave
replication channel, and for the global replication filters. For
every filter type, if the filter type is listed in the
statement, then any existing filter rules of that type are
replaced by the filter rules specified in the most recently
issued statement, otherwise the old value of the filter type is
retained. For more information see
Section 17.2.5.4, “Replication Channel Based Filters”.
If the same filtering rule is specified multiple times, only the
last such rule is actually used. For
example, the two statements shown here have exactly the same
effect, because the first REPLICATE_DO_DB
rule in the first statement is ignored:
CHANGE REPLICATION FILTER
REPLICATE_DO_DB = (db1, db2), REPLICATE_DO_DB = (db3, db4);
CHANGE REPLICATION FILTER
REPLICATE_DO_DB = (db3, db4);
This behavior differs from that of the
--replicate-* filter options where specifying
the same option multiple times causes the creation of multiple
filter rules.
Names of tables and databases not containing any special characters need not be quoted. Values used with REPLICATE_WILD_DO_TABLE and REPLICATE_WILD_IGNORE_TABLE are string
expressions, possibly containing (special) wildcard characters,
and so must be quoted. This is shown in the following example
statements:
CHANGE REPLICATION FILTER
REPLICATE_WILD_DO_TABLE = ('db1.old%');
CHANGE REPLICATION FILTER
REPLICATE_WILD_IGNORE_TABLE = ('db1.new%', 'db2.new%');
Values used with REPLICATE_REWRITE_DB
represent pairs of database names; each
such value must be enclosed in parentheses. The following
statement rewrites statements occurring on database
db1 on the master to database
db2 on the slave:
CHANGE REPLICATION FILTER REPLICATE_REWRITE_DB = ((db1, db2));
The statement just shown contains two sets of parentheses, one
enclosing the pair of database names, and the other enclosing
the entire list. This is perhaps more easily seen in the
following example, which creates two
rewrite-db rules, one rewriting database
dbA to dbB, and one
rewriting database dbC to
dbD:
CHANGE REPLICATION FILTER REPLICATE_REWRITE_DB = ((dbA, dbB), (dbC, dbD));
The CHANGE REPLICATION FILTER
statement replaces replication filtering rules only for the
filter types and replication channels affected by the statement,
and leaves other rules and channels unchanged. If you want to
unset all filters of a given type, set the filter's value
to an explicitly empty list, as shown in this example, which
removes all existing REPLICATE_DO_DB and
REPLICATE_IGNORE_DB rules:
CHANGE REPLICATION FILTER
REPLICATE_DO_DB = (), REPLICATE_IGNORE_DB = ();
Setting a filter to empty in this way removes all existing
rules, does not create any new ones, and does not restore any
rules set at mysqld startup using --replicate-*
options on the command line or in the configuration file.
The RESET SLAVE
ALL statement removes channel specific replication
filters that were set on channels deleted by the statement. When
the deleted channel or channels are recreated, any global
replication filters specified for the slave are copied to them,
and no channel specific replication filters are applied.
For more information, see Section 17.2.5, “How Servers Evaluate Replication Filtering Rules”.
SELECT MASTER_POS_WAIT('master_log_file', master_log_pos [, timeout][, channel])
This is actually a function, not a statement. It is used to ensure that the slave has read and executed events up to a given position in the master's binary log. See Section 12.24, “Miscellaneous Functions”, for a full description.
RESET SLAVE [ALL] [channel_option]

channel_option:
    FOR CHANNEL channel
RESET SLAVE makes the slave
forget its replication position in the master's binary log. This
statement is meant to be used for a clean start: It clears the
master info and relay log info repositories, deletes all the
relay log files, and starts a new relay log file. It also resets
to 0 the replication delay specified with the
MASTER_DELAY option to CHANGE MASTER
TO.
All relay log files are deleted, even if they have not been
completely executed by the slave SQL thread. (This is a
condition likely to exist on a replication slave if you have
issued a STOP SLAVE statement
or if the slave is highly loaded.)
For a server where GTIDs are in use
(gtid_mode is
ON), issuing RESET
SLAVE has no effect on the GTID execution history. The
statement does not change the values of
gtid_executed or
gtid_purged, or the
mysql.gtid_executed table. If you need to
reset the GTID execution history, use RESET
MASTER, even if the GTID-enabled server is a
replication slave where binary logging is disabled.
RESET SLAVE requires the
RELOAD privilege.
To use RESET SLAVE, the slave
replication threads must be stopped, so on a running slave use
STOP SLAVE before issuing
RESET SLAVE. To use
RESET SLAVE on a Group
Replication group member, the member status must be
OFFLINE, meaning that the plugin is loaded
but the member does not currently belong to any group. A group
member can be taken offline by using a STOP
GROUP REPLICATION statement.
The optional FOR CHANNEL channel clause enables you to name which replication channel the statement applies to. Providing a FOR CHANNEL channel clause applies the RESET SLAVE statement to a specific replication channel. Combining a FOR CHANNEL channel clause with the ALL option deletes the specified channel. If no channel is named and no extra channels exist, the statement applies to the default channel. Issuing a RESET SLAVE ALL statement without a FOR CHANNEL channel clause when multiple replication channels exist deletes all replication channels and recreates only the default channel. See Section 17.2.3, “Replication Channels” for more information.
RESET SLAVE does not change any
replication connection parameters, which include the master
host, master port, master user, master password,
PRIVILEGE_CHECKS_USER account,
REQUIRE_ROW_FORMAT option, and
REQUIRE_TABLE_PRIMARY_KEY_CHECK option. If
you want to change any of the replication connection parameters,
you can do this using a CHANGE MASTER
TO statement after the server start. If you want to
remove all of the replication connection parameters, use
RESET SLAVE
ALL.
RESET SLAVE
ALL also clears the
IGNORE_SERVER_IDS list set by CHANGE MASTER
TO. When you have used
RESET SLAVE
ALL, if you want to use the instance as a replication
slave again, you need to issue a CHANGE
MASTER TO statement after the server start to specify
new connection parameters.
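For illustration, repurposing an instance as a slave of a different master after RESET SLAVE ALL might look like this; the host and account details are placeholders:

STOP SLAVE;
RESET SLAVE ALL;
CHANGE MASTER TO MASTER_HOST='master2.example.com', MASTER_USER='repl',
                 MASTER_PASSWORD='password', MASTER_AUTO_POSITION=1;
START SLAVE;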
In the event of a server crash or deliberate restart after
issuing RESET SLAVE but before
issuing START SLAVE, retention of
the replication connection parameters depends on the repository
used for the master info log and relay log info log:
When
master_info_repository=TABLE
and
relay_log_info_repository=TABLE
are set on the server (which are the default settings from
MySQL 8.0), replication connection parameters are preserved
in the crash-safe InnoDB tables
mysql.slave_master_info and
mysql.slave_relay_log_info as part of the
RESET SLAVE operation. They
are also retained in memory. In the event of a server crash
or deliberate restart after issuing
RESET SLAVE but before
issuing START SLAVE, the
replication connection parameters are retrieved from the
tables and reapplied to the channel. This situation applies
from MySQL 8.0.13 for the master info log, and from MySQL
8.0.19 for the relay log info log.
If
master_info_repository=FILE
and
relay_log_info_repository=FILE
are set on the server, or the MySQL Server release is
earlier than those specified above, replication connection
parameters are only retained in memory. If the slave
mysqld is restarted immediately after
issuing RESET SLAVE due to a
server crash or deliberate restart, the connection
parameters are lost. In that case, you must issue a
CHANGE MASTER TO statement
after the server start to respecify the connection
parameters before issuing START
SLAVE. Note that the FILE
setting for these options is deprecated, and will be removed
in a future release.
RESET SLAVE does not change any
replication filter settings (such as
--replicate-ignore-table) for
channels affected by the statement. However, RESET
SLAVE ALL removes the replication filters that were
set on the channels deleted by the statement. When the deleted
channel or channels are recreated, any global replication
filters specified for the slave are copied to them, and no
channel specific replication filters are applied. For more
information see
Section 17.2.5.4, “Replication Channel Based Filters”.
RESET SLAVE causes an implicit commit of an
ongoing transaction. See Section 13.3.3, “Statements That Cause an Implicit Commit”.
If the slave SQL thread was in the middle of replicating
temporary tables when it was stopped, and
RESET SLAVE is issued, these
replicated temporary tables are deleted on the slave.
RESET SLAVE does not reset the heartbeat
period or SSL_VERIFY_SERVER_CERT.
When used on an NDB Cluster replication slave SQL node,
RESET SLAVE clears the
mysql.ndb_apply_status table. You should
keep in mind when using this statement that
ndb_apply_status uses the
NDB storage engine and so is
shared by all SQL nodes attached to the slave cluster.
You can override this behavior by issuing SET GLOBAL ndb_clear_apply_status = OFF
prior to executing RESET SLAVE, which keeps
the slave from purging the ndb_apply_status
table in such cases.
START SLAVE [thread_types] [until_option] [connection_options] [channel_option]

thread_types:
    [thread_type [, thread_type] ... ]

thread_type:
    IO_THREAD | SQL_THREAD

until_option:
    UNTIL {   {SQL_BEFORE_GTIDS | SQL_AFTER_GTIDS} = gtid_set
          |   MASTER_LOG_FILE = 'log_name', MASTER_LOG_POS = log_pos
          |   RELAY_LOG_FILE = 'log_name', RELAY_LOG_POS = log_pos
          |   SQL_AFTER_MTS_GAPS  }

connection_options:
    [USER='user_name'] [PASSWORD='user_pass'] [DEFAULT_AUTH='plugin_name'] [PLUGIN_DIR='plugin_dir']

channel_option:
    FOR CHANNEL channel

gtid_set:
    uuid_set [, uuid_set] ...
    | ''

uuid_set:
    uuid:interval[:interval]...

uuid:
    hhhhhhhh-hhhh-hhhh-hhhh-hhhhhhhhhhhh

h:
    [0-9,A-F]

interval:
    n[-n]
    (n >= 1)
START SLAVE with no
thread_type options starts both of
the slave threads. The I/O thread reads events from the master
server and stores them in the relay log. The SQL thread reads
events from the relay log and executes them.
START SLAVE requires the
REPLICATION_SLAVE_ADMIN privilege
(or the deprecated SUPER
privilege).
If START SLAVE succeeds in
starting the slave threads, it returns without any error.
However, even in that case, it might be that the slave threads
start and then later stop (for example, because they do not
manage to connect to the master or read its binary log, or some
other problem). START SLAVE does
not warn you about this. You must check the slave's error log
for error messages generated by the slave threads, or check that
they are running satisfactorily with SHOW
SLAVE STATUS.
START SLAVE causes an implicit commit of an
ongoing transaction. See Section 13.3.3, “Statements That Cause an Implicit Commit”.
gtid_next must be set to
AUTOMATIC before issuing this statement.
The optional FOR CHANNEL channel clause enables you to name which replication channel the statement applies to. Providing a FOR CHANNEL channel clause applies the START SLAVE statement to a specific replication channel. If no clause is named and no extra channels exist, the statement applies to the default channel. If a START SLAVE statement does not have a channel defined when using multiple channels, this statement starts the specified threads for all channels. This statement is disallowed for the group_replication_recovery channel. See Section 17.2.3, “Replication Channels” for more information.
You can add IO_THREAD and
SQL_THREAD options to the statement to name
which of the threads to start. Note that the Group Replication
applier channel (group_replication_applier)
has no I/O thread, only a SQL thread. Specifying the
IO_THREAD or SQL_THREAD
options when you start this channel has no benefit.
MySQL supports pluggable user-password authentication with
START SLAVE with the USER,
PASSWORD, DEFAULT_AUTH and
PLUGIN_DIR options, as described in the
following list:
USER: User name. Cannot be set to an
empty or null string, or left unset if
PASSWORD is used.
PASSWORD: Password.
DEFAULT_AUTH: Name of plugin; default is
MySQL native authentication.
PLUGIN_DIR: Location of plugin.
You cannot use the SQL_THREAD option when
specifying any of USER,
PASSWORD, DEFAULT_AUTH, or
PLUGIN_DIR, unless the
IO_THREAD option is also provided.
See Section 6.2.17, “Pluggable Authentication”, for more information.
If an insecure connection is used with any of these options, the server issues the warning Sending passwords in plain text without SSL/TLS is extremely insecure.
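For illustration, supplying credentials at start time so that they are not stored in the slave's repositories might look like this; the account details are placeholders and a secure connection should be used:

START SLAVE USER='repl_user' PASSWORD='password';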
START SLAVE ... UNTIL supports two additional
options for use with global transaction identifiers (GTIDs) (see
Section 17.1.3, “Replication with Global Transaction Identifiers”). Each of these takes a set
of one or more global transaction identifiers
gtid_set as an argument (see
GTID Sets, for more
information).
When no thread_type is specified,
START SLAVE UNTIL SQL_BEFORE_GTIDS causes the
slave SQL thread to process transactions until it has reached
the first transaction whose GTID is listed
in the gtid_set. START SLAVE
UNTIL SQL_AFTER_GTIDS causes the slave threads to
process all transactions until the
last transaction in the
gtid_set has been processed by both
threads. In other words, START SLAVE UNTIL
SQL_BEFORE_GTIDS causes the slave SQL thread to
process all transactions occurring before the first GTID in the
gtid_set is reached, and
START SLAVE UNTIL SQL_AFTER_GTIDS causes the
slave threads to handle all transactions, including those whose
GTIDs are found in gtid_set, until
each has encountered a transaction whose GTID is not part of the
set. SQL_BEFORE_GTIDS and
SQL_AFTER_GTIDS each support the
SQL_THREAD and IO_THREAD
options, although using IO_THREAD with them
currently has no effect.
For example, START SLAVE SQL_THREAD UNTIL
SQL_BEFORE_GTIDS =
3E11FA47-71CA-11E1-9E33-C80AA9429562:11-56 causes the
slave SQL thread to process all transactions originating from
the master whose server_uuid is
3E11FA47-71CA-11E1-9E33-C80AA9429562 until it
encounters the transaction having sequence number 11; it then
stops without processing this transaction. In other words, all
transactions up to and including the transaction with sequence
number 10 are processed. Executing START SLAVE
SQL_THREAD UNTIL SQL_AFTER_GTIDS =
3E11FA47-71CA-11E1-9E33-C80AA9429562:11-56, on the
other hand, would cause the slave SQL thread to obtain all
transactions just mentioned from the master, including all of
the transactions having the sequence numbers 11 through 56, and
then to stop without processing any additional transactions;
that is, the transaction having sequence number 56 would be the
last transaction fetched by the slave SQL thread.
When using a multithreaded slave
with slave_preserve_commit_order=0 set, there
is a chance of gaps in the sequence of transactions that have
been executed from the relay log in the following cases:
killing the coordinator thread
after an error occurs in the applier threads
mysqld shuts down unexpectedly
Use the START SLAVE
UNTIL SQL_AFTER_MTS_GAPS statement to cause a
multithreaded slave's worker threads to only run until no
more gaps are found in the relay log, and then to stop. This
statement can take an SQL_THREAD option, but
the effects of the statement remain unchanged. It has no effect
on the slave I/O thread (and cannot be used with the
IO_THREAD option).
Issuing START SLAVE on a
multithreaded slave with gaps in the sequence of transactions
executed from the relay log generates a warning. In such a
situation, the solution is to use
START SLAVE UNTIL
SQL_AFTER_MTS_GAPS, then issue
RESET SLAVE to remove any
remaining relay logs. See
Section 17.5.1.33, “Replication and Transaction Inconsistencies”
for more information.
To change a failed multithreaded slave to single-threaded mode, you can issue the following series of statements, in the order shown:
START SLAVE UNTIL SQL_AFTER_MTS_GAPS;
SET @@GLOBAL.slave_parallel_workers = 0;
START SLAVE SQL_THREAD;
It is possible to view the entire text of a running
START SLAVE ... statement, including any
USER or PASSWORD values
used, in the output of SHOW
PROCESSLIST. This is also true for the text of a
running CHANGE MASTER TO
statement, including any values it employs for
MASTER_USER or
MASTER_PASSWORD.
START SLAVE sends an
acknowledgment to the user after both the I/O thread and the SQL
thread have started. However, the I/O thread may not yet have
connected. For this reason, a successful
START SLAVE causes
SHOW SLAVE STATUS to show
Slave_SQL_Running=Yes, but this does not
guarantee that Slave_IO_Running=Yes (because
Slave_IO_Running=Yes only if the I/O thread
is running and connected). For more
information, see Section 13.7.7.34, “SHOW SLAVE STATUS Statement”, and
Section 17.1.7.1, “Checking Replication Status”.
An UNTIL clause
(until_option, in the preceding
grammar) may be added to specify that the slave should start and
run until the SQL thread reaches a given point in the master
binary log, specified by the MASTER_LOG_POS
and MASTER_LOG_FILE options, or a given point in the slave relay
log, indicated with the RELAY_LOG_POS and
RELAY_LOG_FILE options. For compressed
transaction payloads, the position must be based on the
compressed Transaction_payload_event. When
the SQL thread reaches the point specified, it stops. If the
SQL_THREAD option is specified in the
statement, it starts only the SQL thread. Otherwise, it starts
both slave threads. If the SQL thread is running, the
UNTIL clause is ignored and a warning is
issued. You cannot use an UNTIL clause with
the IO_THREAD option.
It is also possible with START SLAVE UNTIL to
specify a stop point relative to a given GTID or set of GTIDs
using one of the options SQL_BEFORE_GTIDS or
SQL_AFTER_GTIDS, as explained previously in
this section. When using one of these options, you can specify
SQL_THREAD, IO_THREAD,
both of these, or neither of them. If you specify only
SQL_THREAD, then only the slave SQL thread is
affected by the statement; if only IO_THREAD
is used, then only the slave I/O thread is affected. If both
SQL_THREAD and IO_THREAD
are used, or if neither of them is used, then both the SQL and
I/O threads are affected by the statement.
For an UNTIL clause, you must specify any one
of the following:
Both a log file name and a position in that file
Either of
SQL_BEFORE_GTIDS or
SQL_AFTER_GTIDS
SQL_AFTER_MTS_GAPS
Do not mix master and relay log options. Do not mix log file options with GTID options.
The UNTIL clause is not supported for
multithreaded slaves except when also using
SQL_AFTER_MTS_GAPS.
If UNTIL is used on a multithreaded slave
without
SQL_AFTER_MTS_GAPS,
the slave operates in single-threaded (sequential) mode for
replication until the point specified by the
UNTIL clause is reached.
Any UNTIL condition is reset by a subsequent
STOP SLAVE statement, a
START SLAVE statement that
includes no UNTIL clause, or a server
restart.
When specifying a log file and position, you can use the
IO_THREAD option with START SLAVE
... UNTIL even though only the SQL thread is affected
by this statement. The IO_THREAD option is
ignored in such cases. The preceding restriction does not apply
when using one of the GTID options
(SQL_BEFORE_GTIDS and
SQL_AFTER_GTIDS); the GTID options support
both SQL_THREAD and
IO_THREAD, as explained previously in this
section.
The UNTIL clause can be useful for debugging
replication, or to cause replication to proceed until just
before the point where you want to avoid having the slave
replicate an event. For example, if an unwise
DROP TABLE statement was executed
on the master, you can use UNTIL to tell the
slave to execute up to that point but no farther. To find what
the event is, use mysqlbinlog with the master
binary log or slave relay log, or by using a
SHOW BINLOG EVENTS statement.
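As a sketch only, once you have identified the binary log coordinates just before the unwanted event (the file name and position shown here are placeholders), you can tell the slave to stop at that point:
START SLAVE UNTIL
    MASTER_LOG_FILE = 'master-bin.000123',
    MASTER_LOG_POS = 237;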
If you are using UNTIL to have the slave
process replicated queries in sections, it is recommended that
you start the slave with the
--skip-slave-start option to
prevent the SQL thread from running when the slave server
starts. It is probably best to use this option in an option file
rather than on the command line, so that an unexpected server
restart does not cause it to be forgotten.
The SHOW SLAVE STATUS statement
includes output fields that display the current values of the
UNTIL condition.
In very old versions of MySQL (before 4.0.5), this statement was
called SLAVE START. That syntax now produces
an error.
STOP SLAVE [thread_types] [channel_option]

thread_types:
    [thread_type [, thread_type] ... ]

thread_type: IO_THREAD | SQL_THREAD

channel_option:
    FOR CHANNEL channel
Stops the slave threads. STOP
SLAVE requires the
REPLICATION_SLAVE_ADMIN privilege
(or the deprecated SUPER
privilege). Recommended best practice is to execute
STOP SLAVE on the slave before stopping the
slave server (see Section 5.1.18, “The Server Shutdown Process”, for more
information).
When using the row-based logging format:
You should execute STOP SLAVE or
STOP SLAVE SQL_THREAD on the slave prior to
shutting down the slave server if you are replicating any tables
that use a nontransactional storage engine (see the
Note later in this section).
Like START SLAVE, this statement
may be used with the IO_THREAD and
SQL_THREAD options to name the thread or
threads to be stopped. Note that the Group Replication applier
channel (group_replication_applier) has no
I/O thread, only a SQL thread. Using the
SQL_THREAD option therefore stops this
channel completely.
STOP SLAVE causes an implicit commit of an
ongoing transaction. See Section 13.3.3, “Statements That Cause an Implicit Commit”.
gtid_next must be set to
AUTOMATIC before issuing this statement.
You can control how long STOP SLAVE waits
before timing out by setting the
rpl_stop_slave_timeout system
variable. This can be used to avoid deadlocks between
STOP SLAVE and other slave SQL statements
using different client connections to the slave. When the
timeout value is reached, the issuing client returns an error
message and stops waiting, but the STOP SLAVE
instruction remains in effect. Once the slave threads are no
longer busy, the STOP SLAVE statement is
executed and the slave stops.
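For example, to make STOP SLAVE give up waiting after one minute rather than the default, you could set the variable before issuing the statement; the value shown is only illustrative:
SET GLOBAL rpl_stop_slave_timeout = 60;
STOP SLAVE;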
Some CHANGE MASTER TO statements are allowed
while the slave is running, depending on the states of the slave
SQL and I/O threads. However, using STOP
SLAVE prior to executing CHANGE MASTER
TO in such cases is still supported. See
Section 13.4.2.1, “CHANGE MASTER TO Statement”, and
Section 17.4.8, “Switching Masters During Failover”, for more
information.
The optional FOR CHANNEL
clause enables you
to name which replication channel the statement applies to.
Providing a FOR CHANNEL channel
clause applies the
STOP SLAVE statement to a specific
replication channel. If no channel is named and no extra
channels exist, the statement applies to the default channel. If
a STOP SLAVE statement does not name a
channel when using multiple channels, this statement stops the
specified threads for all channels. This statement cannot be
used with the group_replication_recovery
channel. See Section 17.2.3, “Replication Channels” for more
information.
When using statement-based replication:
changing the master while it has open temporary tables is
potentially unsafe. This is one of the reasons why
statement-based replication of temporary tables is not
recommended. You can find out whether there are any temporary
tables on the slave by checking the value of
Slave_open_temp_tables; when
using statement-based replication, this value should be 0 before
executing CHANGE MASTER TO. If there are any
temporary tables open on the slave, issuing a CHANGE
MASTER TO statement after issuing a STOP
SLAVE causes an
ER_WARN_OPEN_TEMP_TABLES_MUST_BE_ZERO
warning.
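A simple way to perform this check on the slave, sketched here, is to inspect the status variable before stopping replication:
SHOW GLOBAL STATUS LIKE 'Slave_open_temp_tables';
-- the Value column should be 0 before issuing CHANGE MASTER TO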
When using a multithreaded slave
(slave_parallel_workers is a
nonzero value), any gaps in the sequence of transactions
executed from the relay log are closed as part of stopping the
worker threads. If the slave is stopped unexpectedly (for
example due to an error in a worker thread, or another thread
issuing KILL) while a
STOP SLAVE statement is
executing, the sequence of executed transactions from the relay
log may become inconsistent. See
Section 17.5.1.33, “Replication and Transaction Inconsistencies”
for more information.
If the current replication event group has modified one or more
nontransactional tables, STOP SLAVE waits for up to 60 seconds
for the event group to complete, unless you issue a
KILL QUERY or
KILL CONNECTION
statement for the slave SQL thread. If the event group remains
incomplete after the timeout, an error message is logged.
This section provides information about the statements used for controlling group replication.
START GROUP_REPLICATION
Starts group replication. This statement requires the
GROUP_REPLICATION_ADMIN privilege
(or the deprecated SUPER
privilege). If
super_read_only=ON and the
member should join as a primary,
super_read_only is set to
OFF once Group Replication successfully
starts.
STOP GROUP_REPLICATION
Stops Group Replication. This statement requires the
GROUP_REPLICATION_ADMIN privilege
(or the deprecated SUPER
privilege). As soon as you issue STOP
GROUP_REPLICATION the member is set to
super_read_only=ON, which
ensures that no writes can be made to the member while Group
Replication stops. Any other replication channels running on the
member are also stopped.
Use this statement with extreme caution because it removes the server instance from the group, meaning it is no longer protected by Group Replication's consistency guarantee mechanisms. To be completely safe, ensure that your applications can no longer connect to the instance before issuing this statement to avoid any chance of stale reads.
The following function enables you to configure which member of a single-primary replication group is the primary.
group_replication_set_as_primary()
Appoints a specific member of the group as the new primary,
overriding any election process. Pass in
member_uuid which is the
server_uuid of the member
that you want to become the new primary. Must be issued on a
member of a replication group running in single-primary
mode.
Syntax:
STRING group_replication_set_as_primary(member_uuid)

Return value:
A string containing the result of the operation, for example whether it was successful or not.
Example:
SELECT group_replication_set_as_primary(member_uuid);

For more information, see Section 18.4.1.1, “Changing a Group's Primary Member”
The following functions enable you to control the mode which a replication group is running in, either single-primary or multi-primary mode.
group_replication_switch_to_single_primary_mode()
Changes a group running in multi-primary mode to
single-primary mode, without the need to stop Group
Replication. Must be issued on a member of a replication
group running in multi-primary mode. When you change to
single-primary mode, strict consistency checks are also
disabled on all group members, as required in single-primary
mode
(group_replication_enforce_update_everywhere_checks=OFF).
Syntax:
STRING group_replication_switch_to_single_primary_mode([str])

Arguments:
str: A string containing the
UUID of a member of the group which should become the
new single primary. Other members of the group become
secondaries.
Return value:
A string containing the result of the operation, for example whether it was successful or not.
Example:
SELECT group_replication_switch_to_single_primary_mode(member_uuid);
For more information, see Section 18.4.1.2, “Changing a Group's Mode”
group_replication_switch_to_multi_primary_mode()
Changes a group running in single-primary mode to multi-primary mode. Must be issued on a member of a replication group running in single-primary mode.
Syntax:
STRING group_replication_switch_to_multi_primary_mode()
This function has no parameters.
Return value:
A string containing the result of the operation, for example whether it was successful or not.
Example:
SELECT group_replication_switch_to_multi_primary_mode()
All members which belong to the group become primaries.
For more information, see Section 18.4.1.2, “Changing a Group's Mode”
The following functions enable you to inspect and configure the maximum number of consensus instances that a group can execute in parallel.
group_replication_get_write_concurrency()
Check the maximum number of consensus instances that a group can execute in parallel.
Syntax:
INT group_replication_get_write_concurrency()
This function has no parameters.
Return value:
The maximum number of consensus instances currently set for the group.
Example:
SELECT group_replication_get_write_concurrency()
For more information, see Section 18.4.1.3, “Using Group Replication Group Write Consensus”.
group_replication_set_write_concurrency()
Configures the maximum number of consensus instances that a
group can execute in parallel. The
GROUP_REPLICATION_ADMIN
privilege is required to use this UDF.
Syntax:
STRING group_replication_set_write_concurrency(instances)
Arguments:
instances: Sets the maximum
number of consensus instances that a group can execute
in parallel. Default value is 10, valid values are
integers in the range of 10 to 200.
Return value:
Any resulting error as a string.
Example:
SELECT group_replication_set_write_concurrency(instances);
For more information, see Section 18.4.1.3, “Using Group Replication Group Write Consensus”.
The following functions enable you to inspect and configure the Group Replication communication protocol version that is used by a replication group.
group_replication_get_communication_protocol()
Inspect the Group Replication communication protocol version that is currently in use for a group.
Syntax:
STRING group_replication_get_communication_protocol()
This function has no parameters.
Return value:
The oldest MySQL Server version that can join this group and
use the group's communication protocol. Versions from MySQL
5.7.14 allow compression of messages, and versions from
MySQL 8.0.16 also allow fragmentation of messages. Note that
the
group_replication_get_communication_protocol()
UDF returns the minimum MySQL version that the group
supports, which might differ from the version number that
was passed to the
group_replication_set_communication_protocol()
UDF, and from the MySQL Server version that is installed on
the member where you use the UDF.
If the protocol cannot be inspected because this server instance does not belong to a replication group, an error is returned as a string.
Example:
SELECT group_replication_get_communication_protocol();
+------------------------------------------------+
| group_replication_get_communication_protocol() |
+------------------------------------------------+
| 8.0.16 |
+------------------------------------------------+
For more information, see Section 18.4.1.4, “Setting a Group's Communication Protocol Version”.
group_replication_set_communication_protocol()
Downgrade the Group Replication communication protocol
version of a group so that members at earlier releases can
join, or upgrade the Group Replication communication
protocol version of a group after upgrading MySQL Server on
all members. The
GROUP_REPLICATION_ADMIN
privilege is required to use this UDF, and all existing
group members must be online when you issue the statement,
with no loss of majority.
For MySQL InnoDB cluster, the communication protocol version is managed automatically whenever the cluster topology is changed using AdminAPI operations. You do not have to use these UDFs yourself for an InnoDB cluster.
Syntax:
STRING group_replication_set_communication_protocol(version)
Arguments:
version: For a downgrade,
specify the MySQL Server version of the prospective
group member that has the oldest installed server
version. In this case, the command makes the group fall
back to a communication protocol compatible with that
server version if possible. The minimum server version
that you can specify is MySQL 5.7.14. For an upgrade,
specify the new MySQL Server version to which the
existing group members have been upgraded.
Return value:
A string containing the result of the operation, for example whether it was successful or not.
Example:
SELECT group_replication_set_communication_protocol("5.7.25");
For more information, see Section 18.4.1.4, “Setting a Group's Communication Protocol Version”.
MySQL 8.0 provides support for server-side prepared statements. This support takes advantage of the efficient client/server binary protocol. Using prepared statements with placeholders for parameter values has the following benefits:
Less overhead for parsing the statement each time it is
executed. Typically, database applications process large volumes
of almost-identical statements, with only changes to literal or
variable values in clauses such as WHERE for
queries and deletes, SET for updates, and
VALUES for inserts.
Protection against SQL injection attacks. The parameter values can contain unescaped SQL quote and delimiter characters.
You can use server-side prepared statements through client
programming interfaces, including the MySQL C
API client library for C programs,
MySQL Connector/J for Java
programs, and MySQL Connector/NET for
programs using .NET technologies. For example, the C API provides a
set of function calls that make up its prepared statement API. See
Section 28.7.7, “C API Prepared Statements”. Other language
interfaces can provide support for prepared statements that use the
binary protocol by linking in the C client library, one example
being the
mysqli
extension, available in PHP 5.0 and later.
An alternative SQL interface to prepared statements is available. This interface is not as efficient as using the binary protocol through a prepared statement API, but requires no programming because it is available directly at the SQL level:
You can use it when no programming interface is available to you.
You can use it from any program that can send SQL statements to the server to be executed, such as the mysql client program.
You can use it even if the client is using an old version of the client library, as long as you connect to a server running MySQL 4.1 or higher.
SQL syntax for prepared statements is intended to be used for situations such as these:
To test how prepared statements work in your application before coding it.
To use prepared statements when you do not have access to a programming API that supports them.
To interactively troubleshoot application issues with prepared statements.
To create a test case that reproduces a problem with prepared statements, so that you can file a bug report.
SQL syntax for prepared statements is based on three SQL statements:
PREPARE prepares a statement for
execution (see Section 13.5.1, “PREPARE Statement”).
EXECUTE executes a prepared
statement (see Section 13.5.2, “EXECUTE Statement”).
DEALLOCATE PREPARE releases a
prepared statement (see Section 13.5.3, “DEALLOCATE PREPARE Statement”).
The following examples show two equivalent ways of preparing a statement that computes the hypotenuse of a triangle given the lengths of the two sides.
The first example shows how to create a prepared statement by using a string literal to supply the text of the statement:
mysql> PREPARE stmt1 FROM 'SELECT SQRT(POW(?,2) + POW(?,2)) AS hypotenuse';
mysql> SET @a = 3;
mysql> SET @b = 4;
mysql> EXECUTE stmt1 USING @a, @b;
+------------+
| hypotenuse |
+------------+
|          5 |
+------------+
mysql> DEALLOCATE PREPARE stmt1;
The second example is similar, but supplies the text of the statement as a user variable:
mysql> SET @s = 'SELECT SQRT(POW(?,2) + POW(?,2)) AS hypotenuse';
mysql> PREPARE stmt2 FROM @s;
mysql> SET @a = 6;
mysql> SET @b = 8;
mysql> EXECUTE stmt2 USING @a, @b;
+------------+
| hypotenuse |
+------------+
|         10 |
+------------+
mysql> DEALLOCATE PREPARE stmt2;
Here is an additional example that demonstrates how to choose the table on which to perform a query at runtime, by storing the name of the table as a user variable:
mysql> USE test;
mysql> CREATE TABLE t1 (a INT NOT NULL);
mysql> INSERT INTO t1 VALUES (4), (8), (11), (32), (80);
mysql> SET @table = 't1';
mysql> SET @s = CONCAT('SELECT * FROM ', @table);
mysql> PREPARE stmt3 FROM @s;
mysql> EXECUTE stmt3;
+----+
| a  |
+----+
|  4 |
|  8 |
| 11 |
| 32 |
| 80 |
+----+
mysql> DEALLOCATE PREPARE stmt3;
A prepared statement is specific to the session in which it was created. If you terminate a session without deallocating a previously prepared statement, the server deallocates it automatically.
A prepared statement is also global to the session. If you create a prepared statement within a stored routine, it is not deallocated when the stored routine ends.
To guard against too many prepared statements being created
simultaneously, set the
max_prepared_stmt_count system
variable. To prevent the use of prepared statements, set the value
to 0.
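For example (the limit shown is arbitrary), you might cap or disable prepared statements like this:
SET GLOBAL max_prepared_stmt_count = 20000;  -- allow at most 20000 prepared statements server-wide
SET GLOBAL max_prepared_stmt_count = 0;      -- prevent any prepared statements from being created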
The following SQL statements can be used as prepared statements:
ALTER TABLE
ALTER USER
ANALYZE TABLE
CACHE INDEX
CALL
CHANGE MASTER
CHECKSUM {TABLE | TABLES}
COMMIT
{CREATE | DROP} INDEX
{CREATE | RENAME | DROP} DATABASE
{CREATE | DROP} TABLE
{CREATE | RENAME | DROP} USER
{CREATE | DROP} VIEW
DELETE
DO
FLUSH {TABLE | TABLES | TABLES WITH READ LOCK | HOSTS | PRIVILEGES
| LOGS | STATUS | MASTER | SLAVE | USER_RESOURCES}
GRANT
INSERT
INSTALL PLUGIN
KILL
LOAD INDEX INTO CACHE
OPTIMIZE TABLE
RENAME TABLE
REPAIR TABLE
REPLACE
RESET {MASTER | SLAVE}
REVOKE
SELECT
SET
SHOW {WARNINGS | ERRORS}
SHOW BINLOG EVENTS
SHOW CREATE {PROCEDURE | FUNCTION | EVENT | TABLE | VIEW}
SHOW {MASTER | BINARY} LOGS
SHOW {MASTER | SLAVE} STATUS
SLAVE {START | STOP}
TRUNCATE TABLE
UNINSTALL PLUGIN
UPDATE

For compliance with the SQL standard, which states that diagnostics statements are not preparable, MySQL does not support the following as prepared statements:
SHOW WARNINGS, SHOW COUNT(*)
WARNINGS
SHOW ERRORS, SHOW COUNT(*)
ERRORS
Statements containing any reference to the
warning_count or
error_count system variable.
Other statements are not supported in MySQL 8.0.
Generally, statements not permitted in SQL prepared statements are also not permitted in stored programs. Exceptions are noted in Section 24.8, “Restrictions on Stored Programs”.
Metadata changes to tables or views referred to by prepared statements are detected and cause automatic repreparation of the statement when it is next executed. For more information, see Section 8.10.3, “Caching of Prepared Statements and Stored Programs”.
Placeholders can be used for the arguments of the
LIMIT clause when using prepared statements. See
Section 13.2.10, “SELECT Statement”.
In prepared CALL statements used with
PREPARE and
EXECUTE, placeholder support for
OUT and INOUT parameters is
available beginning with MySQL 8.0. See
Section 13.2.1, “CALL Statement”, for an example and a workaround for earlier
versions. Placeholders can be used for IN
parameters regardless of version.
SQL syntax for prepared statements cannot be used in nested fashion.
That is, a statement passed to
PREPARE cannot itself be a
PREPARE,
EXECUTE, or
DEALLOCATE PREPARE statement.
SQL syntax for prepared statements is distinct from using prepared
statement API calls. For example, you cannot use the
mysql_stmt_prepare() C API function
to prepare a PREPARE,
EXECUTE, or
DEALLOCATE PREPARE statement.
SQL syntax for prepared statements can be used within stored
procedures, but not in stored functions or triggers. However, a
cursor cannot be used for a dynamic statement that is prepared and
executed with PREPARE and
EXECUTE. The statement for a cursor
is checked at cursor creation time, so the statement cannot be
dynamic.
SQL syntax for prepared statements does not support multi-statements
(that is, multiple statements within a single string separated by
; characters).
To write C programs that use the CALL
SQL statement to execute stored procedures that contain prepared
statements, the CLIENT_MULTI_RESULTS flag must be
enabled. This is because each CALL
returns a result to indicate the call status, in addition to any
result sets that might be returned by statements executed within the
procedure.
CLIENT_MULTI_RESULTS can be enabled when you call
mysql_real_connect(), either
explicitly by passing the CLIENT_MULTI_RESULTS
flag itself, or implicitly by passing
CLIENT_MULTI_STATEMENTS (which also enables
CLIENT_MULTI_RESULTS). For additional
information, see Section 13.2.1, “CALL Statement”.
PREPARE stmt_name FROM preparable_stmt
The PREPARE statement prepares a
SQL statement and assigns it a name,
stmt_name, by which to refer to the
statement later. The prepared statement is executed with
EXECUTE and released with
DEALLOCATE PREPARE. For examples,
see Section 13.5, “Prepared Statements”.
Statement names are not case-sensitive.
preparable_stmt is either a string
literal or a user variable that contains the text of the SQL
statement. The text must represent a single statement, not
multiple statements. Within the statement, ?
characters can be used as parameter markers to indicate where data
values are to be bound to the query later when you execute it. The
? characters should not be enclosed within
quotation marks, even if you intend to bind them to string values.
Parameter markers can be used only where data values should
appear, not for SQL keywords, identifiers, and so forth.
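As a brief illustration (the table and column names are hypothetical), a marker stands for a data value and is written without quotation marks; it cannot stand for an identifier such as a table name:
PREPARE s1 FROM 'SELECT * FROM city WHERE name = ?';   -- correct: ? is bound to a data value later
-- Not valid: a marker cannot replace an identifier or keyword
-- PREPARE s2 FROM 'SELECT * FROM ? WHERE name = ''Lisbon''';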
If a prepared statement with the given name already exists, it is deallocated implicitly before the new statement is prepared. This means that if the new statement contains an error and cannot be prepared, an error is returned and no statement with the given name exists.
The scope of a prepared statement is the session within which it is created, which has several implications:
A prepared statement created in one session is not available to other sessions.
When a session ends, whether normally or abnormally, its prepared statements no longer exist. If auto-reconnect is enabled, the client is not notified that the connection was lost. For this reason, clients may wish to disable auto-reconnect. See Section 28.7.27, “C API Automatic Reconnection Control”.
A prepared statement created within a stored program continues to exist after the program finishes executing and can be executed outside the program later.
A statement prepared in stored program context cannot refer to stored procedure or function parameters or local variables because they go out of scope when the program ends and would be unavailable were the statement to be executed later outside the program. As a workaround, refer instead to user-defined variables, which also have session scope; see Section 9.4, “User-Defined Variables”.
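The following sketch, which reuses the t1 table created in the earlier example, shows the workaround: the routine parameter is copied into a user-defined variable, which the prepared statement can then use:
DELIMITER //
CREATE PROCEDURE find_row(p_value INT)
BEGIN
  SET @v = p_value;                                  -- user variable has session scope
  PREPARE stmt FROM 'SELECT * FROM t1 WHERE a = ?';
  EXECUTE stmt USING @v;                             -- parameter values can be supplied only by user variables
  DEALLOCATE PREPARE stmt;
END//
DELIMITER ;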
EXECUTE stmt_name [USING @var_name [, @var_name] ...]
After preparing a statement with
PREPARE, you execute it with an
EXECUTE statement that refers to
the prepared statement name. If the prepared statement contains
any parameter markers, you must supply a USING
clause that lists user variables containing the values to be bound
to the parameters. Parameter values can be supplied only by user
variables, and the USING clause must name
exactly as many variables as the number of parameter markers in
the statement.
You can execute a given prepared statement multiple times, passing different variables to it or setting the variables to different values before each execution.
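For instance, assuming stmt1 from the first example in this section is still prepared in the session, it can be executed repeatedly with different values:
SET @a = 3, @b = 4;
EXECUTE stmt1 USING @a, @b;   -- hypotenuse is 5
SET @a = 6, @b = 8;
EXECUTE stmt1 USING @a, @b;   -- hypotenuse is 10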
For examples, see Section 13.5, “Prepared Statements”.
{DEALLOCATE | DROP} PREPARE stmt_name
To deallocate a prepared statement produced with
PREPARE, use a
DEALLOCATE PREPARE statement that
refers to the prepared statement name. Attempting to execute a
prepared statement after deallocating it results in an error. If
too many prepared statements are created and not deallocated by
either the DEALLOCATE PREPARE statement or the
end of the session, you might encounter the upper limit enforced
by the max_prepared_stmt_count
system variable.
For examples, see Section 13.5, “Prepared Statements”.
This section describes the syntax for the
BEGIN ... END
compound statement and other statements that can be used in the body
of stored programs: Stored procedures and functions, triggers, and
events. These objects are defined in terms of SQL code that is
stored on the server for later invocation (see
Chapter 24, Stored Objects).
A compound statement is a block that can contain other blocks; declarations for variables, condition handlers, and cursors; and flow control constructs such as loops and conditional tests.
[begin_label:] BEGIN [statement_list] END [end_label]
BEGIN ... END
syntax is used for writing compound statements, which can appear
within stored programs (stored procedures and functions, triggers,
and events). A compound statement can contain multiple statements,
enclosed by the BEGIN and
END keywords.
statement_list represents a list of one
or more statements, each terminated by a semicolon
(;) statement delimiter. The
statement_list itself is optional, so
the empty compound statement (BEGIN END) is
legal.
BEGIN ... END
blocks can be nested.
Use of multiple statements requires that a client is able to send
statement strings containing the ; statement
delimiter. In the mysql command-line client,
this is handled with the delimiter command.
Changing the ; end-of-statement delimiter (for
example, to //) permits ; to
be used in a program body. For an example, see
Section 24.1, “Defining Stored Programs”.
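A minimal sketch of this pattern (the procedure name is arbitrary) follows; the delimiter is changed to // so that the semicolons inside the body are sent to the server rather than interpreted by mysql:
DELIMITER //
CREATE PROCEDURE two_results()
BEGIN
  SELECT 1;
  SELECT 2;
END//
DELIMITER ;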
A BEGIN ...
END block can be labeled. See
Section 13.6.2, “Statement Labels”.
The optional [NOT] ATOMIC clause is not
supported. This means that no transactional savepoint is set at
the start of the instruction block and the
BEGIN clause used in this context has no effect
on the current transaction.
Within all stored programs, the parser treats
BEGIN [WORK]
as the beginning of a
BEGIN ...
END block. To begin a transaction in this context, use
START
TRANSACTION instead.
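For example, in a procedure such as the hypothetical one sketched below (using an imaginary accounts table), START TRANSACTION begins a transaction, whereas writing BEGIN at that point would be parsed as the start of a nested block:
DELIMITER //
CREATE PROCEDURE transfer_sketch()
BEGIN
  START TRANSACTION;                                        -- begins a transaction; BEGIN here would not
  UPDATE accounts SET balance = balance - 10 WHERE id = 1;
  UPDATE accounts SET balance = balance + 10 WHERE id = 2;
  COMMIT;
END//
DELIMITER ;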
[begin_label:] BEGIN
    [statement_list]
END [end_label]

[begin_label:] LOOP
    statement_list
END LOOP [end_label]

[begin_label:] REPEAT
    statement_list
UNTIL search_condition
END REPEAT [end_label]

[begin_label:] WHILE search_condition DO
    statement_list
END WHILE [end_label]
Labels are permitted for
BEGIN ... END
blocks and for the LOOP,
REPEAT, and
WHILE statements. Label use for
those statements follows these rules:
begin_label must be followed by a
colon.
begin_label can be given without
end_label. If
end_label is present, it must be
the same as begin_label.
end_label cannot be given without
begin_label.
Labels at the same nesting level must be distinct.
Labels can be up to 16 characters long.
To refer to a label within the labeled construct, use an
ITERATE or
LEAVE statement. The following
example uses those statements to continue iterating or terminate
the loop:
CREATE PROCEDURE doiterate(p1 INT)
BEGIN
label1: LOOP
SET p1 = p1 + 1;
IF p1 < 10 THEN ITERATE label1; END IF;
LEAVE label1;
END LOOP label1;
END;

The scope of a block label does not include the code for handlers declared within the block. For details, see Section 13.6.7.2, “DECLARE ... HANDLER Statement”.
The DECLARE statement is used to
define various items local to a program:
Local variables. See Section 13.6.4, “Variables in Stored Programs”.
Conditions and handlers. See Section 13.6.7, “Condition Handling”.
Cursors. See Section 13.6.6, “Cursors”.
DECLARE is permitted only inside a
BEGIN ... END
compound statement and must be at its start, before any other
statements.
Declarations must follow a certain order. Cursor declarations must appear before handler declarations. Variable and condition declarations must appear before cursor or handler declarations.
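The following skeleton is a sketch of the required ordering, reusing the t1 table from the prepared-statement example earlier in this chapter: variables and conditions first, then cursors, then handlers:
DELIMITER //
CREATE PROCEDURE order_demo()
BEGIN
  DECLARE done INT DEFAULT FALSE;                           -- variable declarations first
  DECLARE no_such_table CONDITION FOR SQLSTATE '42S02';     -- condition declarations
  DECLARE cur CURSOR FOR SELECT a FROM t1;                  -- cursor declarations next
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;   -- handler declarations last
  OPEN cur;
  CLOSE cur;
END//
DELIMITER ;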
System variables and user-defined variables can be used in stored
programs, just as they can be used outside stored-program context.
In addition, stored programs can use DECLARE to
define local variables, and stored routines (procedures and
functions) can be declared to take parameters that communicate
values between the routine and its caller.
To declare local variables, use the
DECLARE
statement, as described in
Section 13.6.4.1, “Local Variable DECLARE Statement”.
Variables can be set directly with the
SET
statement. See Section 13.7.6.1, “SET Syntax for Variable Assignment”.
Results from queries can be retrieved into local variables
using SELECT ... INTO var_list or by
opening a cursor and using
FETCH ... INTO var_list. See
Section 13.2.10.1, “SELECT ... INTO Statement”, and Section 13.6.6, “Cursors”.
For information about the scope of local variables and how MySQL resolves ambiguous names, see Section 13.6.4.2, “Local Variable Scope and Resolution”.
It is not permitted to assign the value DEFAULT
to stored procedure or function parameters or stored program local
variables (for example with a SET var_name = DEFAULT
statement). In MySQL 8.0, this results in a syntax
error.
DECLARE var_name [, var_name] ... type [DEFAULT value]
This statement declares local variables within stored programs.
To provide a default value for a variable, include a
DEFAULT clause. The value can be specified as
an expression; it need not be a constant. If the
DEFAULT clause is missing, the initial value
is NULL.
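For example, inside a BEGIN ... END block you might write declarations such as these (the names are arbitrary):
DECLARE counter INT DEFAULT 0;               -- constant default
DECLARE started_at DATETIME DEFAULT NOW();   -- the default can be an expression
DECLARE message VARCHAR(50);                 -- no DEFAULT clause, so the initial value is NULL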
Local variables are treated like stored routine parameters with respect to data type and overflow checking. See Section 13.1.17, “CREATE PROCEDURE and CREATE FUNCTION Statements”.
Variable declarations must appear before cursor or handler declarations.
Local variable names are not case-sensitive. Permissible characters and quoting rules are the same as for other identifiers, as described in Section 9.2, “Schema Object Names”.
The scope of a local variable is the
BEGIN ...
END block within which it is declared. The variable
can be referred to in blocks nested within the declaring block,
except those blocks that declare a variable with the same name.
For examples of variable declarations, see Section 13.6.4.2, “Local Variable Scope and Resolution”.
Because local variables are in scope only during stored program
execution, references to them are not permitted in prepared
statements created within a stored program. Prepared statement
scope is the current session, not the stored program, so the
statement could be executed after the program ends, at which
point the variables would no longer be in scope. For example,
SELECT ... INTO local_var
cannot be used as
a prepared statement. This restriction also applies to stored
procedure and function parameters. See
Section 13.5.1, “PREPARE Statement”.
A local variable should not have the same name as a table
column. If an SQL statement, such as a
SELECT ...
INTO statement, contains a reference to a column and a
declared local variable with the same name, MySQL currently
interprets the reference as the name of a variable. Consider the
following procedure definition:
CREATE PROCEDURE sp1 (x VARCHAR(5))
BEGIN
DECLARE xname VARCHAR(5) DEFAULT 'bob';
DECLARE newname VARCHAR(5);
DECLARE xid INT;
SELECT xname, id INTO newname, xid
FROM table1 WHERE xname = xname;
SELECT newname;
END;
MySQL interprets xname in the
SELECT statement as a reference
to the xname variable
rather than the xname
column. Consequently, when the procedure
sp1() is called, the
newname variable returns the value
'bob' regardless of the value of the
table1.xname column.
Similarly, the cursor definition in the following procedure
contains a SELECT statement that
refers to xname. MySQL interprets this as a
reference to the variable of that name rather than a column
reference.
CREATE PROCEDURE sp2 (x VARCHAR(5))
BEGIN
DECLARE xname VARCHAR(5) DEFAULT 'bob';
DECLARE newname VARCHAR(5);
DECLARE xid INT;
DECLARE done TINYINT DEFAULT 0;
DECLARE cur1 CURSOR FOR SELECT xname, id FROM table1;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
OPEN cur1;
read_loop: LOOP
FETCH FROM cur1 INTO newname, xid;
IF done THEN LEAVE read_loop; END IF;
SELECT newname;
END LOOP;
CLOSE cur1;
END;
MySQL supports the IF,
CASE,
ITERATE,
LEAVE,
LOOP,
WHILE, and
REPEAT constructs for flow control
within stored programs. It also supports
RETURN within stored functions.
Many of these constructs contain other statements, as indicated by
the grammar specifications in the following sections. Such
constructs may be nested. For example, an
IF statement might contain a
WHILE loop, which itself contains a
CASE statement.
MySQL does not support FOR loops.
CASE case_value
    WHEN when_value THEN statement_list
    [WHEN when_value THEN statement_list] ...
    [ELSE statement_list]
END CASE
Or:
CASE
WHEN search_condition THEN statement_list
[WHEN search_condition THEN statement_list] ...
[ELSE statement_list]
END CASE
The CASE statement for stored
programs implements a complex conditional construct.
There is also a CASE
expr, which differs from the
CASE
statement described here. See
Section 12.4, “Control Flow Functions”. The
CASE statement cannot have an
ELSE NULL clause, and it is terminated with
END CASE instead of END.
For the first syntax, case_value is
an expression. This value is compared to the
when_value expression in each
WHEN clause until one of them is equal. When
an equal when_value is found, the
corresponding THEN clause
statement_list executes. If no
when_value is equal, the
ELSE clause
statement_list executes, if there is
one.
This syntax cannot be used to test for equality with
NULL because NULL = NULL
is false. See Section 3.3.4.6, “Working with NULL Values”.
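If you need to branch on NULL, a searched CASE (the second syntax) with an IS NULL test can be used instead; in this sketch, v is assumed to be a previously declared local variable:
CASE
  WHEN v IS NULL THEN SELECT 'v is NULL';
  WHEN v = 1     THEN SELECT 'v is 1';
  ELSE                SELECT 'v is something else';
END CASE;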
For the second syntax, each WHEN clause
search_condition expression is
evaluated until one is true, at which point its corresponding
THEN clause
statement_list executes. If no
search_condition is true, the
ELSE clause
statement_list executes, if there is
one.
If no when_value or
search_condition matches the value
tested and the CASE statement
contains no ELSE clause, a Case
not found for CASE statement error results.
Each statement_list consists of one
or more SQL statements; an empty
statement_list is not permitted.
To handle situations where no value is matched by any
WHEN clause, use an ELSE
containing an empty
BEGIN ...
END block, as shown in this example. (The indentation
used here in the ELSE clause is for purposes
of clarity only, and is not otherwise significant.)
DELIMITER |
CREATE PROCEDURE p()
BEGIN
DECLARE v INT DEFAULT 1;
CASE v
WHEN 2 THEN SELECT v;
WHEN 3 THEN SELECT 0;
ELSE
BEGIN
END;
END CASE;
END;
|
IF search_condition THEN statement_list
    [ELSEIF search_condition THEN statement_list] ...
    [ELSE statement_list]
END IF
The IF statement for stored
programs implements a basic conditional construct.
There is also an IF()
function, which differs from the
IF
statement described here. See
Section 12.4, “Control Flow Functions”. The
IF statement can have
THEN, ELSE, and
ELSEIF clauses, and it is terminated with
END IF.
If a given search_condition evaluates
to true, the corresponding THEN or
ELSEIF clause
statement_list executes. If no
search_condition matches, the
ELSE clause
statement_list executes.
Each statement_list consists of one
or more SQL statements; an empty
statement_list is not permitted.
An IF ... END IF block, like all other
flow-control blocks used within stored programs, must be
terminated with a semicolon, as shown in this example:
DELIMITER //
CREATE FUNCTION SimpleCompare(n INT, m INT)
RETURNS VARCHAR(20)
BEGIN
DECLARE s VARCHAR(20);
IF n > m THEN SET s = '>';
ELSEIF n = m THEN SET s = '=';
ELSE SET s = '<';
END IF;
SET s = CONCAT(n, ' ', s, ' ', m);
RETURN s;
END //
DELIMITER ;
As with other flow-control constructs, IF ... END
IF blocks may be nested within other flow-control
constructs, including other IF
statements. Each IF must be
terminated by its own END IF followed by a
semicolon. You can use indentation to make nested flow-control
blocks more easily readable by humans (although this is not
required by MySQL), as shown here:
DELIMITER //
CREATE FUNCTION VerboseCompare (n INT, m INT)
RETURNS VARCHAR(50)
BEGIN
DECLARE s VARCHAR(50);
IF n = m THEN SET s = 'equals';
ELSE
IF n > m THEN SET s = 'greater';
ELSE SET s = 'less';
END IF;
SET s = CONCAT('is ', s, ' than');
END IF;
SET s = CONCAT(n, ' ', s, ' ', m, '.');
RETURN s;
END //
DELIMITER ;
In this example, the inner IF is
evaluated only if n is not equal to
m.
ITERATE label
ITERATE can appear only within
LOOP,
REPEAT, and
WHILE statements.
ITERATE means “start the
loop again.”
For an example, see Section 13.6.5.5, “LOOP Statement”.
LEAVE label
This statement is used to exit the flow control construct that
has the given label. If the label is for the outermost stored
program block, LEAVE exits the
program.
LEAVE can be used within
BEGIN ...
END or loop constructs
(LOOP,
REPEAT,
WHILE).
For an example, see Section 13.6.5.5, “LOOP Statement”.
[begin_label:] LOOP
    statement_list
END LOOP [end_label]
LOOP implements a simple loop
construct, enabling repeated execution of the statement list,
which consists of one or more statements, each terminated by a
semicolon (;) statement delimiter. The
statements within the loop are repeated until the loop is
terminated. Usually, this is accomplished with a
LEAVE statement. Within a stored
function, RETURN can also be
used, which exits the function entirely.
Neglecting to include a loop-termination statement results in an infinite loop.
A LOOP statement can be labeled.
For the rules regarding label use, see
Section 13.6.2, “Statement Labels”.
Example:
CREATE PROCEDURE doiterate(p1 INT)
BEGIN
label1: LOOP
SET p1 = p1 + 1;
IF p1 < 10 THEN
ITERATE label1;
END IF;
LEAVE label1;
END LOOP label1;
SET @x = p1;
END;
[begin_label:] REPEAT
    statement_list
UNTIL search_condition
END REPEAT [end_label]
The statement list within a
REPEAT statement is repeated
until the search_condition expression
is true. Thus, a REPEAT always
enters the loop at least once.
statement_list consists of one or
more statements, each terminated by a semicolon
(;) statement delimiter.
A REPEAT statement can be
labeled. For the rules regarding label use, see
Section 13.6.2, “Statement Labels”.
Example:
mysql> delimiter //
mysql> CREATE PROCEDURE dorepeat(p1 INT)
       BEGIN
         SET @x = 0;
         REPEAT
           SET @x = @x + 1;
         UNTIL @x > p1 END REPEAT;
       END//
Query OK, 0 rows affected (0.00 sec)

mysql> CALL dorepeat(1000)//
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT @x//
+------+
| @x   |
+------+
| 1001 |
+------+
1 row in set (0.00 sec)
RETURN expr
The RETURN statement terminates
execution of a stored function and returns the value
expr to the function caller. There
must be at least one RETURN
statement in a stored function. There may be more than one if
the function has multiple exit points.
This statement is not used in stored procedures, triggers, or
events. The LEAVE statement can
be used to exit a stored program of those types.
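A brief sketch of a stored function with more than one exit point follows (the function name is arbitrary); each code path ends with its own RETURN:
DELIMITER //
CREATE FUNCTION sign_label(n INT) RETURNS VARCHAR(12) DETERMINISTIC
BEGIN
  IF n < 0 THEN
    RETURN 'negative';       -- first exit point
  END IF;
  RETURN 'non-negative';     -- second exit point
END//
DELIMITER ;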
[begin_label:] WHILE search_condition DO
    statement_list
END WHILE [end_label]
The statement list within a WHILE
statement is repeated as long as the
search_condition expression is true.
statement_list consists of one or
more SQL statements, each terminated by a semicolon
(;) statement delimiter.
A WHILE statement can be labeled.
For the rules regarding label use, see
Section 13.6.2, “Statement Labels”.
Example:
CREATE PROCEDURE dowhile()
BEGIN
DECLARE v1 INT DEFAULT 5;
WHILE v1 > 0 DO
...
SET v1 = v1 - 1;
END WHILE;
END;
MySQL supports cursors inside stored programs. The syntax is as in embedded SQL. Cursors have these properties:
Asensitive: The server may or may not make a copy of its result table
Read only: Not updatable
Nonscrollable: Can be traversed only in one direction and cannot skip rows
Cursor declarations must appear before handler declarations and after variable and condition declarations.
Example:
CREATE PROCEDURE curdemo()
BEGIN
DECLARE done INT DEFAULT FALSE;
DECLARE a CHAR(16);
DECLARE b, c INT;
DECLARE cur1 CURSOR FOR SELECT id,data FROM test.t1;
DECLARE cur2 CURSOR FOR SELECT i FROM test.t2;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;
OPEN cur1;
OPEN cur2;
read_loop: LOOP
FETCH cur1 INTO a, b;
FETCH cur2 INTO c;
IF done THEN
LEAVE read_loop;
END IF;
IF b < c THEN
INSERT INTO test.t3 VALUES (a,b);
ELSE
INSERT INTO test.t3 VALUES (a,c);
END IF;
END LOOP;
CLOSE cur1;
CLOSE cur2;
END;
CLOSE cursor_name
This statement closes a previously opened cursor. For an example, see Section 13.6.6, “Cursors”.
An error occurs if the cursor is not open.
If not closed explicitly, a cursor is closed at the end of the
BEGIN ...
END block in which it was declared.
DECLARE cursor_name CURSOR FOR select_statement
This statement declares a cursor and associates it with a
SELECT statement that retrieves
the rows to be traversed by the cursor. To fetch the rows later,
use a FETCH statement. The number
of columns retrieved by the
SELECT statement must match the
number of output variables specified in the
FETCH statement.
The SELECT statement cannot have
an INTO clause.
Cursor declarations must appear before handler declarations and after variable and condition declarations.
A stored program may contain multiple cursor declarations, but each cursor declared in a given block must have a unique name. For an example, see Section 13.6.6, “Cursors”.
For information available through
SHOW statements, it is possible
in many cases to obtain equivalent information by using a cursor
with an INFORMATION_SCHEMA table.
FETCH [[NEXT] FROM] cursor_name INTO var_name [, var_name] ...
This statement fetches the next row for the
SELECT statement associated with
the specified cursor (which must be open), and advances the
cursor pointer. If a row exists, the fetched columns are stored
in the named variables. The number of columns retrieved by the
SELECT statement must match the
number of output variables specified in the
FETCH statement.
If no more rows are available, a No Data condition occurs with
SQLSTATE value '02000'. To detect this
condition, you can set up a handler for it (or for a
NOT FOUND condition). For an example, see
Section 13.6.6, “Cursors”.
Be aware that another operation, such as a
SELECT or another FETCH,
may also cause the handler to execute by raising the same
condition. If it is necessary to distinguish which operation
raised the condition, place the operation within its own
BEGIN ...
END block so that it can be associated with its own
handler.
OPEN cursor_name
This statement opens a previously declared cursor. For an example, see Section 13.6.6, “Cursors”.
Server-side cursors are implemented in the C API using the
mysql_stmt_attr_set() function.
The same implementation is used for cursors in stored routines.
A server-side cursor enables a result set to be generated on the
server side, but not transferred to the client except for those
rows that the client requests. For example, if a client executes
a query but is only interested in the first row, the remaining
rows are not transferred.
In MySQL, a server-side cursor is materialized into an internal
temporary table. Initially, this is a MEMORY
table, but is converted to a MyISAM table
when its size exceeds the minimum value of the
max_heap_table_size and
tmp_table_size system
variables. The same restrictions apply to internal temporary
tables created to hold the result set for a cursor as for other
uses of internal temporary tables. See
Section 8.4.4, “Internal Temporary Table Use in MySQL”. One limitation of
the implementation is that for a large result set, retrieving
its rows through a cursor might be slow.
Cursors are read only; you cannot use a cursor to update rows.
UPDATE WHERE CURRENT OF and DELETE
WHERE CURRENT OF are not implemented, because
updatable cursors are not supported.
Cursors are nonholdable (not held open after a commit).
Cursors are asensitive.
Cursors are nonscrollable.
Cursors are not named. The statement handler acts as the cursor ID.
You can have open only a single cursor per prepared statement. If you need several cursors, you must prepare several statements.
You cannot use a cursor for a statement that generates a result
set if the statement is not supported in prepared mode. This
includes statements such as CHECK
TABLE, HANDLER READ, and
SHOW BINLOG EVENTS.
Conditions may arise during stored program execution that require special handling, such as exiting the current program block or continuing execution. Handlers can be defined for general conditions such as warnings or exceptions, or for specific conditions such as a particular error code. Specific conditions can be assigned names and referred to that way in handlers.
To name a condition, use the
DECLARE ...
CONDITION statement. To declare a handler, use the
DECLARE ...
HANDLER statement. See
Section 13.6.7.1, “DECLARE ... CONDITION Statement”, and
Section 13.6.7.2, “DECLARE ... HANDLER Statement”. For information about how the
server chooses handlers when a condition occurs, see
Section 13.6.7.6, “Scope Rules for Handlers”.
To raise a condition, use the
SIGNAL statement. To modify
condition information within a condition handler, use
RESIGNAL. See
Section 13.6.7.1, “DECLARE ... CONDITION Statement”, and
Section 13.6.7.2, “DECLARE ... HANDLER Statement”.
To retrieve information from the diagnostics area, use the
GET DIAGNOSTICS statement (see
Section 13.6.7.3, “GET DIAGNOSTICS Statement”). For information about the
diagnostics area, see Section 13.6.7.7, “The MySQL Diagnostics Area”.
DECLARE condition_name CONDITION FOR condition_value

condition_value: {
    mysql_error_code
  | SQLSTATE [VALUE] sqlstate_value
}
The DECLARE
... CONDITION statement declares a named error
condition, associating a name with a condition that needs
specific handling. The name can be referred to in a subsequent
DECLARE ...
HANDLER statement (see
Section 13.6.7.2, “DECLARE ... HANDLER Statement”).
Condition declarations must appear before cursor or handler declarations.
The condition_value for
DECLARE ...
CONDITION indicates the specific condition or class of
conditions to associate with the condition name. It can take the
following forms:
mysql_error_code: An integer
literal indicating a MySQL error code.
Do not use MySQL error code 0 because that indicates success rather than an error condition. For a list of MySQL error codes, see Section B.3.1, “Server Error Message Reference”.
SQLSTATE [VALUE] sqlstate_value:
A 5-character string literal indicating an SQLSTATE value.
Do not use SQLSTATE values that begin with
'00' because those indicate success
rather than an error condition. For a list of SQLSTATE
values, see Section B.3.1, “Server Error Message Reference”.
Condition names referred to in
SIGNAL or
RESIGNAL statements must be
associated with SQLSTATE values, not MySQL error codes.
Using names for conditions can help make stored program code clearer. For example, this handler applies to attempts to drop a nonexistent table, but that is apparent only if you know that 1051 is the MySQL error code for “unknown table”:
DECLARE CONTINUE HANDLER FOR 1051
BEGIN
-- body of handler
END;

By declaring a name for the condition, the purpose of the handler is more readily seen:
DECLARE no_such_table CONDITION FOR 1051;
DECLARE CONTINUE HANDLER FOR no_such_table
BEGIN
-- body of handler
END;

Here is a named condition for the same condition, but based on the corresponding SQLSTATE value rather than the MySQL error code:
DECLARE no_such_table CONDITION FOR SQLSTATE '42S02';
DECLARE CONTINUE HANDLER FOR no_such_table
BEGIN
-- body of handler
END;
DECLARE handler_action HANDLER
    FOR condition_value [, condition_value] ...
    statement

handler_action: {
    CONTINUE
  | EXIT
  | UNDO
}

condition_value: {
    mysql_error_code
  | SQLSTATE [VALUE] sqlstate_value
  | condition_name
  | SQLWARNING
  | NOT FOUND
  | SQLEXCEPTION
}
The DECLARE ...
HANDLER statement specifies a handler that deals with
one or more conditions. If one of these conditions occurs, the
specified statement executes.
statement can be a simple statement
such as SET var_name = value, or a compound
statement written using
BEGIN and
END (see Section 13.6.1, “BEGIN ... END Compound Statement”).
Handler declarations must appear after variable or condition declarations.
The handler_action value indicates
what action the handler takes after execution of the handler
statement:
CONTINUE: Execution of the current
program continues.
EXIT: Execution terminates for the
BEGIN ...
END compound statement in which the handler is
declared. This is true even if the condition occurs in an
inner block.
UNDO: Not supported.
The condition_value for
DECLARE ...
HANDLER indicates the specific condition or class of
conditions that activates the handler. It can take the following
forms:
mysql_error_code: An integer
literal indicating a MySQL error code, such as 1051 to
specify “unknown table”:
DECLARE CONTINUE HANDLER FOR 1051
BEGIN
-- body of handler
END;

Do not use MySQL error code 0 because that indicates success rather than an error condition. For a list of MySQL error codes, see Section B.3.1, “Server Error Message Reference”.
SQLSTATE [VALUE] sqlstate_value:
A 5-character string literal indicating an SQLSTATE value,
such as '42S02' to specify “unknown
table”:
DECLARE CONTINUE HANDLER FOR SQLSTATE '42S02'
BEGIN
-- body of handler
END;
Do not use SQLSTATE values that begin with
'00' because those indicate success
rather than an error condition. For a list of SQLSTATE
values, see Section B.3.1, “Server Error Message Reference”.
condition_name: A condition name
previously specified with
DECLARE
... CONDITION. A condition name can be associated
with a MySQL error code or SQLSTATE value. See
Section 13.6.7.1, “DECLARE ... CONDITION Statement”.
SQLWARNING: Shorthand for the class of
SQLSTATE values that begin with '01'.
DECLARE CONTINUE HANDLER FOR SQLWARNING
BEGIN
-- body of handler
END;
NOT FOUND: Shorthand for the class of
SQLSTATE values that begin with '02'.
This is relevant within the context of cursors and is used
to control what happens when a cursor reaches the end of a
data set. If no more rows are available, a No Data condition
occurs with SQLSTATE value '02000'. To
detect this condition, you can set up a handler for it or
for a NOT FOUND condition.
DECLARE CONTINUE HANDLER FOR NOT FOUND
BEGIN
-- body of handler
END;
For another example, see Section 13.6.6, “Cursors”. The
NOT FOUND condition also occurs for
SELECT ... INTO var_list
statements
that retrieve no rows.
SQLEXCEPTION: Shorthand for the class of
SQLSTATE values that do not begin with
'00', '01', or
'02'.
DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
BEGIN
-- body of handler
END;

For information about how the server chooses handlers when a condition occurs, see Section 13.6.7.6, “Scope Rules for Handlers”.
If a condition occurs for which no handler has been declared, the action taken depends on the condition class:
For SQLEXCEPTION conditions, the stored
program terminates at the statement that raised the
condition, as if there were an EXIT
handler. If the program was called by another stored
program, the calling program handles the condition using the
handler selection rules applied to its own handlers.
For SQLWARNING conditions, the program
continues executing, as if there were a
CONTINUE handler.
For NOT FOUND conditions, if the
condition was raised normally, the action is
CONTINUE. If it was raised by
SIGNAL or
RESIGNAL, the action is
EXIT.
The following example uses a handler for SQLSTATE
'23000', which occurs for a duplicate-key error:
mysql> CREATE TABLE test.t (s1 INT, PRIMARY KEY (s1));
Query OK, 0 rows affected (0.00 sec)

mysql> delimiter //
mysql> CREATE PROCEDURE handlerdemo ()
       BEGIN
         DECLARE CONTINUE HANDLER FOR SQLSTATE '23000' SET @x2 = 1;
         SET @x = 1;
         INSERT INTO test.t VALUES (1);
         SET @x = 2;
         INSERT INTO test.t VALUES (1);
         SET @x = 3;
       END;
       //
Query OK, 0 rows affected (0.00 sec)

mysql> CALL handlerdemo()//
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT @x//
+------+
| @x   |
+------+
|    3 |
+------+
1 row in set (0.00 sec)
Notice that @x is 3 after
the procedure executes, which shows that execution continued to
the end of the procedure after the error occurred. If the
DECLARE ...
HANDLER statement had not been present, MySQL would
have taken the default action (EXIT) after
the second INSERT failed due to
the PRIMARY KEY constraint, and
SELECT @x would have returned
2.
To ignore a condition, declare a CONTINUE
handler for it and associate it with an empty block. For
example:
DECLARE CONTINUE HANDLER FOR SQLWARNING BEGIN END;
The scope of a block label does not include the code for
handlers declared within the block. Therefore, the statement
associated with a handler cannot use
ITERATE or
LEAVE to refer to labels for
blocks that enclose the handler declaration. Consider the
following example, where the
REPEAT block has a label of
retry:
CREATE PROCEDURE p ()
BEGIN
DECLARE i INT DEFAULT 3;
retry:
REPEAT
BEGIN
DECLARE CONTINUE HANDLER FOR SQLWARNING
BEGIN
ITERATE retry; # illegal
END;
IF i < 0 THEN
LEAVE retry; # legal
END IF;
SET i = i - 1;
END;
UNTIL FALSE END REPEAT;
END;
The retry label is in scope for the
IF statement within the block. It
is not in scope for the CONTINUE handler, so
the reference there is invalid and results in an error:
ERROR 1308 (42000): LEAVE with no matching label: retry
To avoid references to outer labels in handlers, use one of these strategies:
To leave the block, use an EXIT handler.
If no block cleanup is required, the
BEGIN ...
END handler body can be empty:
DECLARE EXIT HANDLER FOR SQLWARNING BEGIN END;
Otherwise, put the cleanup statements in the handler body:
DECLARE EXIT HANDLER FOR SQLWARNING
BEGIN
block cleanup statements
END;
To continue execution, set a status variable in a
CONTINUE handler that can be checked in
the enclosing block to determine whether the handler was
invoked. The following example uses the variable
done for this purpose:
CREATE PROCEDURE p ()
BEGIN
DECLARE i INT DEFAULT 3;
DECLARE done INT DEFAULT FALSE;
retry:
REPEAT
BEGIN
DECLARE CONTINUE HANDLER FOR SQLWARNING
BEGIN
SET done = TRUE;
END;
IF done OR i < 0 THEN
LEAVE retry;
END IF;
SET i = i - 1;
END;
UNTIL FALSE END REPEAT;
END;
GET [CURRENT | STACKED] DIAGNOSTICS {
statement_information_item
[, statement_information_item] ...
| CONDITION condition_number
condition_information_item
[, condition_information_item] ...
}
statement_information_item:
target = statement_information_item_name
condition_information_item:
target = condition_information_item_name
statement_information_item_name: {
NUMBER
| ROW_COUNT
}
condition_information_item_name: {
CLASS_ORIGIN
| SUBCLASS_ORIGIN
| RETURNED_SQLSTATE
| MESSAGE_TEXT
| MYSQL_ERRNO
| CONSTRAINT_CATALOG
| CONSTRAINT_SCHEMA
| CONSTRAINT_NAME
| CATALOG_NAME
| SCHEMA_NAME
| TABLE_NAME
| COLUMN_NAME
| CURSOR_NAME
}
condition_number, target:
(see following discussion)
SQL statements produce diagnostic information that populates the
diagnostics area. The GET
DIAGNOSTICS statement enables applications to inspect
this information. (You can also use SHOW
WARNINGS or SHOW ERRORS
to see conditions or errors.)
No special privileges are required to execute
GET DIAGNOSTICS.
The keyword CURRENT means to retrieve
information from the current diagnostics area. The keyword
STACKED means to retrieve information from
the second diagnostics area, which is available only if the
current context is a condition handler. If neither keyword is
given, the default is to use the current diagnostics area.
The GET DIAGNOSTICS statement is
typically used in a handler within a stored program. As a
MySQL extension,
GET [CURRENT]
DIAGNOSTICS is permitted outside handler context to
check the execution of any SQL statement. For example, if you
invoke the mysql client program, you can
enter these statements at the prompt:
mysql> DROP TABLE test.no_such_table;
ERROR 1051 (42S02): Unknown table 'test.no_such_table'

mysql> GET DIAGNOSTICS CONDITION 1
         @p1 = RETURNED_SQLSTATE, @p2 = MESSAGE_TEXT;

mysql> SELECT @p1, @p2;
+-------+------------------------------------+
| @p1   | @p2                                |
+-------+------------------------------------+
| 42S02 | Unknown table 'test.no_such_table' |
+-------+------------------------------------+
This extension applies only to the current diagnostics area. It
does not apply to the second diagnostics area because
GET STACKED DIAGNOSTICS is permitted only if
the current context is a condition handler. If that is not the
case, a GET STACKED DIAGNOSTICS when handler not
active error occurs.
For a description of the diagnostics area, see Section 13.6.7.7, “The MySQL Diagnostics Area”. Briefly, it contains two kinds of information:
Statement information, such as the number of conditions that occurred or the affected-rows count.
Condition information, such as the error code and message. If a statement raises multiple conditions, this part of the diagnostics area has a condition area for each one. If a statement raises no conditions, this part of the diagnostics area is empty.
For a statement that produces three conditions, the diagnostics area contains statement and condition information like this:
Statement information:
row count
... other statement information items ...
Condition area list:
Condition area 1:
error code for condition 1
error message for condition 1
... other condition information items ...
Condition area 2:
error code for condition 2:
error message for condition 2
... other condition information items ...
Condition area 3:
error code for condition 3
error message for condition 3
... other condition information items ...
GET DIAGNOSTICS can obtain either
statement or condition information, but not both in the same
statement:
To obtain statement information, retrieve the desired
statement items into target variables. This instance of
GET DIAGNOSTICS assigns the
number of available conditions and the rows-affected count
to the user variables @p1 and
@p2:
GET DIAGNOSTICS @p1 = NUMBER, @p2 = ROW_COUNT;
To obtain condition information, specify the condition
number and retrieve the desired condition items into target
variables. This instance of GET
DIAGNOSTICS assigns the SQLSTATE value and error
message to the user variables @p3 and
@p4:
GET DIAGNOSTICS CONDITION 1 @p3 = RETURNED_SQLSTATE, @p4 = MESSAGE_TEXT;
The retrieval list specifies one or more
target = item_name
assignments, separated by commas. Each assignment names a target
variable and either a
statement_information_item_name or
condition_information_item_name
designator, depending on whether the statement retrieves
statement or condition information.
Valid target designators for storing
item information can be stored procedure or function parameters,
stored program local variables declared with
DECLARE, or user-defined
variables.
Valid condition_number designators
can be stored procedure or function parameters, stored program
local variables declared with
DECLARE, user-defined variables,
system variables, or literals. A character literal may include a
_charset introducer. A warning occurs
if the condition number is not in the range from 1 to the number
of condition areas that have information. In this case, the
warning is added to the diagnostics area without clearing it.
When a condition occurs, MySQL does not populate all condition
items recognized by GET
DIAGNOSTICS. For example:
mysql> GET DIAGNOSTICS CONDITION 1
         @p5 = SCHEMA_NAME, @p6 = TABLE_NAME;

mysql> SELECT @p5, @p6;
+------+------+
| @p5  | @p6  |
+------+------+
|      |      |
+------+------+
In standard SQL, if there are multiple conditions, the first
condition relates to the SQLSTATE value
returned for the previous SQL statement. In MySQL, this is not
guaranteed. To get the main error, you cannot do this:
GET DIAGNOSTICS CONDITION 1 @errno = MYSQL_ERRNO;
Instead, retrieve the condition count first, then use it to specify which condition number to inspect:
GET DIAGNOSTICS @cno = NUMBER;
GET DIAGNOSTICS CONDITION @cno @errno = MYSQL_ERRNO;
For information about permissible statement and condition information items, and which ones are populated when a condition occurs, see Diagnostics Area Information Items.
Here is an example that uses GET
DIAGNOSTICS and an exception handler in stored
procedure context to assess the outcome of an insert operation.
If the insert was successful, the procedure uses
GET DIAGNOSTICS to get the
rows-affected count. This shows that you can use
GET DIAGNOSTICS multiple times to
retrieve information about a statement as long as the current
diagnostics area has not been cleared.
CREATE PROCEDURE do_insert(value INT)
BEGIN
-- Declare variables to hold diagnostics area information
DECLARE code CHAR(5) DEFAULT '00000';
DECLARE msg TEXT;
DECLARE nrows INT;
DECLARE result TEXT;
-- Declare exception handler for failed insert
DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
BEGIN
GET DIAGNOSTICS CONDITION 1
code = RETURNED_SQLSTATE, msg = MESSAGE_TEXT;
END;
-- Perform the insert
INSERT INTO t1 (int_col) VALUES(value);
-- Check whether the insert was successful
IF code = '00000' THEN
GET DIAGNOSTICS nrows = ROW_COUNT;
SET result = CONCAT('insert succeeded, row count = ',nrows);
ELSE
SET result = CONCAT('insert failed, error = ',code,', message = ',msg);
END IF;
-- Say what happened
SELECT result;
END;
Suppose that t1.int_col is an integer column
that is declared as NOT NULL. The procedure
produces these results when invoked to insert
non-NULL and NULL values,
respectively:
mysql> CALL do_insert(1);
+---------------------------------+
| result                          |
+---------------------------------+
| insert succeeded, row count = 1 |
+---------------------------------+

mysql> CALL do_insert(NULL);
+-------------------------------------------------------------------------+
| result                                                                  |
+-------------------------------------------------------------------------+
| insert failed, error = 23000, message = Column 'int_col' cannot be null |
+-------------------------------------------------------------------------+
When a condition handler activates, a push to the diagnostics area stack occurs:
The first (current) diagnostics area becomes the second (stacked) diagnostics area and a new current diagnostics area is created as a copy of it.
GET
[CURRENT] DIAGNOSTICS and
GET STACKED
DIAGNOSTICS can be used within the handler to
access the contents of the current and stacked diagnostics
areas.
Initially, both diagnostics areas return the same result, so it is possible to get information from the current diagnostics area about the condition that activated the handler, as long as you execute no statements within the handler that change its current diagnostics area.
However, statements executing within the handler can modify the current diagnostics area, clearing and setting its contents according to the normal rules (see How the Diagnostics Area is Cleared and Populated).
A more reliable way to obtain information about the
handler-activating condition is to use the stacked
diagnostics area, which cannot be modified by statements
executing within the handler except
RESIGNAL. For information
about when the current diagnostics area is set and cleared,
see Section 13.6.7.7, “The MySQL Diagnostics Area”.
The next example shows how GET STACKED
DIAGNOSTICS can be used within a handler to obtain
information about the handled exception, even after the current
diagnostics area has been modified by handler statements.
Within a stored procedure p(), we attempt to
insert two values into a table that contains a TEXT NOT
NULL column. The first value is a
non-NULL string and the second is
NULL. The column prohibits
NULL values, so the first insert succeeds but
the second causes an exception. The procedure includes an
exception handler that maps attempts to insert
NULL into inserts of the empty string:
DROP TABLE IF EXISTS t1;
CREATE TABLE t1 (c1 TEXT NOT NULL);
DROP PROCEDURE IF EXISTS p;
delimiter //
CREATE PROCEDURE p ()
BEGIN
-- Declare variables to hold diagnostics area information
DECLARE errcount INT;
DECLARE errno INT;
DECLARE msg TEXT;
DECLARE EXIT HANDLER FOR SQLEXCEPTION
BEGIN
-- Here the current DA is nonempty because no prior statements
-- executing within the handler have cleared it
GET CURRENT DIAGNOSTICS CONDITION 1
errno = MYSQL_ERRNO, msg = MESSAGE_TEXT;
SELECT 'current DA before mapped insert' AS op, errno, msg;
GET STACKED DIAGNOSTICS CONDITION 1
errno = MYSQL_ERRNO, msg = MESSAGE_TEXT;
SELECT 'stacked DA before mapped insert' AS op, errno, msg;
-- Map attempted NULL insert to empty string insert
INSERT INTO t1 (c1) VALUES('');
-- Here the current DA should be empty (if the INSERT succeeded),
-- so check whether there are conditions before attempting to
-- obtain condition information
GET CURRENT DIAGNOSTICS errcount = NUMBER;
IF errcount = 0
THEN
SELECT 'mapped insert succeeded, current DA is empty' AS op;
ELSE
GET CURRENT DIAGNOSTICS CONDITION 1
errno = MYSQL_ERRNO, msg = MESSAGE_TEXT;
SELECT 'current DA after mapped insert' AS op, errno, msg;
END IF ;
GET STACKED DIAGNOSTICS CONDITION 1
errno = MYSQL_ERRNO, msg = MESSAGE_TEXT;
SELECT 'stacked DA after mapped insert' AS op, errno, msg;
END;
INSERT INTO t1 (c1) VALUES('string 1');
INSERT INTO t1 (c1) VALUES(NULL);
END;
//
delimiter ;
CALL p();
SELECT * FROM t1;
When the handler activates, a copy of the current diagnostics area is pushed to the diagnostics area stack. The handler first displays the contents of the current and stacked diagnostics areas, which are both the same initially:
+---------------------------------+-------+----------------------------+
| op                              | errno | msg                        |
+---------------------------------+-------+----------------------------+
| current DA before mapped insert |  1048 | Column 'c1' cannot be null |
+---------------------------------+-------+----------------------------+
+---------------------------------+-------+----------------------------+
| op                              | errno | msg                        |
+---------------------------------+-------+----------------------------+
| stacked DA before mapped insert |  1048 | Column 'c1' cannot be null |
+---------------------------------+-------+----------------------------+
Statements executing after the GET
DIAGNOSTICS statements may reset the current diagnostics
area. For example, the handler maps the NULL
insert to an empty-string insert and displays the result. The
new insert succeeds and clears the current diagnostics area, but
the stacked diagnostics area remains unchanged and still
contains information about the condition that activated the
handler:
+----------------------------------------------+
| op                                           |
+----------------------------------------------+
| mapped insert succeeded, current DA is empty |
+----------------------------------------------+
+--------------------------------+-------+----------------------------+
| op                             | errno | msg                        |
+--------------------------------+-------+----------------------------+
| stacked DA after mapped insert |  1048 | Column 'c1' cannot be null |
+--------------------------------+-------+----------------------------+
When the condition handler ends, its current diagnostics area is popped from the stack and the stacked diagnostics area becomes the current diagnostics area in the stored procedure.
After the procedure returns, the table contains two rows. The
empty row results from the attempt to insert
NULL that was mapped to an empty-string
insert:
+----------+
| c1       |
+----------+
| string 1 |
|          |
+----------+
In the preceding example, the first two GET
DIAGNOSTICS statements within the condition handler
that retrieve information from the current and stacked
diagnostics areas return the same values. This will not be the
case if statements that reset the current diagnostics area
execute earlier within the handler. Suppose that
p() is rewritten to place the
DECLARE statements within the
handler definition rather than preceding it:
CREATE PROCEDURE p ()
BEGIN
DECLARE EXIT HANDLER FOR SQLEXCEPTION
BEGIN
-- Declare variables to hold diagnostics area information
DECLARE errcount INT;
DECLARE errno INT;
DECLARE msg TEXT;
GET CURRENT DIAGNOSTICS CONDITION 1
errno = MYSQL_ERRNO, msg = MESSAGE_TEXT;
SELECT 'current DA before mapped insert' AS op, errno, msg;
GET STACKED DIAGNOSTICS CONDITION 1
errno = MYSQL_ERRNO, msg = MESSAGE_TEXT;
SELECT 'stacked DA before mapped insert' AS op, errno, msg;
    ...
In this case, the result is version dependent:
Before MySQL 5.7.2, DECLARE
does not change the current diagnostics area, so the first
two GET DIAGNOSTICS
statements return the same result, just as in the original
version of p().
In MySQL 5.7.2, work was done to ensure that all
nondiagnostic statements populate the diagnostics area, per
the SQL standard. DECLARE is
one of them, so in 5.7.2 and higher,
DECLARE statements executing
at the beginning of the handler clear the current
diagnostics area and the GET
DIAGNOSTICS statements produce different results:
+---------------------------------+-------+------+
| op                              | errno | msg  |
+---------------------------------+-------+------+
| current DA before mapped insert |  NULL | NULL |
+---------------------------------+-------+------+
+---------------------------------+-------+----------------------------+
| op                              | errno | msg                        |
+---------------------------------+-------+----------------------------+
| stacked DA before mapped insert |  1048 | Column 'c1' cannot be null |
+---------------------------------+-------+----------------------------+
To avoid this issue within a condition handler when seeking to obtain information about the condition that activated the handler, be sure to access the stacked diagnostics area, not the current diagnostics area.
RESIGNAL [condition_value]
    [SET signal_information_item
    [, signal_information_item] ...]

condition_value: {
    SQLSTATE [VALUE] sqlstate_value
  | condition_name
}

signal_information_item:
    condition_information_item_name = simple_value_specification

condition_information_item_name: {
    CLASS_ORIGIN
  | SUBCLASS_ORIGIN
  | MESSAGE_TEXT
  | MYSQL_ERRNO
  | CONSTRAINT_CATALOG
  | CONSTRAINT_SCHEMA
  | CONSTRAINT_NAME
  | CATALOG_NAME
  | SCHEMA_NAME
  | TABLE_NAME
  | COLUMN_NAME
  | CURSOR_NAME
}

condition_name, simple_value_specification:
    (see following discussion)
RESIGNAL passes on the error
condition information that is available during execution of a
condition handler within a compound statement inside a stored
procedure or function, trigger, or event.
RESIGNAL may change some or all
information before passing it on.
RESIGNAL is related to
SIGNAL, but instead of
originating a condition as SIGNAL
does, RESIGNAL relays existing
condition information, possibly after modifying it.
RESIGNAL makes it possible to
both handle an error and return the error information.
Otherwise, by executing an SQL statement within the handler,
information that caused the handler's activation is destroyed.
RESIGNAL also can make some
procedures shorter if a given handler can handle part of a
situation, then pass the condition “up the line” to
another handler.
No privileges are required to execute the
RESIGNAL statement.
All forms of RESIGNAL require
that the current context be a condition handler. Otherwise,
RESIGNAL is illegal and a
RESIGNAL when handler not active error
occurs.
To retrieve information from the diagnostics area, use the
GET DIAGNOSTICS statement (see
Section 13.6.7.3, “GET DIAGNOSTICS Statement”). For information about the
diagnostics area, see Section 13.6.7.7, “The MySQL Diagnostics Area”.
For condition_value and
signal_information_item, the
definitions and rules are the same for
RESIGNAL as for
SIGNAL. For example, the
condition_value can be an
SQLSTATE value, and the value can indicate
errors, warnings, or “not found.” For additional
information, see Section 13.6.7.5, “SIGNAL Statement”.
The RESIGNAL statement takes
condition_value and
SET clauses, both of which are optional.
This leads to several possible uses:
RESIGNAL alone.
RESIGNAL with new signal information.
RESIGNAL with a condition value and possibly new signal information.
These use cases all cause changes to the diagnostics and condition areas:
A diagnostics area contains one or more condition areas.
A condition area contains condition information items,
such as the SQLSTATE value,
MYSQL_ERRNO, or
MESSAGE_TEXT.
There is a stack of diagnostics areas. When a handler takes control, it pushes a diagnostics area to the top of the stack, so there are two diagnostics areas during handler execution:
The first (current) diagnostics area, which starts as a copy of the last diagnostics area, but will be overwritten by the first statement in the handler that changes the current diagnostics area.
The last (stacked) diagnostics area, which has the condition areas that were set up before the handler took control.
The maximum number of condition areas in a diagnostics area is
determined by the value of the
max_error_count system
variable. See
Diagnostics Area-Related System Variables.
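For example, to cap the number of condition areas retained for the current session, you might adjust the variable like this (a minimal sketch; the value 5 is arbitrary):
SET SESSION max_error_count = 5;
SELECT @@SESSION.max_error_count;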
A simple RESIGNAL alone means
“pass on the error with no change.” It restores
the last diagnostics area and makes it the current diagnostics
area. That is, it “pops” the diagnostics area
stack.
Within a condition handler that catches a condition, one use
for RESIGNAL alone is to
perform some other actions, and then pass on without change
the original condition information (the information that
existed before entry into the handler).
Example:
DROP TABLE IF EXISTS xx;
delimiter //
CREATE PROCEDURE p ()
BEGIN
DECLARE EXIT HANDLER FOR SQLEXCEPTION
BEGIN
SET @error_count = @error_count + 1;
IF @a = 0 THEN RESIGNAL; END IF;
END;
DROP TABLE xx;
END//
delimiter ;
SET @error_count = 0;
SET @a = 0;
CALL p();
Suppose that the DROP TABLE xx statement
fails. The diagnostics area stack looks like this:
DA 1. ERROR 1051 (42S02): Unknown table 'xx'
Then execution enters the EXIT handler. It
starts by pushing a diagnostics area to the top of the stack,
which now looks like this:
DA 1. ERROR 1051 (42S02): Unknown table 'xx' DA 2. ERROR 1051 (42S02): Unknown table 'xx'
At this point, the contents of the first (current) and second (stacked) diagnostics areas are the same. The first diagnostics area may be modified by statements executing subsequently within the handler.
Usually a procedure statement clears the first diagnostics
area. BEGIN is an exception: it does not
clear the diagnostics area. SET is not an
exception: it clears the area, performs the operation, and produces a
result of “success.” The diagnostics area stack
now looks like this:
DA 1. ERROR 0000 (00000): Successful operation DA 2. ERROR 1051 (42S02): Unknown table 'xx'
At this point, if @a = 0,
RESIGNAL pops the diagnostics
area stack, which now looks like this:
DA 1. ERROR 1051 (42S02): Unknown table 'xx'
And that is what the caller sees.
If @a is not 0, the handler simply ends,
which means that there is no more use for the current
diagnostics area (it has been “handled”), so it
can be thrown away, causing the stacked diagnostics area to
become the current diagnostics area again. The diagnostics
area stack looks like this:
DA 1. ERROR 0000 (00000): Successful operation
The details make it look complex, but the end result is quite useful: Handlers can execute without destroying information about the condition that caused activation of the handler.
RESIGNAL with a
SET clause provides new signal information,
so the statement means “pass on the error with
changes”:
RESIGNAL SET signal_information_item [, signal_information_item] ...;
As with RESIGNAL alone, the
idea is to pop the diagnostics area stack so that the original
information will go out. Unlike
RESIGNAL alone, anything
specified in the SET clause changes.
Example:
DROP TABLE IF EXISTS xx;
delimiter //
CREATE PROCEDURE p ()
BEGIN
DECLARE EXIT HANDLER FOR SQLEXCEPTION
BEGIN
SET @error_count = @error_count + 1;
IF @a = 0 THEN RESIGNAL SET MYSQL_ERRNO = 5; END IF;
END;
DROP TABLE xx;
END//
delimiter ;
SET @error_count = 0;
SET @a = 0;
CALL p();
Remember from the previous discussion that
RESIGNAL alone results in a
diagnostics area stack like this:
DA 1. ERROR 1051 (42S02): Unknown table 'xx'
The RESIGNAL SET MYSQL_ERRNO = 5 statement
results in this stack instead, which is what the caller sees:
DA 1. ERROR 5 (42S02): Unknown table 'xx'
In other words, it changes the error number, and nothing else.
The RESIGNAL statement can
change any or all of the signal information items, making the
first condition area of the diagnostics area look quite
different.
RESIGNAL with a condition value
means “push a condition into the current diagnostics
area.” If the SET clause is present,
it also changes the error information.
RESIGNAL condition_value
    [SET signal_information_item [, signal_information_item] ...];
This form of RESIGNAL restores
the last diagnostics area and makes it the current diagnostics
area. That is, it “pops” the diagnostics area
stack, which is the same as what a simple
RESIGNAL alone would do.
However, it also changes the diagnostics area depending on the
condition value or signal information.
Example:
DROP TABLE IF EXISTS xx;
delimiter //
CREATE PROCEDURE p ()
BEGIN
DECLARE EXIT HANDLER FOR SQLEXCEPTION
BEGIN
SET @error_count = @error_count + 1;
IF @a = 0 THEN RESIGNAL SQLSTATE '45000' SET MYSQL_ERRNO=5; END IF;
END;
DROP TABLE xx;
END//
delimiter ;
SET @error_count = 0;
SET @a = 0;
SET @@max_error_count = 2;
CALL p();
SHOW ERRORS;
This is similar to the previous example, and the effects are
the same, except that if
RESIGNAL happens, the current
condition area looks different at the end. (The reason the
condition adds to rather than replaces the existing condition
is the use of a condition value.)
The RESIGNAL statement includes
a condition value (SQLSTATE '45000'), so it
adds a new condition area, resulting in a diagnostics area
stack that looks like this:
DA 1. (condition 2) ERROR 1051 (42S02): Unknown table 'xx'
(condition 1) ERROR 5 (45000) Unknown table 'xx'
The result of CALL
p() and SHOW ERRORS
for this example is:
mysql> CALL p();
ERROR 5 (45000): Unknown table 'xx'

mysql> SHOW ERRORS;
+-------+------+----------------------------------+
| Level | Code | Message                          |
+-------+------+----------------------------------+
| Error | 1051 | Unknown table 'xx'               |
| Error |    5 | Unknown table 'xx'               |
+-------+------+----------------------------------+
All forms of RESIGNAL require
that the current context be a condition handler. Otherwise,
RESIGNAL is illegal and a
RESIGNAL when handler not active error
occurs. For example:
mysql> CREATE PROCEDURE p () RESIGNAL;
Query OK, 0 rows affected (0.00 sec)

mysql> CALL p();
ERROR 1645 (0K000): RESIGNAL when handler not active
Here is a more difficult example:
delimiter //
CREATE FUNCTION f () RETURNS INT
BEGIN
  RESIGNAL;
  RETURN 5;
END//
CREATE PROCEDURE p ()
BEGIN
  DECLARE EXIT HANDLER FOR SQLEXCEPTION SET @a=f();
  SIGNAL SQLSTATE '55555';
END//
delimiter ;
CALL p();
RESIGNAL occurs within the
stored function f(). Although
f() itself is invoked within the context of
the EXIT handler, execution within
f() has its own context, which is not
handler context. Thus, RESIGNAL within
f() results in a “handler not
active” error.
SIGNAL condition_value
    [SET signal_information_item
    [, signal_information_item] ...]

condition_value: {
    SQLSTATE [VALUE] sqlstate_value
  | condition_name
}

signal_information_item:
    condition_information_item_name = simple_value_specification

condition_information_item_name: {
    CLASS_ORIGIN
  | SUBCLASS_ORIGIN
  | MESSAGE_TEXT
  | MYSQL_ERRNO
  | CONSTRAINT_CATALOG
  | CONSTRAINT_SCHEMA
  | CONSTRAINT_NAME
  | CATALOG_NAME
  | SCHEMA_NAME
  | TABLE_NAME
  | COLUMN_NAME
  | CURSOR_NAME
}

condition_name, simple_value_specification:
    (see following discussion)
SIGNAL is the way to
“return” an error.
SIGNAL provides error information
to a handler, to an outer portion of the application, or to the
client. Also, it provides control over the error's
characteristics (error number, SQLSTATE
value, message). Without SIGNAL,
it is necessary to resort to workarounds such as deliberately
referring to a nonexistent table to cause a routine to return an
error.
No privileges are required to execute the
SIGNAL statement.
To retrieve information from the diagnostics area, use the
GET DIAGNOSTICS statement (see
Section 13.6.7.3, “GET DIAGNOSTICS Statement”). For information about the
diagnostics area, see Section 13.6.7.7, “The MySQL Diagnostics Area”.
The condition_value in a
SIGNAL statement indicates the
error value to be returned. It can be an
SQLSTATE value (a 5-character string
literal) or a condition_name that
refers to a named condition previously defined with
DECLARE ...
CONDITION (see Section 13.6.7.1, “DECLARE ... CONDITION Statement”).
An SQLSTATE value can indicate errors,
warnings, or “not found.” The first two
characters of the value indicate its error class, as discussed
in Signal Condition Information Items. Some
signal values cause statement termination; see
Effect of Signals on Handlers, Cursors, and Statements.
The SQLSTATE value for a
SIGNAL statement should not
start with '00' because such values
indicate success and are not valid for signaling an error.
This is true whether the SQLSTATE value is
specified directly in the
SIGNAL statement or in a named
condition referred to in the statement. If the value is
invalid, a Bad SQLSTATE error occurs.
To signal a generic SQLSTATE value, use
'45000', which means “unhandled
user-defined exception.”
The SIGNAL statement optionally
includes a SET clause that contains
multiple signal items, in a list of
condition_information_item_name =
simple_value_specification
assignments, separated by commas.
Each
condition_information_item_name may
be specified only once in the SET clause.
Otherwise, a Duplicate condition information
item error occurs.
Valid simple_value_specification
designators can be specified using stored procedure or
function parameters, stored program local variables declared
with DECLARE, user-defined
variables, system variables, or literals. A character literal
may include a _charset introducer.
For information about permissible
condition_information_item_name
values, see
Signal Condition Information Items.
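As a brief hedged illustration of the SET clause rules just described:
-- Each item name may appear only once; this statement fails with a
-- "Duplicate condition information item" error:
SIGNAL SQLSTATE '45000'
  SET MESSAGE_TEXT = 'first message', MESSAGE_TEXT = 'second message';

-- Item values may come from user-defined variables; this statement succeeds:
SET @msg = 'An error occurred';
SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = @msg;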
The following procedure signals an error or warning depending
on the value of pval, its input parameter:
CREATE PROCEDURE p (pval INT)
BEGIN
DECLARE specialty CONDITION FOR SQLSTATE '45000';
IF pval = 0 THEN
SIGNAL SQLSTATE '01000';
ELSEIF pval = 1 THEN
SIGNAL SQLSTATE '45000'
SET MESSAGE_TEXT = 'An error occurred';
ELSEIF pval = 2 THEN
SIGNAL specialty
SET MESSAGE_TEXT = 'An error occurred';
ELSE
SIGNAL SQLSTATE '01000'
SET MESSAGE_TEXT = 'A warning occurred', MYSQL_ERRNO = 1000;
SIGNAL SQLSTATE '45000'
SET MESSAGE_TEXT = 'An error occurred', MYSQL_ERRNO = 1001;
END IF;
END;
If pval is 0, p()
signals a warning because SQLSTATE values
that begin with '01' are signals in the
warning class. The warning does not terminate the procedure,
and can be seen with SHOW
WARNINGS after the procedure returns.
If pval is 1, p()
signals an error and sets the MESSAGE_TEXT
condition information item. The error terminates the
procedure, and the text is returned with the error
information.
If pval is 2, the same error is signaled,
although the SQLSTATE value is specified
using a named condition in this case.
If pval is anything else,
p() first signals a warning and sets the
message text and error number condition information items.
This warning does not terminate the procedure, so execution
continues and p() then signals an error.
The error does terminate the procedure. The message text and
error number set by the warning are replaced by the values set
by the error, which are returned with the error information.
SIGNAL is typically used within
stored programs, but as a MySQL extension it is
permitted outside handler context. For example, if you invoke
the mysql client program, you can enter any
of these statements at the prompt:
SIGNAL SQLSTATE '77777';
CREATE TRIGGER t_bi BEFORE INSERT ON t FOR EACH ROW SIGNAL SQLSTATE '77777';
CREATE EVENT e ON SCHEDULE EVERY 1 SECOND DO SIGNAL SQLSTATE '77777';
SIGNAL executes according to
the following rules:
If the SIGNAL statement
indicates a particular SQLSTATE value, that
value is used to signal the condition specified. Example:
CREATE PROCEDURE p (divisor INT)
BEGIN
IF divisor = 0 THEN
SIGNAL SQLSTATE '22012';
END IF;
END;
If the SIGNAL statement uses a
named condition, the condition must be declared in some scope
that applies to the SIGNAL
statement, and must be defined using an
SQLSTATE value, not a MySQL error number.
Example:
CREATE PROCEDURE p (divisor INT)
BEGIN
DECLARE divide_by_zero CONDITION FOR SQLSTATE '22012';
IF divisor = 0 THEN
SIGNAL divide_by_zero;
END IF;
END;
If the named condition does not exist in the scope of the
SIGNAL statement, an
Undefined CONDITION error occurs.
If SIGNAL refers to a named
condition that is defined with a MySQL error number rather
than an SQLSTATE value, a
SIGNAL/RESIGNAL can only use a CONDITION defined with
SQLSTATE error occurs. The following statements
cause that error because the named condition is associated
with a MySQL error number:
DECLARE no_such_table CONDITION FOR 1051;
SIGNAL no_such_table;
If a condition with a given name is declared multiple times in different scopes, the declaration with the most local scope applies. Consider the following procedure:
CREATE PROCEDURE p (divisor INT)
BEGIN
DECLARE my_error CONDITION FOR SQLSTATE '45000';
IF divisor = 0 THEN
BEGIN
DECLARE my_error CONDITION FOR SQLSTATE '22012';
SIGNAL my_error;
END;
END IF;
SIGNAL my_error;
END;
If divisor is 0, the first
SIGNAL statement executes. The
innermost my_error condition declaration
applies, raising SQLSTATE
'22012'.
If divisor is not 0, the second
SIGNAL statement executes. The
outermost my_error condition declaration
applies, raising SQLSTATE
'45000'.
For information about how the server chooses handlers when a condition occurs, see Section 13.6.7.6, “Scope Rules for Handlers”.
Signals can be raised within exception handlers:
CREATE PROCEDURE p ()
BEGIN
DECLARE EXIT HANDLER FOR SQLEXCEPTION
BEGIN
SIGNAL SQLSTATE VALUE '99999'
SET MESSAGE_TEXT = 'An error occurred';
END;
DROP TABLE no_such_table;
END;
CALL p() reaches the
DROP TABLE statement. There is
no table named no_such_table, so the error
handler is activated. The error handler destroys the original
error (“no such table”) and makes a new error
with SQLSTATE '99999'
and message An error occurred.
The following table lists the names of diagnostics area
condition information items that can be set in a
SIGNAL (or
RESIGNAL) statement. All items
are standard SQL except MYSQL_ERRNO, which
is a MySQL extension. For more information about these items
see Section 13.6.7.7, “The MySQL Diagnostics Area”.
Item Name               Definition
---------               ----------
CLASS_ORIGIN            VARCHAR(64)
SUBCLASS_ORIGIN         VARCHAR(64)
CONSTRAINT_CATALOG      VARCHAR(64)
CONSTRAINT_SCHEMA       VARCHAR(64)
CONSTRAINT_NAME         VARCHAR(64)
CATALOG_NAME            VARCHAR(64)
SCHEMA_NAME             VARCHAR(64)
TABLE_NAME              VARCHAR(64)
COLUMN_NAME             VARCHAR(64)
CURSOR_NAME             VARCHAR(64)
MESSAGE_TEXT            VARCHAR(128)
MYSQL_ERRNO             SMALLINT UNSIGNED
The character set for character items is UTF-8.
It is illegal to assign NULL to a condition
information item in a SIGNAL
statement.
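For example, this statement is rejected with an error rather than raising the signal (a minimal sketch):
SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = NULL;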
A SIGNAL statement always
specifies an SQLSTATE value, either
directly, or indirectly by referring to a named condition
defined with an SQLSTATE value. The first
two characters of an SQLSTATE value are its
class, and the class determines the default value for the
condition information items:
Class = '00' (success)
Illegal. SQLSTATE values that begin
with '00' indicate success and are not
valid for SIGNAL.
Class = '01' (warning)
MESSAGE_TEXT = 'Unhandled user-defined warning condition';
MYSQL_ERRNO = ER_SIGNAL_WARN
Class = '02' (not found)
MESSAGE_TEXT = 'Unhandled user-defined not found condition';
MYSQL_ERRNO = ER_SIGNAL_NOT_FOUND
Class > '02' (exception)
MESSAGE_TEXT = 'Unhandled user-defined exception condition';
MYSQL_ERRNO = ER_SIGNAL_EXCEPTION
For legal classes, the other condition information items are set as follows:
CLASS_ORIGIN = SUBCLASS_ORIGIN = '';
CONSTRAINT_CATALOG = CONSTRAINT_SCHEMA = CONSTRAINT_NAME = '';
CATALOG_NAME = SCHEMA_NAME = TABLE_NAME = COLUMN_NAME = '';
CURSOR_NAME = '';
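For example, signaling the generic SQLSTATE '45000' with no SET clause produces the exception-class defaults listed above (a minimal sketch):
SIGNAL SQLSTATE '45000';
-- MESSAGE_TEXT defaults to 'Unhandled user-defined exception condition'
-- MYSQL_ERRNO defaults to ER_SIGNAL_EXCEPTION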
The error values that are accessible after
SIGNAL executes are the
SQLSTATE value raised by the
SIGNAL statement and the
MESSAGE_TEXT and
MYSQL_ERRNO items. These values are
available from the C API:
mysql_sqlstate() returns
the SQLSTATE value.
mysql_errno() returns the
MYSQL_ERRNO value.
mysql_error() returns the
MESSAGE_TEXT value.
At the SQL level, the output from SHOW
WARNINGS and SHOW
ERRORS indicates the MYSQL_ERRNO
and MESSAGE_TEXT values in the
Code and Message
columns.
To retrieve information from the diagnostics area, use the
GET DIAGNOSTICS statement (see
Section 13.6.7.3, “GET DIAGNOSTICS Statement”). For information about the
diagnostics area, see Section 13.6.7.7, “The MySQL Diagnostics Area”.
Signals have different effects on statement execution
depending on the signal class. The class determines how severe
an error is. MySQL ignores the value of the
sql_mode system variable; in
particular, strict SQL mode does not matter. MySQL also
ignores IGNORE: The intent of
SIGNAL is to raise a
user-generated error explicitly, so a signal is never ignored.
In the following descriptions, “unhandled” means
that no handler for the signaled SQLSTATE
value has been defined with
DECLARE ...
HANDLER.
Class = '00' (success)
Illegal. SQLSTATE values that begin
with '00' indicate success and are not
valid for SIGNAL.
Class = '01' (warning)
The value of the
warning_count system
variable goes up. SHOW
WARNINGS shows the signal.
SQLWARNING handlers catch the signal.
Warnings cannot be returned from stored functions because
the RETURN statement that
causes the function to return clears the diagnostics area.
The statement thus clears any warnings that may have been
present there (and resets
warning_count to 0), as shown in the
sketch following this list.
Class = '02' (not found)
NOT FOUND handlers catch the signal.
There is no effect on cursors. If the signal is unhandled
in a stored function, statements end.
Class > '02' (exception)
SQLEXCEPTION handlers catch the signal.
If the signal is unhandled in a stored function,
statements end.
Class = '40'
Treated as an ordinary exception.
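As noted in the warning-class item above, a warning signaled inside a stored function does not reach the caller. A hedged sketch (the function name f_warn is hypothetical):
CREATE FUNCTION f_warn () RETURNS INT DETERMINISTIC
BEGIN
  -- The warning is recorded in the diagnostics area ...
  SIGNAL SQLSTATE '01000' SET MESSAGE_TEXT = 'warning inside function';
  -- ... but RETURN clears the diagnostics area, so the caller never sees it
  RETURN 1;
END;
After SELECT f_warn(), SHOW WARNINGS should display no warning originating inside the function.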
A stored program may include handlers to be invoked when certain conditions occur within the program. The applicability of each handler depends on its location within the program definition and on the condition or conditions that it handles:
A handler declared in a
BEGIN ...
END block is in scope only for the SQL statements
following the handler declarations in the block. If the
handler itself raises a condition, it cannot handle that
condition, nor can any other handlers declared in the block.
In the following example, handlers H1 and
H2 are in scope for conditions raised by
statements stmt1 and
stmt2. But neither
H1 nor H2 are in scope
for conditions raised in the body of H1
or H2.
BEGIN -- outer block
  DECLARE EXIT HANDLER FOR ...;  -- handler H1
  DECLARE EXIT HANDLER FOR ...;  -- handler H2
  stmt1;
  stmt2;
END;
A handler is in scope only for the block in which it is
declared, and cannot be activated for conditions occurring
outside that block. In the following example, handler
H1 is in scope for
stmt1 in the inner block, but not
for stmt2 in the outer block:
BEGIN -- outer block
BEGIN -- inner block
DECLARE EXIT HANDLER FOR ...; -- handler H1
stmt1;
END;
stmt2;
END;
A handler can be specific or general. A specific handler is
for a MySQL error code, SQLSTATE value,
or condition name. A general handler is for a condition in
the SQLWARNING,
SQLEXCEPTION, or NOT
FOUND class. Condition specificity is related to
condition precedence, as described later.
Multiple handlers can be declared in different scopes and with
different specificities. For example, there might be a specific
MySQL error code handler in an outer block, and a general
SQLWARNING handler in an inner block. Or
there might be handlers for a specific MySQL error code and the
general SQLWARNING class in the same block.
Whether a handler is activated depends not only on its own scope
and condition value, but on what other handlers are present.
When a condition occurs in a stored program, the server searches
for applicable handlers in the current scope (current
BEGIN ...
END block). If there are no applicable handlers, the
search continues outward with the handlers in each successive
containing scope (block). When the server finds one or more
applicable handlers at a given scope, it chooses among them
based on condition precedence:
A MySQL error code handler takes precedence over an
SQLSTATE value handler.
An SQLSTATE value handler takes
precedence over general SQLWARNING,
SQLEXCEPTION, or NOT
FOUND handlers.
An SQLEXCEPTION handler takes precedence
over an SQLWARNING handler.
It is possible to have several applicable handlers with the same precedence. For example, a statement could generate multiple warnings with different error codes, for each of which an error-specific handler exists. In this case, the choice of which handler the server activates is nondeterministic, and may change depending on the circumstances under which the condition occurs.
One implication of the handler selection rules is that if multiple applicable handlers occur in different scopes, handlers with the most local scope take precedence over handlers in outer scopes, even over those for more specific conditions.
If there is no appropriate handler when a condition occurs, the action taken depends on the class of the condition:
For SQLEXCEPTION conditions, the stored
program terminates at the statement that raised the
condition, as if there were an EXIT
handler. If the program was called by another stored
program, the calling program handles the condition using the
handler selection rules applied to its own handlers.
For SQLWARNING conditions, the program
continues executing, as if there were a
CONTINUE handler.
For NOT FOUND conditions, if the
condition was raised normally, the action is
CONTINUE. If it was raised by
SIGNAL or
RESIGNAL, the action is
EXIT.
The following examples demonstrate how MySQL applies the handler selection rules.
This procedure contains two handlers, one for the specific
SQLSTATE value ('42S02')
that occurs for attempts to drop a nonexistent table, and one
for the general SQLEXCEPTION class:
CREATE PROCEDURE p1()
BEGIN
DECLARE CONTINUE HANDLER FOR SQLSTATE '42S02'
SELECT 'SQLSTATE handler was activated' AS msg;
DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
SELECT 'SQLEXCEPTION handler was activated' AS msg;
DROP TABLE test.t;
END;
Both handlers are declared in the same block and have the same
scope. However, SQLSTATE handlers take
precedence over SQLEXCEPTION handlers, so if
the table t is nonexistent, the
DROP TABLE statement raises a
condition that activates the SQLSTATE
handler:
mysql> CALL p1();
+--------------------------------+
| msg |
+--------------------------------+
| SQLSTATE handler was activated |
+--------------------------------+
This procedure contains the same two handlers. But this time,
the DROP TABLE statement and
SQLEXCEPTION handler are in an inner block
relative to the SQLSTATE handler:
CREATE PROCEDURE p2()
BEGIN -- outer block
DECLARE CONTINUE HANDLER FOR SQLSTATE '42S02'
SELECT 'SQLSTATE handler was activated' AS msg;
BEGIN -- inner block
DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
SELECT 'SQLEXCEPTION handler was activated' AS msg;
DROP TABLE test.t; -- occurs within inner block
END;
END;
In this case, the handler that is more local to where the
condition occurs takes precedence. The
SQLEXCEPTION handler activates, even though
it is more general than the SQLSTATE handler:
mysql> CALL p2();
+------------------------------------+
| msg |
+------------------------------------+
| SQLEXCEPTION handler was activated |
+------------------------------------+
In this procedure, one of the handlers is declared in a block
inner to the scope of the DROP
TABLE statement:
CREATE PROCEDURE p3()
BEGIN -- outer block
DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
SELECT 'SQLEXCEPTION handler was activated' AS msg;
BEGIN -- inner block
DECLARE CONTINUE HANDLER FOR SQLSTATE '42S02'
SELECT 'SQLSTATE handler was activated' AS msg;
END;
DROP TABLE test.t; -- occurs within outer block
END;
Only the SQLEXCEPTION handler applies because
the other one is not in scope for the condition raised by the
DROP TABLE:
mysql> CALL p3();
+------------------------------------+
| msg |
+------------------------------------+
| SQLEXCEPTION handler was activated |
+------------------------------------+
In this procedure, both handlers are declared in a block inner
to the scope of the DROP TABLE
statement:
CREATE PROCEDURE p4()
BEGIN -- outer block
BEGIN -- inner block
DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
SELECT 'SQLEXCEPTION handler was activated' AS msg;
DECLARE CONTINUE HANDLER FOR SQLSTATE '42S02'
SELECT 'SQLSTATE handler was activated' AS msg;
END;
DROP TABLE test.t; -- occurs within outer block
END;
Neither handler applies because they are not in scope for the
DROP TABLE. The condition raised
by the statement goes unhandled and terminates the procedure
with an error:
mysql> CALL p4();
ERROR 1051 (42S02): Unknown table 'test.t'
SQL statements produce diagnostic information that populates the
diagnostics area. Standard SQL has a diagnostics area stack,
containing a diagnostics area for each nested execution context.
Standard SQL also supports
GET STACKED
DIAGNOSTICS syntax for referring to the second
diagnostics area during condition handler execution.
The following discussion describes the structure of the diagnostics area in MySQL, the information items recognized by MySQL, how statements clear and set the diagnostics area, and how diagnostics areas are pushed to and popped from the stack.
The diagnostics area contains two kinds of information:
Statement information, such as the number of conditions that occurred or the affected-rows count.
Condition information, such as the error code and message. If a statement raises multiple conditions, this part of the diagnostics area has a condition area for each one. If a statement raises no conditions, this part of the diagnostics area is empty.
For a statement that produces three conditions, the diagnostics area contains statement and condition information like this:
Statement information:
row count
... other statement information items ...
Condition area list:
Condition area 1:
error code for condition 1
error message for condition 1
... other condition information items ...
Condition area 2:
error code for condition 2:
error message for condition 2
... other condition information items ...
Condition area 3:
error code for condition 3
error message for condition 3
... other condition information items ...
The diagnostics area contains statement and condition
information items. Numeric items are integers. The character
set for character items is UTF-8. No item can be
NULL. If a statement or condition item is
not set by a statement that populates the diagnostics area,
its value is 0 or the empty string, depending on the item data
type.
The statement information part of the diagnostics area contains these items:
NUMBER: An integer indicating the
number of condition areas that have information.
ROW_COUNT: An integer indicating the
number of rows affected by the statement.
ROW_COUNT has the same value as the
ROW_COUNT() function (see
Section 12.15, “Information Functions”).
The condition information part of the diagnostics area
contains a condition area for each condition. Condition areas
are numbered from 1 to the value of the
NUMBER statement condition item. If
NUMBER is 0, there are no condition areas.
Each condition area contains the items in the following list.
All items are standard SQL except
MYSQL_ERRNO, which is a MySQL extension.
The definitions apply for conditions generated other than by a
signal (that is, by a SIGNAL or
RESIGNAL statement). For
nonsignal conditions, MySQL populates only those condition
items not described as always empty. The effects of signals on
the condition area are described later.
CLASS_ORIGIN: A string containing the
class of the RETURNED_SQLSTATE value.
If the RETURNED_SQLSTATE value begins
with a class value defined in SQL standards document ISO
9075-2 (section 24.1, SQLSTATE),
CLASS_ORIGIN is 'ISO
9075'. Otherwise,
CLASS_ORIGIN is
'MySQL'.
SUBCLASS_ORIGIN: A string containing
the subclass of the RETURNED_SQLSTATE
value. If CLASS_ORIGIN is 'ISO
9075' or RETURNED_SQLSTATE
ends with '000',
SUBCLASS_ORIGIN is 'ISO
9075'. Otherwise,
SUBCLASS_ORIGIN is
'MySQL'.
RETURNED_SQLSTATE: A string that
indicates the SQLSTATE value for the
condition.
MESSAGE_TEXT: A string that indicates
the error message for the condition.
MYSQL_ERRNO: An integer that indicates
the MySQL error code for the condition.
CONSTRAINT_CATALOG,
CONSTRAINT_SCHEMA,
CONSTRAINT_NAME: Strings that indicate
the catalog, schema, and name for a violated constraint.
They are always empty.
CATALOG_NAME,
SCHEMA_NAME,
TABLE_NAME,
COLUMN_NAME: Strings that indicate the
catalog, schema, table, and column related to the
condition. They are always empty.
CURSOR_NAME: A string that indicates
the cursor name. This is always empty.
For the RETURNED_SQLSTATE,
MESSAGE_TEXT, and
MYSQL_ERRNO values for particular errors,
see Section B.3.1, “Server Error Message Reference”.
If a SIGNAL (or
RESIGNAL) statement populates
the diagnostics area, its SET clause can
assign to any condition information item except
RETURNED_SQLSTATE any value that is legal
for the item data type. SIGNAL
also sets the RETURNED_SQLSTATE value, but
not directly in its SET clause. That value
comes from the SIGNAL statement
SQLSTATE argument.
SIGNAL also sets statement
information items. It sets NUMBER to 1. It
sets ROW_COUNT to −1 for errors and 0
otherwise.
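Because GET DIAGNOSTICS is a diagnostic statement and does not clear the diagnostics area, these values can be inspected immediately after a SIGNAL, even at the mysql prompt (a hedged sketch):
SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'example error';
GET DIAGNOSTICS @n = NUMBER, @r = ROW_COUNT;
-- Per the rules above, @n should be 1 and @r should be -1 for an error
SELECT @n, @r;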
Nondiagnostic SQL statements populate the diagnostics area
automatically, and its contents can be set explicitly with the
SIGNAL and
RESIGNAL statements. The
diagnostics area can be examined with GET
DIAGNOSTICS to extract specific items, or with
SHOW WARNINGS or
SHOW ERRORS to see conditions
or errors.
SQL statements clear and set the diagnostics area as follows:
When the server starts executing a statement after parsing it, it clears the diagnostics area for nondiagnostic statements. Diagnostic statements do not clear the diagnostics area. These statements are diagnostic:
GET DIAGNOSTICS
SHOW ERRORS
SHOW WARNINGS
If a statement raises a condition, the diagnostics area is
cleared of conditions that belong to earlier statements.
The exception is that conditions raised by
GET DIAGNOSTICS and
RESIGNAL are added to the
diagnostics area without clearing it.
Thus, even a statement that does not normally clear the diagnostics area when it begins executing clears it if the statement raises a condition.
The following example shows the effect of various statements
on the diagnostics area, using SHOW
WARNINGS to display information about conditions
stored there.
This DROP TABLE statement
clears the diagnostics area and populates it when the
condition occurs:
mysql> DROP TABLE IF EXISTS test.no_such_table;
Query OK, 0 rows affected, 1 warning (0.01 sec)

mysql> SHOW WARNINGS;
+-------+------+------------------------------------+
| Level | Code | Message                            |
+-------+------+------------------------------------+
| Note  | 1051 | Unknown table 'test.no_such_table' |
+-------+------+------------------------------------+
1 row in set (0.00 sec)
This
SET
statement generates an error, so it clears and populates the
diagnostics area:
mysql> SET @x = @@x;
ERROR 1193 (HY000): Unknown system variable 'x'

mysql> SHOW WARNINGS;
+-------+------+-----------------------------+
| Level | Code | Message                     |
+-------+------+-----------------------------+
| Error | 1193 | Unknown system variable 'x' |
+-------+------+-----------------------------+
1 row in set (0.00 sec)
The previous
SET
statement produced a single condition, so 1 is the only valid
condition number for GET
DIAGNOSTICS at this point. The following statement
uses a condition number of 2, which produces a warning that is
added to the diagnostics area without clearing it:
mysql> GET DIAGNOSTICS CONDITION 2 @p = MESSAGE_TEXT;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> SHOW WARNINGS;
+-------+------+-----------------------------+
| Level | Code | Message                     |
+-------+------+-----------------------------+
| Error | 1193 | Unknown system variable 'x' |
| Error | 1753 | Invalid condition number    |
+-------+------+-----------------------------+
2 rows in set (0.00 sec)
Now there are two conditions in the diagnostics area, so the
same GET DIAGNOSTICS statement
succeeds:
mysql> GET DIAGNOSTICS CONDITION 2 @p = MESSAGE_TEXT;
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT @p;
+--------------------------+
| @p                       |
+--------------------------+
| Invalid condition number |
+--------------------------+
1 row in set (0.01 sec)
When a push to the diagnostics area stack occurs, the first (current) diagnostics area becomes the second (stacked) diagnostics area and a new current diagnostics area is created as a copy of it. Diagnostics areas are pushed to and popped from the stack under the following circumstances:
Execution of a stored program
A push occurs before the program executes and a pop occurs
afterward. If the stored program ends while handlers are
executing, there can be more than one diagnostics area to
pop; this occurs due to an exception for which there are
no appropriate handlers or due to
RETURN in the handler.
Any warning or error conditions in the popped diagnostics areas then are added to the current diagnostics area, except that, for triggers, only errors are added. When the stored program ends, the caller sees these conditions in its current diagnostics area.
Execution of a condition handler within a stored program
When a push occurs as a result of condition handler
activation, the stacked diagnostics area is the area that
was current within the stored program prior to the push.
The new now-current diagnostics area is the handler's
current diagnostics area.
GET
[CURRENT] DIAGNOSTICS and
GET
STACKED DIAGNOSTICS can be used within the
handler to access the contents of the current (handler)
and stacked (stored program) diagnostics areas. Initially,
they return the same result, but statements executing
within the handler modify the current diagnostics area,
clearing and setting its contents according to the normal
rules (see How the Diagnostics Area is Cleared and Populated).
The stacked diagnostics area cannot be modified by
statements executing within the handler except
RESIGNAL.
If the handler executes successfully, the current (handler) diagnostics area is popped and the stacked (stored program) diagnostics area again becomes the current diagnostics area. Conditions added to the handler diagnostics area during handler execution are added to the current diagnostics area.
Execution of RESIGNAL
The RESIGNAL statement
passes on the error condition information that is
available during execution of a condition handler within a
compound statement inside a stored program.
RESIGNAL may change some or
all information before passing it on, modifying the
diagnostics stack as described in
Section 13.6.7.4, “RESIGNAL Statement”.
Certain system variables control or are related to some aspects of the diagnostics area:
max_error_count controls
the number of condition areas in the diagnostics area. If
more conditions than this occur, MySQL silently discards
information for the excess conditions. (Conditions added
by RESIGNAL are always
added, with older conditions being discarded as necessary
to make room.)
warning_count indicates
the number of conditions that occurred. This includes
errors, warnings, and notes. Normally,
NUMBER and
warning_count are the
same. However, as the number of conditions generated
exceeds max_error_count,
the value of
warning_count continues
to rise whereas NUMBER remains capped
at max_error_count
because no additional conditions are stored in the
diagnostics area.
error_count indicates the
number of errors that occurred. This value includes
“not found” and exception conditions, but
excludes warnings and notes. Like
warning_count, its value
can exceed
max_error_count.
If the sql_notes system
variable is set to 0, notes are not stored and do not
increment warning_count.
Example: If max_error_count
is 10, the diagnostics area can contain a maximum of 10
condition areas. Suppose that a statement raises 20
conditions, 12 of which are errors. In that case, the
diagnostics area contains the first 10 conditions,
NUMBER is 10,
warning_count is 20, and
error_count is 12.
Changes to the value of
max_error_count have no
effect until the next attempt to modify the diagnostics area.
If the diagnostics area contains 10 condition areas and
max_error_count is set to 5,
that has no immediate effect on the size or content of the
diagnostics area.
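These variables can be inspected directly; for example (a minimal sketch):
SELECT @@max_error_count, @@warning_count, @@error_count;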
If a stored procedure exits with an unhandled exception,
modified values of OUT and
INOUT parameters are not propagated back to
the caller.
If an exception is handled by a CONTINUE or
EXIT handler that contains a
RESIGNAL statement, execution of
RESIGNAL pops the diagnostics
area stack, thus signaling the exception (that is, the
information that existed before entry into the handler). If the
exception is an error, the values of OUT and
INOUT parameters are not propagated back to
the caller.
SIGNAL,
RESIGNAL, and
GET DIAGNOSTICS are not permissible
as prepared statements. For example, this statement is invalid:
PREPARE stmt1 FROM 'SIGNAL SQLSTATE "02000"';
SQLSTATE values in class
'04' are not treated specially. They are
handled the same as other exceptions.
In standard SQL, the first condition relates to the
SQLSTATE value returned for the previous SQL
statement. In MySQL, this is not guaranteed, so to get the main
error, you cannot do this:
GET DIAGNOSTICS CONDITION 1 @errno = MYSQL_ERRNO;
Instead, do this:
GET DIAGNOSTICS @cno = NUMBER;
GET DIAGNOSTICS CONDITION @cno @errno = MYSQL_ERRNO;
MySQL account information is stored in the tables of the
mysql system schema. This database and the
access control system are discussed extensively in
Chapter 5, MySQL Server Administration, which you should consult
for additional details.
Some MySQL releases introduce changes to the grant tables to add new privileges or features. To make sure that you can take advantage of any new capabilities, update your grant tables to the current structure whenever you upgrade MySQL. See Section 2.11, “Upgrading MySQL”.
When the read_only system
variable is enabled, account-management statements require the
CONNECTION_ADMIN privilege (or the
deprecated SUPER privilege), in
addition to any other required privileges. This is because they
modify tables in the mysql system schema.
Account management statements are atomic and crash safe. For more information, see Section 13.1.1, “Atomic Data Definition Statement Support”.
ALTER USER [IF EXISTS]
user [auth_option] [, user [auth_option]] ...
[REQUIRE {NONE | tls_option [[AND] tls_option] ...}]
[WITH resource_option [resource_option] ...]
[password_option | lock_option] ...
[COMMENT 'comment_string' | ATTRIBUTE 'json_object']
ALTER USER [IF EXISTS] USER() user_func_auth_option
ALTER USER [IF EXISTS]
user DEFAULT ROLE
{NONE | ALL | role [, role ] ...}
user:
(see Section 6.2.4, “Specifying Account Names”)
auth_option: {
IDENTIFIED BY 'auth_string'
[REPLACE 'current_auth_string']
[RETAIN CURRENT PASSWORD]
| IDENTIFIED BY RANDOM PASSWORD
[REPLACE 'current_auth_string']
[RETAIN CURRENT PASSWORD]
| IDENTIFIED WITH auth_plugin
| IDENTIFIED WITH auth_plugin BY 'auth_string'
[REPLACE 'current_auth_string']
[RETAIN CURRENT PASSWORD]
| IDENTIFIED WITH auth_plugin BY RANDOM PASSWORD
[REPLACE 'current_auth_string']
[RETAIN CURRENT PASSWORD]
| IDENTIFIED WITH auth_plugin AS 'auth_string'
| DISCARD OLD PASSWORD
}
user_func_auth_option: {
IDENTIFIED BY 'auth_string'
[REPLACE 'current_auth_string']
[RETAIN CURRENT PASSWORD]
| DISCARD OLD PASSWORD
}
tls_option: {
SSL
| X509
| CIPHER 'cipher'
| ISSUER 'issuer'
| SUBJECT 'subject'
}
resource_option: {
MAX_QUERIES_PER_HOUR count
| MAX_UPDATES_PER_HOUR count
| MAX_CONNECTIONS_PER_HOUR count
| MAX_USER_CONNECTIONS count
}
password_option: {
PASSWORD EXPIRE [DEFAULT | NEVER | INTERVAL N DAY]
| PASSWORD HISTORY {DEFAULT | N}
| PASSWORD REUSE INTERVAL {DEFAULT | N DAY}
| PASSWORD REQUIRE CURRENT [DEFAULT | OPTIONAL]
| FAILED_LOGIN_ATTEMPTS N
| PASSWORD_LOCK_TIME {N | UNBOUNDED}
}
lock_option: {
ACCOUNT LOCK
| ACCOUNT UNLOCK
}
The ALTER USER statement modifies
MySQL accounts. It enables authentication, role, SSL/TLS,
resource-limit, and password-management properties to be
modified for existing accounts. It can also be used to lock and
unlock accounts.
In most cases, ALTER USER
requires the global CREATE USER
privilege, or the UPDATE
privilege for the mysql system schema. The
exceptions are:
Any client who connects to the server using a nonanonymous
account can change the password for that account. (In
particular, you can change your own password.) To see which
account the server authenticated you as, invoke the
CURRENT_USER() function:
SELECT CURRENT_USER();
For DEFAULT ROLE syntax,
ALTER USER requires these
privileges:
Setting the default roles for another user requires the
global CREATE USER
privilege, or the UPDATE
privilege for the mysql.default_roles
system table.
Setting the default roles for yourself requires no special privileges, as long as the roles you want as the default have been granted to you.
Statements that modify secondary passwords require these privileges:
The
APPLICATION_PASSWORD_ADMIN
privilege is required to use the RETAIN CURRENT
PASSWORD or DISCARD OLD
PASSWORD clause for ALTER
USER statements that apply to your own
account. The privilege is required to manipulate your
own secondary password because most users require only
one password.
If an account is to be permitted to manipulate secondary
passwords for all accounts, it requires the
CREATE USER privilege
rather than
APPLICATION_PASSWORD_ADMIN.
When the read_only system
variable is enabled, ALTER USER
additionally requires the
CONNECTION_ADMIN privilege (or
the deprecated SUPER privilege).
By default, an error occurs if you try to modify a user that
does not exist. If the IF EXISTS clause is
given, the statement produces a warning for each named user that
does not exist, rather than an error.
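For example, the following statements (using a hypothetical account name) produce a warning rather than an error if the account does not exist:
ALTER USER IF EXISTS 'no_such_user'@'localhost' ACCOUNT LOCK;
SHOW WARNINGS;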
Under some circumstances, ALTER
USER may be recorded in server logs or on the client
side in a history file such as
~/.mysql_history, which means that
cleartext passwords may be read by anyone having read access
to that information. For information about the conditions
under which this occurs for the server logs and how to control
it, see Section 6.1.2.3, “Passwords and Logging”. For similar
information about client-side logging, see
Section 4.5.1.3, “mysql Client Logging”.
There are several aspects to the ALTER
USER statement, described under the following topics:
For each affected account, ALTER
USER modifies the corresponding row in the
mysql.user system table to reflect the
properties specified in the statement. Unspecified properties
retain their current values.
Each account name uses the format described in
Section 6.2.4, “Specifying Account Names”. The host name part of the
account name, if omitted, defaults to '%'.
It is also possible to specify
CURRENT_USER or
CURRENT_USER() to refer to the
account associated with the current session.
For one syntax only, the account may be specified with the
USER() function:
ALTER USER USER() IDENTIFIED BY 'auth_string';
This syntax enables changing your own password without naming
your account literally. (The syntax also supports the
REPLACE, RETAIN CURRENT
PASSWORD, and DISCARD OLD
PASSWORD clauses described at
ALTER USER Authentication Options.)
MySQL 8.0.21 and later supports user comments and user
attributes, as described in Section 13.7.1.3, “CREATE USER Statement”.
These can be modified using ALTER USER
by means of the COMMENT and
ATTRIBUTE options, respectively. You cannot
specify both options in the same ALTER USER
statement; attempting to do so results in a syntax error.
The user comment and user attribute are stored in the
INFORMATION_SCHEMA.USER_ATTRIBUTES
table as a JSON object; the user comment is stored as the
value for a comment key in the ATTRIBUTE
column of this table, as shown later in this discussion. The
COMMENT text can be any arbitrary quoted
text, and replaces any existing user comment. The
ATTRIBUTE value must be the valid string
representation of a JSON object. This is merged with any
existing user attribute as if the
JSON_MERGE_PATCH() function had
been used on the existing user attribute and the new one; for
any keys that are re-used, the new value overwrites the old
one, as shown here:
mysql> SELECT * FROM INFORMATION_SCHEMA.USER_ATTRIBUTES
    -> WHERE USER='bill' AND HOST='localhost';
+------+-----------+----------------+
| USER | HOST      | ATTRIBUTE      |
+------+-----------+----------------+
| bill | localhost | {"foo": "bar"} |
+------+-----------+----------------+
1 row in set (0.11 sec)

mysql> ALTER USER 'bill'@'localhost' ATTRIBUTE '{"baz": "faz", "foo": "moo"}';
Query OK, 0 rows affected (0.22 sec)

mysql> SELECT * FROM INFORMATION_SCHEMA.USER_ATTRIBUTES
    -> WHERE USER='bill' AND HOST='localhost';
+------+-----------+------------------------------+
| USER | HOST      | ATTRIBUTE                    |
+------+-----------+------------------------------+
| bill | localhost | {"baz": "faz", "foo": "moo"} |
+------+-----------+------------------------------+
1 row in set (0.00 sec)
To remove a key and its value from the user attribute, set the
key to JSON null (must be lowercase and
unquoted), like this:
mysql> ALTER USER 'bill'@'localhost' ATTRIBUTE '{"foo": null}';
Query OK, 0 rows affected (0.08 sec)

mysql> SELECT * FROM INFORMATION_SCHEMA.USER_ATTRIBUTES
    -> WHERE USER='bill' AND HOST='localhost';
+------+-----------+----------------+
| USER | HOST      | ATTRIBUTE      |
+------+-----------+----------------+
| bill | localhost | {"baz": "faz"} |
+------+-----------+----------------+
1 row in set (0.00 sec)
To set an existing user comment to an empty string, use
ALTER USER ... COMMENT ''. This leaves an
empty comment value in the
USER_ATTRIBUTES table; to remove
the user comment completely, use ALTER USER ...
ATTRIBUTE ... with the value for the column key set
to JSON null (unquoted, in lower case).
This is illustrated by the following sequence of SQL
statements:
mysql> ALTER USER 'bill'@'localhost' COMMENT 'Something about Bill';
Query OK, 0 rows affected (0.06 sec)

mysql> SELECT * FROM INFORMATION_SCHEMA.USER_ATTRIBUTES
    -> WHERE USER='bill' AND HOST='localhost';
+------+-----------+---------------------------------------------------+
| USER | HOST      | ATTRIBUTE                                         |
+------+-----------+---------------------------------------------------+
| bill | localhost | {"baz": "faz", "comment": "Something about Bill"} |
+------+-----------+---------------------------------------------------+
1 row in set (0.00 sec)

mysql> ALTER USER 'bill'@'localhost' COMMENT '';
Query OK, 0 rows affected (0.09 sec)

mysql> SELECT * FROM INFORMATION_SCHEMA.USER_ATTRIBUTES
    -> WHERE USER='bill' AND HOST='localhost';
+------+-----------+-------------------------------+
| USER | HOST      | ATTRIBUTE                     |
+------+-----------+-------------------------------+
| bill | localhost | {"baz": "faz", "comment": ""} |
+------+-----------+-------------------------------+
1 row in set (0.00 sec)

mysql> ALTER USER 'bill'@'localhost' ATTRIBUTE '{"comment": null}';
Query OK, 0 rows affected (0.07 sec)

mysql> SELECT * FROM INFORMATION_SCHEMA.USER_ATTRIBUTES
    -> WHERE USER='bill' AND HOST='localhost';
+------+-----------+----------------+
| USER | HOST      | ATTRIBUTE      |
+------+-----------+----------------+
| bill | localhost | {"baz": "faz"} |
+------+-----------+----------------+
1 row in set (0.00 sec)
For ALTER USER syntax that
permits an auth_option value to
follow a user value,
auth_option indicates how the
account authenticates by specifying an account authentication
plugin, credentials (for example, a password), or both. Each
auth_option value applies
only to the account named immediately
preceding it.
Following the user specifications,
the statement may include options for SSL/TLS, resource-limit,
password-management, and locking properties. All such options
are global to the statement and apply to
all accounts named in the statement.
Example: Change an account's password and expire it. As a result, the user must connect with the named password and choose a new one at the next connection:
ALTER USER 'jeffrey'@'localhost'
IDENTIFIED BY 'new_password' PASSWORD EXPIRE;
Example: Modify an account to use the
caching_sha2_password authentication plugin
and the given password. Require that a new password be chosen
every 180 days, and enable failed-login tracking, such that
three consecutive incorrect passwords cause temporary account
locking for two days:
ALTER USER 'jeffrey'@'localhost'
IDENTIFIED WITH caching_sha2_password BY 'new_password'
PASSWORD EXPIRE INTERVAL 180 DAY
FAILED_LOGIN_ATTEMPTS 3 PASSWORD_LOCK_TIME 2;
Example: Lock or unlock an account:
ALTER USER 'jeffrey'@'localhost' ACCOUNT LOCK;
ALTER USER 'jeffrey'@'localhost' ACCOUNT UNLOCK;
Example: Require an account to connect using SSL and establish a limit of 20 connections per hour:
ALTER USER 'jeffrey'@'localhost' REQUIRE SSL WITH MAX_CONNECTIONS_PER_HOUR 20;
Example: Alter multiple accounts, specifying some per-account properties and some global properties:
ALTER USER
'jeffrey'@'localhost'
IDENTIFIED BY 'jeffrey_new_password',
'jeanne'@'localhost',
'josh'@'localhost'
IDENTIFIED BY 'josh_new_password'
REPLACE 'josh_current_password'
RETAIN CURRENT PASSWORD
REQUIRE SSL WITH MAX_USER_CONNECTIONS 2
PASSWORD HISTORY 5;
The IDENTIFIED BY value following
jeffrey applies only to its immediately
preceding account, so it changes the password to
'jeffrey_new_password' only for jeffrey. For
jeanne, there is no per-account value (thus
leaving the password unchanged). For josh,
IDENTIFIED BY establishes a new password
('josh_new_password'), REPLACE is specified to
verify that the user issuing the ALTER USER
statement knows the current password
('josh_current_password'), and that current
password is also retained as the account
secondary password. (As a result, josh can
connect with either the primary or secondary password.)
The remaining properties apply globally to all accounts named in the statement, so for both accounts:
Connections are required to use SSL.
The account can be used for a maximum of two simultaneous connections.
Password changes cannot reuse any of the five most recent passwords.
Example: Discard the secondary password for
josh, leaving the account with only its
primary password:
ALTER USER 'josh'@'localhost' DISCARD OLD PASSWORD;
In the absence of a particular type of option, the account remains unchanged in that respect. For example, with no locking option, the locking state of the account is not changed.
An account name may be followed by an
auth_option authentication option
that specifies the account authentication plugin, credentials,
or both. It may also include a password-verification clause
that specifies the account current password to be replaced,
and clauses that manage whether an account has a secondary
password.
Clauses for random password generation, password verification, and secondary passwords apply only to accounts that use an authentication plugin that stores credentials internally to MySQL. For accounts that use a plugin that performs authentication against a credentials system that is external to MySQL, password management must be handled externally against that system as well. For more information about internal credentials storage, see Section 6.2.15, “Password Management”.
auth_plugin names an
authentication plugin. The plugin name can be a quoted
string literal or an unquoted name. Plugin names are
stored in the plugin column of the
mysql.user system table.
For auth_option syntax that
does not specify an authentication plugin, the default
plugin is indicated by the value of the
default_authentication_plugin
system variable. For descriptions of each plugin, see
Section 6.4.1, “Authentication Plugins”.
Credentials that are stored internally are stored in the
mysql.user system table. An
'auth_string' value or RANDOM PASSWORD specifies
account credentials, either as a cleartext (unencrypted)
string or hashed in the format expected by the
authentication plugin associated with the account,
respectively:
For syntax that uses BY
'auth_string',
the string is cleartext and is passed to the
authentication plugin for possible hashing. The result
returned by the plugin is stored in the
mysql.user table. A plugin may use
the value as specified, in which case no hashing
occurs.
For syntax that uses BY RANDOM
PASSWORD, MySQL generates a random password
as cleartext and passes it to the authentication
plugin for possible hashing. The result returned by
the plugin is stored in the
mysql.user table. A plugin may use
the value as specified, in which case no hashing
occurs.
Randomly generated passwords are available as of MySQL 8.0.18 and have the characteristics described in Random Password Generation.
For syntax that uses AS
'auth_string',
the string is assumed to be already in the format the
authentication plugin requires, and is stored as is in
the mysql.user table. If a plugin
requires a hashed value, the value must be already
hashed in a format appropriate for the plugin, or the
value will not be usable by the plugin and correct
authentication of client connections will not occur.
As of MySQL 8.0.17, a hashed string can be either a
string literal or a hexadecimal value. The latter
corresponds to the type of value displayed by
SHOW CREATE USER for
password hashes containing unprintable characters when
the
print_identified_with_as_hex
system variable is enabled.
If an authentication plugin performs no hashing of the
authentication string, the BY
'auth_string' and
AS 'auth_string'
clauses have the same effect: The authentication
string is stored as is in the
mysql.user system table.
The REPLACE
'current_auth_string'
clause performs password verification and is available as
of MySQL 8.0.13. If given:
REPLACE specifies the account
current password to be replaced, as a cleartext
(unencrypted) string.
The clause must be given if password changes for the account are required to specify the current password, as verification that the user attempting to make the change actually knows the current password.
The clause is optional if password changes for the account may but need not specify the current password.
The statement fails if the clause is given but its value does not match the current password, even if the clause is optional.
REPLACE can be specified only when
changing the account password for the current user.
For more information about password verification by specifying the current password, see Section 6.2.15, “Password Management”.
The RETAIN CURRENT PASSWORD and
DISCARD OLD PASSWORD clauses implement
dual-password capability and are available as of MySQL
8.0.14. Both are optional, but if given, have the
following effects:
RETAIN CURRENT PASSWORD retains an
account current password as its secondary password,
replacing any existing secondary password. The new
password becomes the primary password, but clients can
use the account to connect to the server using either
the primary or secondary password. (Exception: If the
new password specified by the
ALTER USER statement is
empty, the secondary password becomes empty as well,
even if RETAIN CURRENT PASSWORD is
given.)
If you specify RETAIN CURRENT
PASSWORD for an account that has an empty
primary password, the statement fails.
If an account has a secondary password and you change
its primary password without specifying
RETAIN CURRENT PASSWORD, the
secondary password remains unchanged.
If you change the authentication plugin assigned to
the account, the secondary password is discarded. If
you change the authentication plugin and also specify
RETAIN CURRENT PASSWORD, the
statement fails.
DISCARD OLD PASSWORD discards the
secondary password, if one exists. The account retains
only its primary password, and clients can use the
account to connect to the server only with the primary
password.
For more information about use of dual passwords, see Section 6.2.15, “Password Management”.
ALTER USER permits these
auth_option syntaxes:
IDENTIFIED BY 'auth_string'
[REPLACE 'current_auth_string']
[RETAIN CURRENT PASSWORD]
Sets the account authentication plugin to the default
plugin, passes the cleartext
'auth_string'
value to the plugin for possible hashing, and stores the
result in the account row in the
mysql.user system table.
The REPLACE clause, if given, specifies
the account current password, as described previously in
this section.
The RETAIN CURRENT PASSWORD clause, if
given, causes the account current password to be retained
as its secondary password, as described previously in this
section.
IDENTIFIED BY RANDOM PASSWORD
[REPLACE 'current_auth_string']
[RETAIN CURRENT PASSWORD]
Sets the account authentication plugin to the default
plugin, generates a random password, passes the cleartext
password value to the plugin for possible hashing, and
stores the result in the account row in the
mysql.user system table. The statement
also returns the cleartext password in a result set to
make it available to the user or application executing the
statement. For details about the result set and
characteristics of randomly generated passwords, see
Random Password Generation.
The REPLACE clause, if given, specifies
the account current password, as described previously in
this section.
The RETAIN CURRENT PASSWORD clause, if
given, causes the account current password to be retained
as its secondary password, as described previously in this
section.
IDENTIFIED WITH
auth_plugin
Sets the account authentication plugin to
auth_plugin, clears the
credentials to the empty string (the credentials are
associated with the old authentication plugin, not the new
one), and stores the result in the account row in the
mysql.user system table.
In addition, the password is marked expired. The user must choose a new one when next connecting.
IDENTIFIED WITH
auth_plugin BY
'auth_string' [REPLACE
'current_auth_string'] [RETAIN
CURRENT PASSWORD]
Sets the account authentication plugin to
auth_plugin, passes the
cleartext
'auth_string'
value to the plugin for possible hashing, and stores the
result in the account row in the
mysql.user system table.
The REPLACE clause, if given, specifies
the account current password, as described previously in
this section.
The RETAIN CURRENT PASSWORD clause, if
given, causes the account current password to be retained
as its secondary password, as described previously in this
section.
IDENTIFIED WITH
auth_plugin BY RANDOM PASSWORD
[REPLACE 'current_auth_string']
[RETAIN CURRENT PASSWORD]
Sets the account authentication plugin to
auth_plugin, generates a random
password, passes the cleartext password value to the
plugin for possible hashing, and stores the result in the
account row in the mysql.user system
table. The statement also returns the cleartext password
in a result set to make it available to the user or
application executing the statement. For details about the
result set and characteristics of randomly generated
passwords, see
Random Password Generation.
The REPLACE clause, if given, specifies
the account current password, as described previously in
this section.
The RETAIN CURRENT PASSWORD clause, if
given, causes the account current password to be retained
as its secondary password, as described previously in this
section.
IDENTIFIED WITH
auth_plugin AS
'auth_string'
Sets the account authentication plugin to
auth_plugin and stores the
'auth_string'
value as is in the mysql.user account
row. If the plugin requires a hashed string, the string is
assumed to be already hashed in the format the plugin
requires.
DISCARD OLD PASSWORD
Discards the account secondary password, if there is one, as described previously in this section.
Example: Specify the password as cleartext; the default plugin is used:
ALTER USER 'jeffrey'@'localhost'
IDENTIFIED BY 'password';
Example: Specify the authentication plugin, along with a cleartext password value:
ALTER USER 'jeffrey'@'localhost'
IDENTIFIED WITH mysql_native_password
BY 'password';
Example: Like the preceding example, but in addition, specify the current password as a cleartext value to satisfy any account requirement that the user making the change knows that password:
ALTER USER 'jeffrey'@'localhost'
IDENTIFIED WITH mysql_native_password
BY 'password'
REPLACE 'current_password';
The preceding statement fails unless the current user is
jeffrey because REPLACE
is permitted only for changes to the current user's password.
Example: Establish a new primary password and retain the existing password as the secondary password:
ALTER USER 'jeffrey'@'localhost'
IDENTIFIED BY 'new_password'
RETAIN CURRENT PASSWORD;
Example: Discard the secondary password, leaving the account with only its primary password:
ALTER USER 'jeffrey'@'localhost' DISCARD OLD PASSWORD;
Example: Specify the authentication plugin, along with a hashed password value:
ALTER USER 'jeffrey'@'localhost'
IDENTIFIED WITH mysql_native_password
AS '*6C8989366EAF75BB670AD8EA7A7FC1176A95CEF4';
For additional information about setting passwords and authentication plugins, see Section 6.2.14, “Assigning Account Passwords”, and Section 6.2.17, “Pluggable Authentication”.
ALTER USER ...
DEFAULT ROLE defines which roles become active when
the user connects to the server and authenticates, or when the
user executes the
SET ROLE
DEFAULT statement during a session.
ALTER USER ...
DEFAULT ROLE is alternative syntax for
SET DEFAULT ROLE (see
Section 13.7.1.9, “SET DEFAULT ROLE Statement”). However,
ALTER USER can set the default
for only a single user, whereas SET
DEFAULT ROLE can set the default for multiple users.
On the other hand, you can specify
CURRENT_USER as the user name for the
ALTER USER statement, whereas
you cannot for SET DEFAULT
ROLE.
Each user account name uses the format described previously.
Each role name uses the format described in Section 6.2.5, “Specifying Role Names”. For example:
ALTER USER 'joe'@'10.0.0.1' DEFAULT ROLE administrator, developer;
The host name part of the role name, if omitted, defaults to
'%'.
The clause following the DEFAULT ROLE
keywords permits these values:
NONE: Set the default to
NONE (no roles).
ALL: Set the default to all roles
granted to the account.
role [, role ] ...: Set the
default to the named roles, which must exist and be
granted to the account at the time
ALTER USER ...
DEFAULT ROLE is executed.
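For example, the following statements (reusing the account from the earlier example) set the default to all granted roles or to no roles, respectively:
ALTER USER 'joe'@'10.0.0.1' DEFAULT ROLE ALL;
ALTER USER 'joe'@'10.0.0.1' DEFAULT ROLE NONE;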
MySQL can check X.509 certificate attributes in addition to the usual authentication that is based on the user name and credentials. For background information on the use of SSL/TLS with MySQL, see Section 6.3, “Using Encrypted Connections”.
To specify SSL/TLS-related options for a MySQL account, use a
REQUIRE clause that specifies one or more
tls_option values.
Order of REQUIRE options does not matter,
but no option can be specified twice. The
AND keyword is optional between
REQUIRE options.
ALTER USER permits these
tls_option values:
NONE
Indicates that all accounts named by the statement have no SSL or X.509 requirements. Unencrypted connections are permitted if the user name and password are valid. Encrypted connections can be used, at the client's option, if the client has the proper certificate and key files.
ALTER USER 'jeffrey'@'localhost' REQUIRE NONE;
Clients attempt to establish a secure connection by
default. For clients that have REQUIRE
NONE, the connection attempt falls back to an
unencrypted connection if a secure connection cannot be
established. To require an encrypted connection, a client
need specify only the
--ssl-mode=REQUIRED
option; the connection attempt fails if a secure
connection cannot be established.
SSL
Tells the server to permit only encrypted connections for all accounts named by the statement.
ALTER USER 'jeffrey'@'localhost' REQUIRE SSL;
Clients attempt to establish a secure connection by
default. For accounts that have REQUIRE
SSL, the connection attempt fails if a secure
connection cannot be established.
X509
For all accounts named by the statement, requires that
clients present a valid certificate, but the exact
certificate, issuer, and subject do not matter. The only
requirement is that it should be possible to verify its
signature with one of the CA certificates. Use of X.509
certificates always implies encryption, so the
SSL option is unnecessary in this case.
ALTER USER 'jeffrey'@'localhost' REQUIRE X509;
For accounts with REQUIRE X509, clients
must specify the --ssl-key
and --ssl-cert options to
connect. (It is recommended but not required that
--ssl-ca also be specified
so that the public certificate provided by the server can
be verified.) This is true for ISSUER
and SUBJECT as well because those
REQUIRE options imply the requirements
of X509.
ISSUER 'issuer'
For all accounts named by the statement, requires that
clients present a valid X.509 certificate issued by CA
'issuer'. If
a client presents a certificate that is valid but has a
different issuer, the server rejects the connection. Use
of X.509 certificates always implies encryption, so the
SSL option is unnecessary in this case.
ALTER USER 'jeffrey'@'localhost'
REQUIRE ISSUER '/C=SE/ST=Stockholm/L=Stockholm/
O=MySQL/CN=CA/emailAddress=ca@example.com';
Because ISSUER implies the requirements
of X509, clients must specify the
--ssl-key and
--ssl-cert options to
connect. (It is recommended but not required that
--ssl-ca also be specified
so that the public certificate provided by the server can
be verified.)
SUBJECT 'subject'
For all accounts named by the statement, requires that
clients present a valid X.509 certificate containing the
subject subject. If a client
presents a certificate that is valid but has a different
subject, the server rejects the connection. Use of X.509
certificates always implies encryption, so the
SSL option is unnecessary in this case.
ALTER USER 'jeffrey'@'localhost'
REQUIRE SUBJECT '/C=SE/ST=Stockholm/L=Stockholm/
O=MySQL demo client certificate/
CN=client/emailAddress=client@example.com';
MySQL does a simple string comparison of the
'subject'
value to the value in the certificate, so lettercase and
component ordering must be given exactly as present in the
certificate.
Because SUBJECT implies the
requirements of X509, clients must
specify the --ssl-key and
--ssl-cert options to
connect. (It is recommended but not required that
--ssl-ca also be specified
so that the public certificate provided by the server can
be verified.)
CIPHER 'cipher'
For all accounts named by the statement, requires a specific cipher method for encrypting connections. This option is needed to ensure that ciphers and key lengths of sufficient strength are used. Encryption can be weak if old algorithms using short encryption keys are used.
ALTER USER 'jeffrey'@'localhost' REQUIRE CIPHER 'EDH-RSA-DES-CBC3-SHA';
The SUBJECT, ISSUER, and
CIPHER options can be combined in the
REQUIRE clause:
ALTER USER 'jeffrey'@'localhost'
REQUIRE SUBJECT '/C=SE/ST=Stockholm/L=Stockholm/
O=MySQL demo client certificate/
CN=client/emailAddress=client@example.com'
AND ISSUER '/C=SE/ST=Stockholm/L=Stockholm/
O=MySQL/CN=CA/emailAddress=ca@example.com'
AND CIPHER 'EDH-RSA-DES-CBC3-SHA';
It is possible to place limits on use of server resources by
an account, as discussed in Section 6.2.20, “Setting Account Resource Limits”.
To do so, use a WITH clause that specifies
one or more resource_option values.
Order of WITH options does not matter,
except that if a given resource limit is specified multiple
times, the last instance takes precedence.
ALTER USER permits these
resource_option values:
MAX_QUERIES_PER_HOUR count,
MAX_UPDATES_PER_HOUR count,
MAX_CONNECTIONS_PER_HOUR count
For all accounts named by the statement, these options
restrict how many queries, updates, and connections to the
server are permitted to each account during any given
one-hour period. If count is
0 (the default), this means that there
is no limitation for the account.
MAX_USER_CONNECTIONS
count
For all accounts named by the statement, restricts the
maximum number of simultaneous connections to the server
by each account. A nonzero
count specifies the limit for
the account explicitly. If
count is 0
(the default), the server determines the number of
simultaneous connections for the account from the global
value of the
max_user_connections
system variable. If
max_user_connections is
also zero, there is no limit for the account.
Example:
ALTER USER 'jeffrey'@'localhost' WITH MAX_QUERIES_PER_HOUR 500 MAX_UPDATES_PER_HOUR 100;
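Similarly, a per-account limit on simultaneous connections can be set or removed (setting it to 0 reverts to the max_user_connections global value); a brief sketch using the same account:
ALTER USER 'jeffrey'@'localhost' WITH MAX_USER_CONNECTIONS 10;
ALTER USER 'jeffrey'@'localhost' WITH MAX_USER_CONNECTIONS 0;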
ALTER USER supports several
password_option values for password
management:
Password expiration options: You can expire an account password manually and establish its password expiration policy. Policy options do not expire the password. Instead, they determine how the server applies automatic expiration to the account based on password age, which is assessed from the date and time of the most recent account password change.
Password reuse options: You can restrict password reuse based on number of password changes, time elapsed, or both.
Password verification-required options: You can indicate whether attempts to change an account password must specify the current password, as verification that the user attempting to make the change actually knows the current password.
Incorrect-password failed-login tracking options: You can cause the server to track failed login attempts and temporarily lock accounts for which too many consecutive incorrect passwords are given. The required number of failures and the lock time are configurable.
This section describes the syntax for password-management options. For information about establishing policy for password management, see Section 6.2.15, “Password Management”.
If multiple password-management options of a given type are
specified, the last one takes precedence. For example,
PASSWORD EXPIRE DEFAULT PASSWORD EXPIRE
NEVER is the same as PASSWORD EXPIRE
NEVER.
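For example, in the following statement only the last expiration option takes effect, so the password never expires:
ALTER USER 'jeffrey'@'localhost' PASSWORD EXPIRE DEFAULT PASSWORD EXPIRE NEVER;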
Except for the options that pertain to failed-login tracking, password-management options apply only to accounts that use an authentication plugin that stores credentials internally to MySQL. For accounts that use a plugin that performs authentication against a credentials system that is external to MySQL, password management must be handled externally against that system as well. For more information about internal credentials storage, see Section 6.2.15, “Password Management”.
A client has an expired password if the account password was expired manually or the password age is considered greater than its permitted lifetime per the automatic expiration policy. In this case, the server either disconnects the client or restricts the operations permitted to it (see Section 6.2.16, “Server Handling of Expired Passwords”). Operations performed by a restricted client result in an error until the user establishes a new account password.
Although it is possible to “reset” an expired password by setting it to its current value, it is preferable, as a matter of good policy, to choose a different password. DBAs can enforce non-reuse by establishing an appropriate password-reuse policy. See Password Reuse Policy.
ALTER USER permits these
password_option values for
controlling password expiration:
PASSWORD EXPIRE
Immediately marks the password expired for all accounts named by the statement.
ALTER USER 'jeffrey'@'localhost' PASSWORD EXPIRE;
PASSWORD EXPIRE DEFAULT
Sets all accounts named by the statement so that the
global expiration policy applies, as specified by the
default_password_lifetime
system variable.
ALTER USER 'jeffrey'@'localhost' PASSWORD EXPIRE DEFAULT;
PASSWORD EXPIRE NEVER
This expiration option overrides the global policy for all accounts named by the statement. For each, it disables password expiration so that the password never expires.
ALTER USER 'jeffrey'@'localhost' PASSWORD EXPIRE NEVER;
PASSWORD EXPIRE INTERVAL
N DAY
This expiration option overrides the global policy for all
accounts named by the statement. For each, it sets the
password lifetime to N days.
The following statement requires the password to be
changed every 180 days:
ALTER USER 'jeffrey'@'localhost' PASSWORD EXPIRE INTERVAL 180 DAY;
ALTER USER permits these
password_option values for
controlling reuse of previous passwords based on required
minimum number of password changes:
PASSWORD HISTORY DEFAULT
Sets all accounts named by the statement so that the
global policy about password history length applies, to
prohibit reuse of passwords before the number of changes
specified by the
password_history system
variable.
ALTER USER 'jeffrey'@'localhost' PASSWORD HISTORY DEFAULT;
PASSWORD HISTORY
N
This history-length option overrides the global policy for
all accounts named by the statement. For each, it sets the
password history length to N
passwords, to prohibit reusing any of the
N most recently chosen
passwords. The following statement prohibits reuse of any
of the previous 6 passwords:
ALTER USER 'jeffrey'@'localhost' PASSWORD HISTORY 6;
ALTER USER permits these
password_option values for
controlling reuse of previous passwords based on time elapsed:
PASSWORD REUSE INTERVAL DEFAULT
Sets all accounts named by the statement so that the
global policy about time elapsed applies, to prohibit
reuse of passwords newer than the number of days specified
by the
password_reuse_interval
system variable.
ALTER USER 'jeffrey'@'localhost' PASSWORD REUSE INTERVAL DEFAULT;
PASSWORD REUSE INTERVAL
N DAY
This time-elapsed option overrides the global policy for
all accounts named by the statement. For each, it sets the
password reuse interval to N
days, to prohibit reuse of passwords newer than that many
days. The following statement prohibits password reuse for
360 days:
ALTER USER 'jeffrey'@'localhost' PASSWORD REUSE INTERVAL 360 DAY;
ALTER USER permits these
password_option values for
controlling whether attempts to change an account password
must specify the current password, as verification that the
user attempting to make the change actually knows the current
password:
PASSWORD REQUIRE CURRENT
This verification option overrides the global policy for all accounts named by the statement. For each, it requires that password changes specify the current password.
ALTER USER 'jeffrey'@'localhost' PASSWORD REQUIRE CURRENT;
PASSWORD REQUIRE CURRENT OPTIONAL
This verification option overrides the global policy for all accounts named by the statement. For each, it does not require that password changes specify the current password. (The current password may but need not be given.)
ALTER USER 'jeffrey'@'localhost' PASSWORD REQUIRE CURRENT OPTIONAL;
PASSWORD REQUIRE CURRENT DEFAULT
Sets all accounts named by the statement so that the
global policy about password verification applies, as
specified by the
password_require_current
system variable.
ALTER USER 'jeffrey'@'localhost' PASSWORD REQUIRE CURRENT DEFAULT;
As of MySQL 8.0.19, ALTER USER
permits these password_option
values for controlling failed-login tracking:
FAILED_LOGIN_ATTEMPTS
N
Whether to track account login attempts that specify an
incorrect password. N must be a
number from 0 to 32767. A value of 0 disables failed-login
tracking. Values greater than 0 indicate how many
consecutive password failures cause temporary account
locking (if PASSWORD_LOCK_TIME is also
nonzero).
PASSWORD_LOCK_TIME {
N
| UNBOUNDED}
How long to lock the account after too many consecutive
login attempts provide an incorrect password.
N must be a number from 0 to
32767, or UNBOUNDED. A value of 0
disables temporary account locking. Values greater than 0
indicate how long to lock the account in days. A value of
UNBOUNDED causes the account locking
duration to be unbounded; once locked, the account remains
in a locked state until unlocked. For information about
the conditions under which unlocking occurs, see
Failed-Login Tracking and Temporary Account Locking.
For failed-login tracking and temporary locking to occur, an
account's FAILED_LOGIN_ATTEMPTS and
PASSWORD_LOCK_TIME options both must be
nonzero. The following statement modifies an account such that
it remains locked for two days after four consecutive password
failures:
ALTER USER 'jeffrey'@'localhost' FAILED_LOGIN_ATTEMPTS 4 PASSWORD_LOCK_TIME 2;
MySQL supports account locking and unlocking using the
ACCOUNT LOCK and ACCOUNT
UNLOCK options, which specify the locking state for
an account. For additional discussion, see
Section 6.2.19, “Account Locking”.
If multiple account-locking options are specified, the last one takes precedence.
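For example, the following statement leaves the account unlocked because the final locking option takes precedence:
ALTER USER 'jeffrey'@'localhost' ACCOUNT LOCK ACCOUNT UNLOCK;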
As of MySQL 8.0.19,
ALTER USER ...
ACCOUNT UNLOCK unlocks any account named by the statement
that is temporarily locked due to too many failed logins. See
Section 6.2.15, “Password Management”.
ALTER USER is written to the
binary log if it succeeds, but not if it fails; in that case,
rollback occurs and no changes are made. A statement written
to the binary log includes all named users. If the IF
EXISTS clause is given, this includes even users
that do not exist and were not altered.
If the original statement changes the credentials for a user, the statement written to the binary log specifies the applicable authentication plugin for that user, determined as follows:
The plugin named in the original statement, if one was specified.
Otherwise, the plugin associated with the user account if the user exists, or the default authentication plugin if the user does not exist. (If the statement written to the binary log must specify a particular authentication plugin for a user, include it in the original statement.)
If the server adds the default authentication plugin for any users in the statement written to the binary log, it writes a warning to the error log naming those users.
If the original statement specifies the
FAILED_LOGIN_ATTEMPTS or
PASSWORD_LOCK_TIME option, the statement
written to the binary log includes the option.
CREATE ROLE [IF NOT EXISTS] role [, role] ...
CREATE ROLE creates one or more
roles, which are named collections of privileges. To use this
statement, you must have the global CREATE
ROLE or CREATE USER
privilege. When the read_only
system variable is enabled, CREATE
ROLE additionally requires the
CONNECTION_ADMIN privilege (or
the deprecated SUPER privilege).
A role when created is locked, has no password, and is assigned
the default authentication plugin. (These role attributes can be
changed later with the ALTER USER
statement, by users who have the global
CREATE USER privilege.)
CREATE ROLE either succeeds for
all named roles or rolls back and has no effect if any error
occurs. By default, an error occurs if you try to create a role
that already exists. If the IF NOT EXISTS
clause is given, the statement produces a warning for each named
role that already exists, rather than an error.
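For example, the following statement (using a hypothetical role name) succeeds with a warning rather than an error if the role already exists:
CREATE ROLE IF NOT EXISTS 'administrator';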
The statement is written to the binary log if it succeeds, but
not if it fails; in that case, rollback occurs and no changes
are made. A statement written to the binary log includes all
named roles. If the IF NOT EXISTS clause is
given, this includes even roles that already exist and were not
created.
Each role name uses the format described in Section 6.2.5, “Specifying Role Names”. For example:
CREATE ROLE 'administrator', 'developer';
CREATE ROLE 'webapp'@'localhost';
The host name part of the role name, if omitted, defaults to
'%'.
For role usage examples, see Section 6.2.10, “Using Roles”.
CREATE USER [IF NOT EXISTS]
user [auth_option] [, user [auth_option]] ...
DEFAULT ROLE role [, role ] ...
[REQUIRE {NONE | tls_option [[AND] tls_option] ...}]
[WITH resource_option [resource_option] ...]
[password_option | lock_option] ...
[COMMENT 'comment_string' | ATTRIBUTE 'json_object']
user:
(see Section 6.2.4, “Specifying Account Names”)
auth_option: {
IDENTIFIED BY 'auth_string'
| IDENTIFIED BY RANDOM PASSWORD
| IDENTIFIED WITH auth_plugin
| IDENTIFIED WITH auth_plugin BY 'auth_string'
| IDENTIFIED WITH auth_plugin BY RANDOM PASSWORD
| IDENTIFIED WITH auth_plugin AS 'auth_string'
}
tls_option: {
SSL
| X509
| CIPHER 'cipher'
| ISSUER 'issuer'
| SUBJECT 'subject'
}
resource_option: {
MAX_QUERIES_PER_HOUR count
| MAX_UPDATES_PER_HOUR count
| MAX_CONNECTIONS_PER_HOUR count
| MAX_USER_CONNECTIONS count
}
password_option: {
PASSWORD EXPIRE [DEFAULT | NEVER | INTERVAL N DAY]
| PASSWORD HISTORY {DEFAULT | N}
| PASSWORD REUSE INTERVAL {DEFAULT | N DAY}
| PASSWORD REQUIRE CURRENT [DEFAULT | OPTIONAL]
| FAILED_LOGIN_ATTEMPTS N
| PASSWORD_LOCK_TIME {N | UNBOUNDED}
}
lock_option: {
ACCOUNT LOCK
| ACCOUNT UNLOCK
}
The CREATE USER statement creates
new MySQL accounts. It enables authentication, role, SSL/TLS,
resource-limit, and password-management properties to be
established for new accounts. It also controls whether accounts
are initially locked or unlocked.
To use CREATE USER, you must have
the global CREATE USER privilege,
or the INSERT privilege for the
mysql system schema. When the
read_only system variable is
enabled, CREATE USER additionally
requires the CONNECTION_ADMIN
privilege (or the deprecated
SUPER privilege).
CREATE USER either succeeds for
all named users or rolls back and has no effect if any error
occurs. By default, an error occurs if you try to create a user
that already exists. If the IF NOT EXISTS
clause is given, the statement produces a warning for each named
user that already exists, rather than an error.
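For example, the following statement produces a warning rather than an error if the account already exists; in that case the existing account is left unchanged:
CREATE USER IF NOT EXISTS 'jeffrey'@'localhost' IDENTIFIED BY 'password';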
Under some circumstances, CREATE
USER may be recorded in server logs or on the client
side in a history file such as
~/.mysql_history, which means that
cleartext passwords may be read by anyone having read access
to that information. For information about the conditions
under which this occurs for the server logs and how to control
it, see Section 6.1.2.3, “Passwords and Logging”. For similar
information about client-side logging, see
Section 4.5.1.3, “mysql Client Logging”.
There are several aspects to the CREATE
USER statement, described under the following topics:
For each account, CREATE USER
creates a new row in the mysql.user system
table. The account row reflects the properties specified in
the statement. Unspecified properties are set to their default
values:
Authentication: The authentication plugin defined by the
default_authentication_plugin
system variable, and empty credentials
Default role: NONE
SSL/TLS: NONE
Resource limits: Unlimited
Password management: PASSWORD EXPIRE DEFAULT
PASSWORD HISTORY DEFAULT PASSWORD REUSE INTERVAL DEFAULT
PASSWORD REQUIRE CURRENT DEFAULT; failed-login
tracking and temporary account locking are disabled
Account locking: ACCOUNT UNLOCK
An account when first created has no privileges and a default
role of NONE. To assign privileges or
roles, use the GRANT statement.
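For example (hypothetical schema and role names, and assuming the role has already been created), privileges and a role might be granted immediately after account creation:
CREATE USER 'jeffrey'@'localhost' IDENTIFIED BY 'password';
GRANT SELECT, INSERT ON world.* TO 'jeffrey'@'localhost';
GRANT 'developer' TO 'jeffrey'@'localhost';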
Each account name uses the format described in Section 6.2.4, “Specifying Account Names”. For example:
CREATE USER 'jeffrey'@'localhost' IDENTIFIED BY 'password';
The host name part of the account name, if omitted, defaults
to '%'.
Each user value naming an account
may be followed by an optional
auth_option value that indicates
how the account authenticates. These values enable account
authentication plugins and credentials (for example, a
password) to be specified. Each
auth_option value applies
only to the account named immediately
preceding it.
Following the user specifications,
the statement may include options for SSL/TLS, resource-limit,
password-management, and locking properties. All such options
are global to the statement and apply to
all accounts named in the statement.
Example: Create an account that uses the default authentication plugin and the given password. Mark the password expired so that the user must choose a new one at the first connection to the server:
CREATE USER 'jeffrey'@'localhost'
IDENTIFIED BY 'new_password' PASSWORD EXPIRE;
Example: Create an account that uses the
caching_sha2_password authentication plugin
and the given password. Require that a new password be chosen
every 180 days, and enable failed-login tracking, such that
three consecutive incorrect passwords cause temporary account
locking for two days:
CREATE USER 'jeffrey'@'localhost'
IDENTIFIED WITH caching_sha2_password BY 'new_password'
PASSWORD EXPIRE INTERVAL 180 DAY
FAILED_LOGIN_ATTEMPTS 3 PASSWORD_LOCK_TIME 2;
Example: Create multiple accounts, specifying some per-account properties and some global properties:
CREATE USER
'jeffrey'@'localhost' IDENTIFIED WITH mysql_native_password
BY 'new_password1',
'jeanne'@'localhost' IDENTIFIED WITH caching_sha2_password
BY 'new_password2'
REQUIRE X509 WITH MAX_QUERIES_PER_HOUR 60
PASSWORD HISTORY 5
ACCOUNT LOCK;
Each auth_option value
(IDENTIFIED WITH ... BY in this case)
applies only to the account named immediately preceding it, so
each account uses the immediately following authentication
plugin and password.
The remaining properties apply globally to all accounts named in the statement, so for both accounts:
Connections must be made using a valid X.509 certificate.
Up to 60 queries per hour are permitted.
Password changes cannot reuse any of the five most recent passwords.
The account is locked initially, so effectively it is a placeholder and cannot be used until an administrator unlocks it.
Beginning with MySQL 8.0.21, you can optionally create a user with a user comment or a user attribute, as described here:
User comment
To set a user comment, add COMMENT
'user_comment' to the
CREATE USER statement, where
user_comment is the text of the
user comment.
Example (omitting any other options):
CREATE USER 'jon'@'localhost' COMMENT 'Some information about Jon';
User attribute
A user attribute is a JSON object made up of one or more
key-value pairs, and is set by including
ATTRIBUTE
'json_object' as part
of CREATE USER.
json_object must be a valid
JSON object.
Example (omitting any other options):
CREATE USER 'jim'@'localhost'
ATTRIBUTE '{"fname": "James", "lname": "Scott", "phone": "123-456-7890"}';
User comments and user attributes are stored together in the
ATTRIBUTE column of the
INFORMATION_SCHEMA.USER_ATTRIBUTES
table. This query displays the row in this table inserted by
the statement just shown for creating the user
jim@localhost:
mysql> SELECT * FROM INFORMATION_SCHEMA.USER_ATTRIBUTES
    -> WHERE USER = 'jim' AND HOST = 'localhost'\G
*************************** 1. row ***************************
     USER: jim
     HOST: localhost
ATTRIBUTE: {"fname": "James", "lname": "Scott", "phone": "123-456-7890"}
1 row in set (0.00 sec)
The COMMENT option in actuality provides a
shortcut for setting a user attribute whose only element has
comment as its key and whose value is the
argument supplied for the option. You can see this by
executing the statement CREATE USER 'jon'@'localhost'
COMMENT 'Some information about Jon', and observing
the row which it inserts into the
USER_ATTRIBUTES table:
mysql> CREATE USER 'jon'@'localhost' COMMENT 'Some information about Jon';
Query OK, 0 rows affected (0.06 sec)

mysql> SELECT * FROM INFORMATION_SCHEMA.USER_ATTRIBUTES
    -> WHERE USER = 'jon' AND HOST = 'localhost';
+------+-----------+-------------------------------------------+
| USER | HOST      | ATTRIBUTE                                 |
+------+-----------+-------------------------------------------+
| jon  | localhost | {"comment": "Some information about Jon"} |
+------+-----------+-------------------------------------------+
1 row in set (0.00 sec)
You cannot use COMMENT and
ATTRIBUTE together in the same
CREATE USER statement; attempting to do so
causes a syntax error. To set a user comment concurrently with
setting a user attribute, use ATTRIBUTE and
include in its argument a value with a
comment key, like this:
mysql> CREATE USER 'bill'@'localhost'
-> ATTRIBUTE '{"fname":"William", "lname":"Schmidt",
-> "comment":"Website developer"}';
Query OK, 0 rows affected (0.16 sec)
Since the content of the ATTRIBUTE row is a
JSON object, you can employ any appropriate MySQL JSON
functions or operators to manipulate it, as shown here:
mysql> SELECT
    ->   USER AS User,
    ->   HOST AS Host,
    ->   CONCAT(ATTRIBUTE->>"$.fname"," ",ATTRIBUTE->>"$.lname") AS 'Full Name',
    ->   ATTRIBUTE->>"$.comment" AS Comment
    -> FROM INFORMATION_SCHEMA.USER_ATTRIBUTES
    -> WHERE USER='bill' AND HOST='localhost';
+------+-----------+-----------------+-------------------+
| User | Host      | Full Name       | Comment           |
+------+-----------+-----------------+-------------------+
| bill | localhost | William Schmidt | Website developer |
+------+-----------+-----------------+-------------------+
1 row in set (0.00 sec)
To set or to make changes in the user comment or user
attribute for an existing user, you can use a
COMMENT or ATTRIBUTE
option with an ALTER USER
statement.
Because the user comment and user attribute are stored
together internally in a single
JSON column, this sets an upper
limit on their maximum combined size; see
JSON Storage Requirements, for more
information.
See also the description of the Information Schema
USER_ATTRIBUTES table for more
information and examples.
An account name may be followed by an
auth_option authentication option
that specifies the account authentication plugin, credentials,
or both.
Clauses for random password generation apply only to accounts that use an authentication plugin that stores credentials internally to MySQL. For accounts that use a plugin that performs authentication against a credentials system that is external to MySQL, password management must be handled externally against that system as well. For more information about internal credentials storage, see Section 6.2.15, “Password Management”.
auth_plugin names an
authentication plugin. The plugin name can be a quoted
string literal or an unquoted name. Plugin names are
stored in the plugin column of the
mysql.user system table.
For auth_option syntax that
does not specify an authentication plugin, the default
plugin is indicated by the value of the
default_authentication_plugin
system variable. For descriptions of each plugin, see
Section 6.4.1, “Authentication Plugins”.
Credentials that are stored internally are stored in the
mysql.user system table. An
'auth_string' value or RANDOM PASSWORD specifies
account credentials, either as a cleartext (unencrypted)
string or hashed in the format expected by the
authentication plugin associated with the account,
respectively:
For syntax that uses BY
'auth_string',
the string is cleartext and is passed to the
authentication plugin for possible hashing. The result
returned by the plugin is stored in the
mysql.user table. A plugin may use
the value as specified, in which case no hashing
occurs.
For syntax that uses BY RANDOM
PASSWORD, MySQL generates a random password
as cleartext and passes it to the authentication
plugin for possible hashing. The result returned by
the plugin is stored in the
mysql.user table. A plugin may use
the value as specified, in which case no hashing
occurs.
Randomly generated passwords are available as of MySQL 8.0.18 and have the characteristics described in Random Password Generation.
For syntax that uses AS
'auth_string',
the string is assumed to be already in the format the
authentication plugin requires, and is stored as is in
the mysql.user table. If a plugin
requires a hashed value, the value must be already
hashed in a format appropriate for the plugin, or the
value will not be usable by the plugin and correct
authentication of client connections will not occur.
As of MySQL 8.0.17, a hashed string can be either a
string literal or a hexadecimal value. The latter
corresponds to the type of value displayed by
SHOW CREATE USER for
password hashes containing unprintable characters when
the
print_identified_with_as_hex
system variable is enabled.
If an authentication plugin performs no hashing of the
authentication string, the BY
'auth_string' and
AS 'auth_string'
clauses have the same effect: The authentication
string is stored as is in the
mysql.user system table.
CREATE USER permits these
auth_option syntaxes:
IDENTIFIED BY 'auth_string'
Sets the account authentication plugin to the default
plugin, passes the cleartext
'auth_string'
value to the plugin for possible hashing, and stores the
result in the account row in the
mysql.user system table.
IDENTIFIED BY RANDOM PASSWORD
Sets the account authentication plugin to the default
plugin, generates a random password, passes the cleartext
password value to the plugin for possible hashing, and
stores the result in the account row in the
mysql.user system table. The statement
also returns the cleartext password in a result set to
make it available to the user or application executing the
statement. For details about the result set and
characteristics of randomly generated passwords, see
Random Password Generation.
IDENTIFIED WITH
auth_plugin
Sets the account authentication plugin to
auth_plugin, clears the
credentials to the empty string, and stores the result in
the account row in the mysql.user
system table.
IDENTIFIED WITH
auth_plugin BY
'auth_string'
Sets the account authentication plugin to
auth_plugin, passes the
cleartext
'auth_string'
value to the plugin for possible hashing, and stores the
result in the account row in the
mysql.user system table.
IDENTIFIED WITH
auth_plugin BY RANDOM
PASSWORD
Sets the account authentication plugin to
auth_plugin, generates a random
password, passes the cleartext password value to the
plugin for possible hashing, and stores the result in the
account row in the mysql.user system
table. The statement also returns the cleartext password
in a result set to make it available to the user or
application executing the statement. For details about the
result set and characteristics of randomly generated
passwords, see
Random Password Generation.
IDENTIFIED WITH
auth_plugin AS
'auth_string'
Sets the account authentication plugin to
auth_plugin and stores the
'auth_string'
value as is in the mysql.user account
row. If the plugin requires a hashed string, the string is
assumed to be already hashed in the format the plugin
requires.
Example: Specify the password as cleartext; the default plugin is used:
CREATE USER 'jeffrey'@'localhost'
IDENTIFIED BY 'password';
Example: Specify the authentication plugin, along with a cleartext password value:
CREATE USER 'jeffrey'@'localhost'
IDENTIFIED WITH mysql_native_password BY 'password';
In each case, the password value stored in the account row is
the cleartext value 'password' after
it has been hashed by the authentication plugin associated
with the account.
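Example: Have MySQL generate a random password. This is a minimal
sketch; the account name is arbitrary and the output shown is
illustrative only, since the actual password value is random:
CREATE USER 'jeffrey'@'localhost' IDENTIFIED BY RANDOM PASSWORD;
+---------+-----------+--------------------+
| user    | host      | generated password |
+---------+-----------+--------------------+
| jeffrey | localhost | <random value>     |
+---------+-----------+--------------------+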
For additional information about setting passwords and authentication plugins, see Section 6.2.14, “Assigning Account Passwords”, and Section 6.2.17, “Pluggable Authentication”.
The DEFAULT ROLE clause defines which roles
become active when the user connects to the server and
authenticates, or when the user executes the
SET ROLE
DEFAULT statement during a session.
Each role name uses the format described in Section 6.2.5, “Specifying Role Names”. For example:
CREATE USER 'joe'@'10.0.0.1' DEFAULT ROLE administrator, developer;
The host name part of the role name, if omitted, defaults to
'%'.
The DEFAULT ROLE clause permits a list of
one or more comma-separated role names. These roles need not
exist at the time CREATE USER
is executed.
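For example, the default roles can be created and granted after
the account itself. The following is a minimal sketch (the role
and account names are arbitrary); the defaults take effect once
the roles exist and have been granted to the account:
CREATE USER 'joe'@'10.0.0.1' DEFAULT ROLE administrator, developer;
CREATE ROLE administrator, developer;
GRANT administrator, developer TO 'joe'@'10.0.0.1';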
MySQL can check X.509 certificate attributes in addition to the usual authentication that is based on the user name and credentials. For background information on the use of SSL/TLS with MySQL, see Section 6.3, “Using Encrypted Connections”.
To specify SSL/TLS-related options for a MySQL account, use a
REQUIRE clause that specifies one or more
tls_option values.
Order of REQUIRE options does not matter,
but no option can be specified twice. The
AND keyword is optional between
REQUIRE options.
CREATE USER permits these
tls_option values:
NONE
Indicates that all accounts named by the statement have no SSL or X.509 requirements. Unencrypted connections are permitted if the user name and password are valid. Encrypted connections can be used, at the client's option, if the client has the proper certificate and key files.
CREATE USER 'jeffrey'@'localhost' REQUIRE NONE;
Clients attempt to establish a secure connection by
default. For clients that have REQUIRE
NONE, the connection attempt falls back to an
unencrypted connection if a secure connection cannot be
established. To require an encrypted connection, a client
need specify only the
--ssl-mode=REQUIRED
option; the connection attempt fails if a secure
connection cannot be established.
NONE is the default if no SSL-related
REQUIRE options are specified.
SSL
Tells the server to permit only encrypted connections for all accounts named by the statement.
CREATE USER 'jeffrey'@'localhost' REQUIRE SSL;
Clients attempt to establish a secure connection by
default. For accounts that have REQUIRE
SSL, the connection attempt fails if a secure
connection cannot be established.
X509
For all accounts named by the statement, requires that
clients present a valid certificate, but the exact
certificate, issuer, and subject do not matter. The only
requirement is that it should be possible to verify its
signature with one of the CA certificates. Use of X.509
certificates always implies encryption, so the
SSL option is unnecessary in this case.
CREATE USER 'jeffrey'@'localhost' REQUIRE X509;
For accounts with REQUIRE X509, clients
must specify the --ssl-key
and --ssl-cert options to
connect. (It is recommended but not required that
--ssl-ca also be specified
so that the public certificate provided by the server can
be verified.) This is true for ISSUER
and SUBJECT as well because those
REQUIRE options imply the requirements
of X509.
ISSUER 'issuer'
For all accounts named by the statement, requires that
clients present a valid X.509 certificate issued by CA
'issuer'. If
a client presents a certificate that is valid but has a
different issuer, the server rejects the connection. Use
of X.509 certificates always implies encryption, so the
SSL option is unnecessary in this case.
CREATE USER 'jeffrey'@'localhost'
REQUIRE ISSUER '/C=SE/ST=Stockholm/L=Stockholm/
O=MySQL/CN=CA/emailAddress=ca@example.com';
Because ISSUER implies the requirements
of X509, clients must specify the
--ssl-key and
--ssl-cert options to
connect. (It is recommended but not required that
--ssl-ca also be specified
so that the public certificate provided by the server can
be verified.)
SUBJECT 'subject'
For all accounts named by the statement, requires that
clients present a valid X.509 certificate containing the
subject subject. If a client
presents a certificate that is valid but has a different
subject, the server rejects the connection. Use of X.509
certificates always implies encryption, so the
SSL option is unnecessary in this case.
CREATE USER 'jeffrey'@'localhost'
REQUIRE SUBJECT '/C=SE/ST=Stockholm/L=Stockholm/
O=MySQL demo client certificate/
CN=client/emailAddress=client@example.com';
MySQL does a simple string comparison of the
'subject'
value to the value in the certificate, so lettercase and
component ordering must be given exactly as present in the
certificate.
Because SUBJECT implies the
requirements of X509, clients must
specify the --ssl-key and
--ssl-cert options to
connect. (It is recommended but not required that
--ssl-ca also be specified
so that the public certificate provided by the server can
be verified.)
CIPHER 'cipher'
For all accounts named by the statement, requires a specific cipher method for encrypting connections. This option is needed to ensure that ciphers and key lengths of sufficient strength are used. Encryption can be weak if old algorithms using short encryption keys are used.
CREATE USER 'jeffrey'@'localhost' REQUIRE CIPHER 'EDH-RSA-DES-CBC3-SHA';
The SUBJECT, ISSUER, and
CIPHER options can be combined in the
REQUIRE clause:
CREATE USER 'jeffrey'@'localhost'
REQUIRE SUBJECT '/C=SE/ST=Stockholm/L=Stockholm/
O=MySQL demo client certificate/
CN=client/emailAddress=client@example.com'
AND ISSUER '/C=SE/ST=Stockholm/L=Stockholm/
O=MySQL/CN=CA/emailAddress=ca@example.com'
AND CIPHER 'EDH-RSA-DES-CBC3-SHA';
It is possible to place limits on use of server resources by
an account, as discussed in Section 6.2.20, “Setting Account Resource Limits”.
To do so, use a WITH clause that specifies
one or more resource_option values.
Order of WITH options does not matter,
except that if a given resource limit is specified multiple
times, the last instance takes precedence.
CREATE USER permits these
resource_option values:
MAX_QUERIES_PER_HOUR count,
MAX_UPDATES_PER_HOUR count,
MAX_CONNECTIONS_PER_HOUR count
For all accounts named by the statement, these options
restrict how many queries, updates, and connections to the
server are permitted to each account during any given
one-hour period. If count is
0 (the default), this means that there
is no limitation for the account.
MAX_USER_CONNECTIONS
count
For all accounts named by the statement, restricts the
maximum number of simultaneous connections to the server
by each account. A nonzero
count specifies the limit for
the account explicitly. If
count is 0
(the default), the server determines the number of
simultaneous connections for the account from the global
value of the
max_user_connections
system variable. If
max_user_connections is
also zero, there is no limit for the account.
Example:
CREATE USER 'jeffrey'@'localhost' WITH MAX_QUERIES_PER_HOUR 500 MAX_UPDATES_PER_HOUR 100;
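Resource limits of different types can be combined in the same
WITH clause. A minimal sketch (the limit values are arbitrary):
CREATE USER 'jeffrey'@'localhost'
WITH MAX_QUERIES_PER_HOUR 500
     MAX_CONNECTIONS_PER_HOUR 20
     MAX_USER_CONNECTIONS 2;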
CREATE USER supports several
password_option values for password
management:
Password expiration options: You can expire an account password manually and establish its password expiration policy. Policy options do not expire the password. Instead, they determine how the server applies automatic expiration to the account based on password age, which is assessed from the date and time of the most recent account password change.
Password reuse options: You can restrict password reuse based on number of password changes, time elapsed, or both.
Password verification-required options: You can indicate whether attempts to change an account password must specify the current password, as verification that the user attempting to make the change actually knows the current password.
Incorrect-password failed-login tracking options: You can cause the server to track failed login attempts and temporarily lock accounts for which too many consecutive incorrect passwords are given. The required number of failures and the lock time are configurable.
This section describes the syntax for password-management options. For information about establishing policy for password management, see Section 6.2.15, “Password Management”.
If multiple password-management options of a given type are
specified, the last one takes precedence. For example,
PASSWORD EXPIRE DEFAULT PASSWORD EXPIRE
NEVER is the same as PASSWORD EXPIRE
NEVER.
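As a concrete sketch (the account name is arbitrary), the first
statement here has the same effect as the second:
CREATE USER 'u1'@'localhost' PASSWORD EXPIRE DEFAULT PASSWORD EXPIRE NEVER;
CREATE USER 'u1'@'localhost' PASSWORD EXPIRE NEVER;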
Except for the options that pertain to failed-login tracking, password-management options apply only to accounts that use an authentication plugin that stores credentials internally to MySQL. For accounts that use a plugin that performs authentication against a credentials system that is external to MySQL, password management must be handled externally against that system as well. For more information about internal credentials storage, see Section 6.2.15, “Password Management”.
A client has an expired password if the account password was expired manually or the password age is considered greater than its permitted lifetime per the automatic expiration policy. In this case, the server either disconnects the client or restricts the operations permitted to it (see Section 6.2.16, “Server Handling of Expired Passwords”). Operations performed by a restricted client result in an error until the user establishes a new account password.
CREATE USER permits these
password_option values for
controlling password expiration:
PASSWORD EXPIRE
Immediately marks the password expired for all accounts named by the statement.
CREATE USER 'jeffrey'@'localhost' PASSWORD EXPIRE;
PASSWORD EXPIRE DEFAULT
Sets all accounts named by the statement so that the
global expiration policy applies, as specified by the
default_password_lifetime
system variable.
CREATE USER 'jeffrey'@'localhost' PASSWORD EXPIRE DEFAULT;
PASSWORD EXPIRE NEVER
This expiration option overrides the global policy for all accounts named by the statement. For each, it disables password expiration so that the password never expires.
CREATE USER 'jeffrey'@'localhost' PASSWORD EXPIRE NEVER;
PASSWORD EXPIRE INTERVAL
N DAY
This expiration option overrides the global policy for all
accounts named by the statement. For each, it sets the
password lifetime to N days.
The following statement requires the password to be
changed every 180 days:
CREATE USER 'jeffrey'@'localhost' PASSWORD EXPIRE INTERVAL 180 DAY;
CREATE USER permits these
password_option values for
controlling reuse of previous passwords based on required
minimum number of password changes:
PASSWORD HISTORY DEFAULT
Sets all accounts named by the statement so that the
global policy about password history length applies, to
prohibit reuse of passwords before the number of changes
specified by the
password_history system
variable.
CREATE USER 'jeffrey'@'localhost' PASSWORD HISTORY DEFAULT;
PASSWORD HISTORY
N
This history-length option overrides the global policy for
all accounts named by the statement. For each, it sets the
password history length to N
passwords, to prohibit reusing any of the
N most recently chosen
passwords. The following statement prohibits reuse of any
of the previous 6 passwords:
CREATE USER 'jeffrey'@'localhost' PASSWORD HISTORY 6;
CREATE USER permits these
password_option values for
controlling reuse of previous passwords based on time elapsed:
PASSWORD REUSE INTERVAL DEFAULT
Sets all accounts named by the statement so that the
global policy about time elapsed applies, to prohibit
reuse of passwords newer than the number of days specified
by the
password_reuse_interval
system variable.
CREATE USER 'jeffrey'@'localhost' PASSWORD REUSE INTERVAL DEFAULT;
PASSWORD REUSE INTERVAL
N DAY
This time-elapsed option overrides the global policy for
all accounts named by the statement. For each, it sets the
password reuse interval to N
days, to prohibit reuse of passwords newer than that many
days. The following statement prohibits password reuse for
360 days:
CREATE USER 'jeffrey'@'localhost' PASSWORD REUSE INTERVAL 360 DAY;
CREATE USER permits these
password_option values for
controlling whether attempts to change an account password
must specify the current password, as verification that the
user attempting to make the change actually knows the current
password:
PASSWORD REQUIRE CURRENT
This verification option overrides the global policy for all accounts named by the statement. For each, it requires that password changes specify the current password.
CREATE USER 'jeffrey'@'localhost' PASSWORD REQUIRE CURRENT;
PASSWORD REQUIRE CURRENT OPTIONAL
This verification option overrides the global policy for all accounts named by the statement. For each, it does not require that password changes specify the current password. (The current password may but need not be given.)
CREATE USER 'jeffrey'@'localhost' PASSWORD REQUIRE CURRENT OPTIONAL;
PASSWORD REQUIRE CURRENT DEFAULT
Sets all accounts named by the statement so that the
global policy about password verification applies, as
specified by the
password_require_current
system variable.
CREATE USER 'jeffrey'@'localhost' PASSWORD REQUIRE CURRENT DEFAULT;
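Password-management options of different types can be combined in
a single statement. A minimal sketch (the account name and values
are arbitrary) that sets an expiration interval, a history length,
and a reuse interval together:
CREATE USER 'jeffrey'@'localhost'
PASSWORD EXPIRE INTERVAL 180 DAY
PASSWORD HISTORY 6
PASSWORD REUSE INTERVAL 360 DAY;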
As of MySQL 8.0.19, CREATE USER
permits these password_option
values for controlling failed-login tracking:
FAILED_LOGIN_ATTEMPTS
N
Whether to track account login attempts that specify an
incorrect password. N must be a
number from 0 to 32767. A value of 0 disables failed-login
tracking. Values greater than 0 indicate how many
consecutive password failures cause temporary account
locking (if PASSWORD_LOCK_TIME is also
nonzero).
PASSWORD_LOCK_TIME {
N
| UNBOUNDED}
How long to lock the account after too many consecutive
login attempts provide an incorrect password.
N must be a number from 0 to
32767, or UNBOUNDED. A value of 0
disables temporary account locking. Values greater than 0
indicate how long to lock the account in days. A value of
UNBOUNDED causes the account locking
duration to be unbounded; once locked, the account remains
in a locked state until unlocked. For information about
the conditions under which unlocking occurs, see
Failed-Login Tracking and Temporary Account Locking.
For failed-login tracking and temporary locking to occur, an
account's FAILED_LOGIN_ATTEMPTS and
PASSWORD_LOCK_TIME options both must be
nonzero. The following statement creates an account that
remains locked for two days after four consecutive password
failures:
CREATE USER 'jeffrey'@'localhost' FAILED_LOGIN_ATTEMPTS 4 PASSWORD_LOCK_TIME 2;
MySQL supports account locking and unlocking using the
ACCOUNT LOCK and ACCOUNT
UNLOCK options, which specify the locking state for
an account. For additional discussion, see
Section 6.2.19, “Account Locking”.
If multiple account-locking options are specified, the last one takes precedence.
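For example, the following statement (a minimal sketch; the
account name is arbitrary) creates an account in a locked state:
CREATE USER 'jeffrey'@'localhost' ACCOUNT LOCK;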
CREATE USER is written to the
binary log if it succeeds, but not if it fails; in that case,
rollback occurs and no changes are made. A statement written
to the binary log includes all named users. If the IF
NOT EXISTS clause is given, this includes even users
that already exist and were not created.
The statement written to the binary log specifies an authentication plugin for each user, determined as follows:
The plugin named in the original statement, if one was specified.
Otherwise, the default authentication plugin. In
particular, if a user u1 already exists
and uses a nondefault authentication plugin, the statement
written to the binary log for CREATE USER IF NOT
EXISTS u1 names the default authentication
plugin. (If the statement written to the binary log must
specify a nondefault authentication plugin for a user,
include it in the original statement.)
If the server adds the default authentication plugin for any nonexisting users in the statement written to the binary log, it writes a warning to the error log naming those users.
If the original statement specifies the
FAILED_LOGIN_ATTEMPTS or
PASSWORD_LOCK_TIME option, the statement
written to the binary log includes the option.
DROP ROLE [IF EXISTS] role [, role] ...
DROP ROLE removes one or more
roles (named collections of privileges). To use this statement,
you must have the global DROP
ROLE or CREATE USER
privilege. When the read_only
system variable is enabled, DROP
ROLE additionally requires the
CONNECTION_ADMIN privilege (or
the deprecated SUPER privilege).
As of MySQL 8.0.16, users who have the
CREATE USER privilege can use
this statement to drop accounts that are locked or unlocked.
Users who have the DROP ROLE
privilege can use this statement only to drop accounts that are
locked (unlocked accounts are presumably user accounts used to
log in to the server and not just as roles).
Roles named in the
mandatory_roles system variable
value cannot be dropped.
DROP ROLE either succeeds for all
named roles or rolls back and has no effect if any error occurs.
By default, an error occurs if you try to drop a role that does
not exist. If the IF EXISTS clause is given,
the statement produces a warning for each named role that does
not exist, rather than an error.
The statement is written to the binary log if it succeeds, but
not if it fails; in that case, rollback occurs and no changes
are made. A statement written to the binary log includes all
named roles. If the IF EXISTS clause is
given, this includes even roles that do not exist and were not
dropped.
Each role name uses the format described in Section 6.2.5, “Specifying Role Names”. For example:
DROP ROLE 'administrator', 'developer';
DROP ROLE 'webapp'@'localhost';
The host name part of the role name, if omitted, defaults to
'%'.
A dropped role is automatically revoked from any user account (or role) to which the role was granted. Within any current session for such an account, its adjusted privileges apply beginning with the next statement executed.
For role usage examples, see Section 6.2.10, “Using Roles”.
DROP USER [IF EXISTS] user [, user] ...
The DROP USER statement removes
one or more MySQL accounts and their privileges. It removes
privilege rows for the account from all grant tables.
Roles named in the
mandatory_roles system variable
value cannot be dropped.
To use DROP USER, you must have
the global CREATE USER privilege,
or the DELETE privilege for the
mysql system schema. When the
read_only system variable is
enabled, DROP USER additionally
requires the CONNECTION_ADMIN
privilege (or the deprecated
SUPER privilege).
DROP USER either succeeds for all
named users or rolls back and has no effect if any error occurs.
By default, an error occurs if you try to drop a user that does
not exist. If the IF EXISTS clause is given,
the statement produces a warning for each named user that does
not exist, rather than an error.
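For example, this statement (a sketch using an arbitrary account
name) produces a warning rather than an error if the account does
not exist:
DROP USER IF EXISTS 'jeffrey'@'localhost';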
The statement is written to the binary log if it succeeds, but
not if it fails; in that case, rollback occurs and no changes
are made. A statement written to the binary log includes all
named users. If the IF EXISTS clause is
given, this includes even users that do not exist and were not
dropped.
Each account name uses the format described in Section 6.2.4, “Specifying Account Names”. For example:
DROP USER 'jeffrey'@'localhost';
The host name part of the account name, if omitted, defaults to
'%'.
DROP USER does not
automatically close any open user sessions. Rather, in the
event that a user with an open session is dropped, the
statement does not take effect until that user's session is
closed. Once the session is closed, the user is dropped, and
that user's next attempt to log in will fail. This
is by design.
DROP USER does not automatically
drop or invalidate databases or objects within them that the old
user created. This includes stored programs or views for which
the DEFINER attribute names the dropped user.
Attempts to access such objects may produce an error if they
execute in definer security context. (For information about
security context, see
Section 24.6, “Stored Object Access Control”.)
GRANT
priv_type [(column_list)]
[, priv_type [(column_list)]] ...
ON [object_type] priv_level
TO user_or_role [, user_or_role] ...
[WITH GRANT OPTION]
[AS user
[WITH ROLE
DEFAULT
| NONE
| ALL
| ALL EXCEPT role [, role ] ...
| role [, role ] ...
]
]
GRANT PROXY ON user_or_role
TO user_or_role [, user_or_role] ...
[WITH GRANT OPTION]
GRANT role [, role] ...
TO user_or_role [, user_or_role] ...
[WITH ADMIN OPTION]
object_type: {
TABLE
| FUNCTION
| PROCEDURE
}
priv_level: {
*
| *.*
| db_name.*
| db_name.tbl_name
| tbl_name
| db_name.routine_name
}
user_or_role: {
user (see Section 6.2.4, “Specifying Account Names”)
| role (see Section 6.2.5, “Specifying Role Names”)
}
The GRANT statement assigns
privileges and roles to MySQL user accounts and roles. There are
several aspects to the GRANT
statement, described under the following topics:
The GRANT statement enables
system administrators to grant privileges and roles, which can
be granted to user accounts and roles. These syntax
restrictions apply:
GRANT cannot mix granting
both privileges and roles in the same statement. A given
GRANT statement must grant
either privileges or roles.
The ON clause distinguishes whether the
statement grants privileges or roles:
With ON, the statement grants
privileges.
Without ON, the statement grants
roles.
It is permitted to assign both privileges and roles to
an account, but you must use separate
GRANT statements, each
with syntax appropriate to what is to be granted.
For more information about roles, see Section 6.2.10, “Using Roles”.
To grant a privilege with
GRANT, you must have the
GRANT OPTION privilege, and you
must have the privileges that you are granting.
(Alternatively, if you have the
UPDATE privilege for the grant
tables in the mysql system schema, you can
grant any account any privilege.) When the
read_only system variable is
enabled, GRANT additionally
requires the CONNECTION_ADMIN
privilege (or the deprecated
SUPER privilege).
GRANT either succeeds for all
named users and roles or rolls back and has no effect if any
error occurs. The statement is written to the binary log only
if it succeeds for all named users and roles.
The REVOKE statement is related
to GRANT and enables
administrators to remove account privileges. See
Section 13.7.1.8, “REVOKE Statement”.
Each account name uses the format described in Section 6.2.4, “Specifying Account Names”. Each role name uses the format described in Section 6.2.5, “Specifying Role Names”. For example:
GRANT ALL ON db1.* TO 'jeffrey'@'localhost';
GRANT 'role1', 'role2' TO 'user1'@'localhost', 'user2'@'localhost';
GRANT SELECT ON world.* TO 'role3';
The host name part of the account or role name, if omitted,
defaults to '%'.
Normally, a database administrator first uses
CREATE USER to create an
account and define its nonprivilege characteristics such as
its password, whether it uses secure connections, and limits
on access to server resources, then uses
GRANT to define its privileges.
ALTER USER may be used to
change the nonprivilege characteristics of existing accounts.
For example:
CREATE USER 'jeffrey'@'localhost' IDENTIFIED BY 'password';
GRANT ALL ON db1.* TO 'jeffrey'@'localhost';
GRANT SELECT ON db2.invoice TO 'jeffrey'@'localhost';
ALTER USER 'jeffrey'@'localhost' WITH MAX_QUERIES_PER_HOUR 90;
From the mysql program,
GRANT responds with
Query OK, 0 rows affected when executed
successfully. To determine what privileges result from the
operation, use SHOW GRANTS. See
Section 13.7.7.21, “SHOW GRANTS Statement”.
Under some circumstances,
GRANT may be recorded in
server logs or on the client side in a history file such as
~/.mysql_history, which means that
cleartext passwords may be read by anyone having read access
to that information. For information about the conditions
under which this occurs for the server logs and how to
control it, see Section 6.1.2.3, “Passwords and Logging”. For
similar information about client-side logging, see
Section 4.5.1.3, “mysql Client Logging”.
GRANT supports host names up to
255 characters long (60 characters prior to MySQL 8.0.17).
User names can be up to 32 characters. Database, table,
column, and routine names can be up to 64 characters.
Do not attempt to change the permissible length
for user names by altering the mysql.user
system table. Doing so results in unpredictable behavior
which may even make it impossible for users to log in to the
MySQL server. Never alter the structure of tables
in the mysql system schema in any manner
except by means of the procedure described in
Section 2.11, “Upgrading MySQL”.
Several objects within GRANT
statements are subject to quoting, although quoting is
optional in many cases: Account, role, database, table,
column, and routine names. For example, if a
user_name or
host_name value in an account name
is legal as an unquoted identifier, you need not quote it.
However, quotation marks are necessary to specify a
user_name string containing special
characters (such as -), or a
host_name string containing special
characters or wildcard characters such as %
(for example, 'test-user'@'%.com'). Quote
the user name and host name separately.
To specify quoted values:
Quote database, table, column, and routine names as identifiers.
Quote user names and host names as identifiers or as strings.
Quote passwords as strings.
For string-quoting and identifier-quoting guidelines, see Section 9.1.1, “String Literals”, and Section 9.2, “Schema Object Names”.
The _ and % wildcards
are permitted when specifying database names in
GRANT statements that grant
privileges at the database level (GRANT ... ON
db_name.*). This means,
for example, that to use a _ character as
part of a database name, specify it as \_
in the GRANT statement, to
prevent the user from being able to access additional
databases matching the wildcard pattern (for example,
GRANT ... ON `foo\_bar`.* TO ...).
When a database name is not used to grant privileges at the
database level, but as a qualifier for granting privileges to
some other object such as a table or routine (for example,
GRANT ... ON db_name.tbl_name),
wildcard characters are treated as normal characters.
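The following sketch contrasts the two cases (the account and
object names are arbitrary, and the foo_bar database and t1 table
are assumed to exist). The first grant escapes _ so the privilege
applies only to the foo_bar database; in the second grant, the _
in the table qualifier is treated as a literal character:
GRANT SELECT ON `foo\_bar`.* TO 'u1'@'localhost';
GRANT SELECT ON foo_bar.t1 TO 'u1'@'localhost';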
A user value in a
GRANT statement indicates a
MySQL account to which the statement applies. To accommodate
granting rights to users from arbitrary hosts, MySQL supports
specifying the user value in the
form 'user_name'@'host_name'.
You can specify wildcards in the host name. For example,
'user_name'@'%.example.com' applies to
user_name for any host
in the example.com domain, and
'user_name'@'198.51.100.%' applies to
user_name for any host
in the 198.51.100 class C subnet.
The simple form
'user_name' is a
synonym for
'user_name'@'%'.
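A minimal sketch (the account and database names are arbitrary,
and the account is assumed to have been created already) that
grants privileges to the same user name connecting from any host
in a domain:
GRANT SELECT ON db1.* TO 'alice'@'%.example.com';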
MySQL does not support wildcards in user
names. To refer to an anonymous user, specify an
account with an empty user name with the
GRANT statement:
GRANT ALL ON test.* TO ''@'localhost' ...;
In this case, any user who connects from the local host with the correct password for the anonymous user will be permitted access, with the privileges associated with the anonymous-user account.
For additional information about user name and host name values in account names, see Section 6.2.4, “Specifying Account Names”.
If you permit local anonymous users to connect to the MySQL
server, you should also grant privileges to all local users
as 'user_name'@'localhost'.
Otherwise, the anonymous user account for
localhost in the
mysql.user system table is used when
named users try to log in to the MySQL server from the local
machine. For details, see
Section 6.2.6, “Access Control, Stage 1: Connection Verification”.
To determine whether this issue applies to you, execute the following query, which lists any anonymous users:
SELECT Host, User FROM mysql.user WHERE User='';
To avoid the problem just described, delete the local anonymous user account using this statement:
DROP USER ''@'localhost';
The following tables summarize the permissible static and
dynamic priv_type privilege types
that can be specified for the
GRANT and
REVOKE statements, and the
levels at which each privilege can be granted. For additional
information about each privilege, see
Section 6.2.2, “Privileges Provided by MySQL”. For information about
the differences between static and dynamic privileges, see
Static Versus Dynamic Privileges.
Table 13.11 Permissible Static Privileges for GRANT and REVOKE
| Privilege | Meaning and Grantable Levels |
|---|---|
| ALL [PRIVILEGES] | Grant all privileges at specified access level except GRANT OPTION and PROXY. |
| ALTER | Enable use of ALTER TABLE. Levels: Global, database, table. |
| ALTER ROUTINE | Enable stored routines to be altered or dropped. Levels: Global, database, routine. |
| CREATE | Enable database and table creation. Levels: Global, database, table. |
| CREATE ROLE | Enable role creation. Level: Global. |
| CREATE ROUTINE | Enable stored routine creation. Levels: Global, database. |
| CREATE TABLESPACE | Enable tablespaces and log file groups to be created, altered, or dropped. Level: Global. |
| CREATE TEMPORARY TABLES | Enable use of CREATE TEMPORARY TABLE. Levels: Global, database. |
| CREATE USER | Enable use of CREATE USER, DROP USER, RENAME USER, and REVOKE ALL PRIVILEGES. Level: Global. |
| CREATE VIEW | Enable views to be created or altered. Levels: Global, database, table. |
| DELETE | Enable use of DELETE. Level: Global, database, table. |
| DROP | Enable databases, tables, and views to be dropped. Levels: Global, database, table. |
| DROP ROLE | Enable roles to be dropped. Level: Global. |
| EVENT | Enable use of events for the Event Scheduler. Levels: Global, database. |
| EXECUTE | Enable the user to execute stored routines. Levels: Global, database, routine. |
| FILE | Enable the user to cause the server to read or write files. Level: Global. |
| GRANT OPTION | Enable privileges to be granted to or removed from other accounts. Levels: Global, database, table, routine, proxy. |
| INDEX | Enable indexes to be created or dropped. Levels: Global, database, table. |
| INSERT | Enable use of INSERT. Levels: Global, database, table, column. |
| LOCK TABLES | Enable use of LOCK TABLES on tables for which you have the SELECT privilege. Levels: Global, database. |
| PROCESS | Enable the user to see all processes with SHOW PROCESSLIST. Level: Global. |
| PROXY | Enable user proxying. Level: From user to user. |
| REFERENCES | Enable foreign key creation. Levels: Global, database, table, column. |
| RELOAD | Enable use of FLUSH operations. Level: Global. |
| REPLICATION CLIENT | Enable the user to ask where master or slave servers are. Level: Global. |
| REPLICATION SLAVE | Enable replication slaves to read binary log events from the master. Level: Global. |
| SELECT | Enable use of SELECT. Levels: Global, database, table, column. |
| SHOW DATABASES | Enable SHOW DATABASES to show all databases. Level: Global. |
| SHOW VIEW | Enable use of SHOW CREATE VIEW. Levels: Global, database, table. |
| SHUTDOWN | Enable use of mysqladmin shutdown. Level: Global. |
| SUPER | Enable use of other administrative operations such as CHANGE MASTER TO, KILL, PURGE BINARY LOGS, SET GLOBAL, and mysqladmin debug command. Level: Global. |
| TRIGGER | Enable trigger operations. Levels: Global, database, table. |
| UPDATE | Enable use of UPDATE. Levels: Global, database, table, column. |
| USAGE | Synonym for “no privileges” |
Table 13.12 Permissible Dynamic Privileges for GRANT and REVOKE
| Privilege | Meaning and Grantable Levels |
|---|---|
| APPLICATION_PASSWORD_ADMIN | Enable dual password administration. Level: Global. |
| AUDIT_ADMIN | Enable audit log configuration. Level: Global. |
| BACKUP_ADMIN | Enable backup administration. Level: Global. |
| BINLOG_ADMIN | Enable binary log control. Level: Global. |
| BINLOG_ENCRYPTION_ADMIN | Enable activation and deactivation of binary log encryption. Level: Global. |
| CLONE_ADMIN | Enable clone administration. Level: Global. |
| CONNECTION_ADMIN | Enable connection limit/restriction control. Level: Global. |
| ENCRYPTION_KEY_ADMIN | Enable InnoDB key rotation. Level: Global. |
| FIREWALL_ADMIN | Enable firewall rule administration, any user. Level: Global. |
| FIREWALL_USER | Enable firewall rule administration, self. Level: Global. |
| GROUP_REPLICATION_ADMIN | Enable Group Replication control. Level: Global. |
| INNODB_REDO_LOG_ARCHIVE | Enable redo log archiving administration. Level: Global. |
| NDB_STORED_USER | Enable sharing of user or role between SQL nodes (NDB Cluster). Level: Global. |
| PERSIST_RO_VARIABLES_ADMIN | Enable persisting read-only system variables. Level: Global. |
| REPLICATION_APPLIER | Act as the PRIVILEGE_CHECKS_USER for a replication channel. Level: Global. |
| REPLICATION_SLAVE_ADMIN | Enable regular replication control. Level: Global. |
| RESOURCE_GROUP_ADMIN | Enable resource group administration. Level: Global. |
| RESOURCE_GROUP_USER | Enable resource group administration. Level: Global. |
| ROLE_ADMIN | Enable roles to be granted or revoked, use of WITH ADMIN OPTION. Level: Global. |
| SESSION_VARIABLES_ADMIN | Enable setting restricted session system variables. Level: Global. |
| SET_USER_ID | Enable setting non-self DEFINER values. Level: Global. |
| SHOW_ROUTINE | Enable access to stored routine definitions. Level: Global. |
| SYSTEM_USER | Designate account as system account. Level: Global. |
| SYSTEM_VARIABLES_ADMIN | Enable modifying or persisting global system variables. Level: Global. |
| TABLE_ENCRYPTION_ADMIN | Enable overriding default encryption settings. Level: Global. |
| VERSION_TOKEN_ADMIN | Enable use of Version Tokens UDFs. Level: Global. |
| XA_RECOVER_ADMIN | Enable XA RECOVER execution. Level: Global. |
A trigger is associated with a table. To create or drop a
trigger, you must have the
TRIGGER privilege for the
table, not the trigger.
In GRANT statements, the
ALL
[PRIVILEGES] or PROXY
privilege must be named by itself and cannot be specified
along with other privileges.
ALL
[PRIVILEGES] stands for all privileges available for
the level at which privileges are to be granted except for the
GRANT OPTION and
PROXY privileges.
MySQL account information is stored in the tables of the
mysql system schema. For additional
details, consult Section 6.2, “Access Control and Account Management”, which
discusses the mysql system schema and the
access control system extensively.
If the grant tables hold privilege rows that contain
mixed-case database or table names and the
lower_case_table_names system
variable is set to a nonzero value,
REVOKE cannot be used to revoke
these privileges. It will be necessary to manipulate the grant
tables directly. (GRANT will
not create such rows when
lower_case_table_names is
set, but such rows might have been created prior to setting
that variable. The
lower_case_table_names
setting can only be configured at server startup.)
Privileges can be granted at several levels, depending on the
syntax used for the ON clause. For
REVOKE, the same
ON syntax specifies which privileges to
remove.
For the global, database, table, and routine levels,
GRANT ALL
assigns only the privileges that exist at the level you are
granting. For example, GRANT ALL ON
db_name.* is a
database-level statement, so it does not grant any global-only
privileges such as FILE.
Granting ALL does not assign
the GRANT OPTION or
PROXY privilege.
The object_type clause, if present,
should be specified as TABLE,
FUNCTION, or PROCEDURE
when the following object is a table, a stored function, or a
stored procedure.
The privileges that a user holds for a database, table,
column, or routine are formed additively as the logical
OR of the account privileges at
each of the privilege levels, including the global level. It
is not possible to deny a privilege granted at a higher level
by absence of that privilege at a lower level. For example,
this statement grants the
SELECT and
INSERT privileges globally:
GRANT SELECT, INSERT ON *.* TO u1;
The globally granted privileges apply to all databases, tables, and columns, even though not granted at any of those lower levels.
As of MySQL 8.0.16, it is possible to explicitly deny a
privilege granted at the global level by revoking it for
particular databases, if the
partial_revokes system
variable is enabled:
GRANT SELECT, INSERT, UPDATE ON *.* TO u1;
REVOKE INSERT, UPDATE ON db1.* FROM u1;
The result of the preceding statements is that
SELECT applies globally to all
tables, whereas INSERT and
UPDATE apply globally except to
tables in db1. Account access to
db1 is read only.
Details of the privilege-checking procedure are presented in Section 6.2.7, “Access Control, Stage 2: Request Verification”.
If you are using table, column, or routine privileges for even one user, the server examines table, column, and routine privileges for all users and this slows down MySQL a bit. Similarly, if you limit the number of queries, updates, or connections for any users, the server must monitor these values.
MySQL enables you to grant privileges on databases or tables
that do not exist. For tables, the privileges to be granted
must include the CREATE
privilege. This behavior is by design,
and is intended to enable the database administrator to
prepare user accounts and privileges for databases or tables
that are to be created at a later time.
MySQL does not automatically revoke any privileges when you drop a database or table. However, if you drop a routine, any routine-level privileges granted for that routine are revoked.
Global privileges are administrative or apply to all databases
on a given server. To assign global privileges, use
ON *.* syntax:
GRANT ALL ON *.* TO 'someuser'@'somehost';
GRANT SELECT, INSERT ON *.* TO 'someuser'@'somehost';
The CREATE TABLESPACE,
CREATE USER,
FILE,
PROCESS,
RELOAD,
REPLICATION CLIENT,
REPLICATION SLAVE,
SHOW DATABASES,
SHUTDOWN, and
SUPER static privileges are
administrative and can only be granted globally.
Dynamic privileges are all global and can only be granted globally.
Other privileges can be granted globally or at more specific levels.
The effect of GRANT OPTION
granted at the global level differs for static and dynamic
privileges:
GRANT OPTION granted for
any static global privilege applies to all static global
privileges.
GRANT OPTION granted for
any dynamic privilege applies only to that dynamic
privilege.
GRANT ALL at the global level grants all
static global privileges and all currently registered dynamic
privileges. A dynamic privilege registered subsequent to
execution of the GRANT statement is not
granted retroactively to any account.
MySQL stores global privileges in the
mysql.user system table.
Database privileges apply to all objects in a given database.
To assign database-level privileges, use ON
db_name.* syntax:
GRANT ALL ON mydb.* TO 'someuser'@'somehost';
GRANT SELECT, INSERT ON mydb.* TO 'someuser'@'somehost';
If you use ON * syntax (rather than
ON *.*), privileges are assigned at the
database level for the default database. An error occurs if
there is no default database.
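A minimal sketch of the default-database form (the names are
arbitrary); assuming mydb is the default database, the two GRANT
statements below assign the same privilege:
USE mydb;
GRANT SELECT ON * TO 'someuser'@'somehost';
GRANT SELECT ON mydb.* TO 'someuser'@'somehost';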
The CREATE,
DROP,
EVENT,
GRANT OPTION,
LOCK TABLES, and
REFERENCES privileges can be
specified at the database level. Table or routine privileges
also can be specified at the database level, in which case
they apply to all tables or routines in the database.
MySQL stores database privileges in the
mysql.db system table.
Table privileges apply to all columns in a given table. To
assign table-level privileges, use ON
db_name.tbl_name syntax:
GRANT ALL ON mydb.mytbl TO 'someuser'@'somehost';
GRANT SELECT, INSERT ON mydb.mytbl TO 'someuser'@'somehost';
If you specify tbl_name rather than
db_name.tbl_name, the statement
applies to tbl_name in the default
database. An error occurs if there is no default database.
The permissible priv_type values at
the table level are ALTER,
CREATE VIEW,
CREATE,
DELETE,
DROP,
GRANT OPTION,
INDEX,
INSERT,
REFERENCES,
SELECT,
SHOW VIEW,
TRIGGER, and
UPDATE.
Table-level privileges apply to base tables and views. They do
not apply to tables created with CREATE
TEMPORARY TABLE, even if the table names match. For
information about TEMPORARY table
privileges, see Section 13.1.20.2, “CREATE TEMPORARY TABLE Statement”.
MySQL stores table privileges in the
mysql.tables_priv system table.
Column privileges apply to single columns in a given table. Each privilege to be granted at the column level must be followed by the column or columns, enclosed within parentheses.
GRANT SELECT (col1), INSERT (col1, col2) ON mydb.mytbl TO 'someuser'@'somehost';
The permissible priv_type values
for a column (that is, when you use a
column_list clause) are
INSERT,
REFERENCES,
SELECT, and
UPDATE.
MySQL stores column privileges in the
mysql.columns_priv system table.
The ALTER ROUTINE,
CREATE ROUTINE,
EXECUTE, and
GRANT OPTION privileges apply
to stored routines (procedures and functions). They can be
granted at the global and database levels. Except for
CREATE ROUTINE, these
privileges can be granted at the routine level for individual
routines.
GRANT CREATE ROUTINE ON mydb.* TO 'someuser'@'somehost';
GRANT EXECUTE ON PROCEDURE mydb.myproc TO 'someuser'@'somehost';
The permissible priv_type values at
the routine level are ALTER
ROUTINE, EXECUTE, and
GRANT OPTION.
CREATE ROUTINE is not a
routine-level privilege because you must have the privilege at
the global or database level to create a routine in the first
place.
MySQL stores routine-level privileges in the
mysql.procs_priv system table.
The PROXY privilege enables one
user to be a proxy for another. The proxy user impersonates or
takes the identity of the proxied user; that is, it assumes
the privileges of the proxied user.
GRANT PROXY ON 'localuser'@'localhost' TO 'externaluser'@'somehost';
When PROXY is granted, it must
be the only privilege named in the
GRANT statement, and the only
permitted WITH option is WITH
GRANT OPTION.
Proxying requires that the proxy user authenticate through a
plugin that returns the name of the proxied user to the server
when the proxy user connects, and that the proxy user have the
PROXY privilege for the proxied user. For
details and examples, see Section 6.2.18, “Proxy Users”.
MySQL stores proxy privileges in the
mysql.proxies_priv system table.
GRANT syntax without an
ON clause grants roles rather than
individual privileges. A role is a named collection of
privileges; see Section 6.2.10, “Using Roles”. For example:
GRANT 'role1', 'role2' TO 'user1'@'localhost', 'user2'@'localhost';
Each role to be granted must exist, as well as each user account or role to which it is to be granted. As of MySQL 8.0.16, roles cannot be granted to anonymous users.
Granting a role does not automatically cause the role to be active. For information about role activation and inactivation, see Activating Roles.
These privileges are required to grant roles:
If you have the ROLE_ADMIN
privilege (or the deprecated
SUPER privilege), you can
grant or revoke any role to users or roles.
If you were granted a role with a
GRANT statement that
includes the WITH ADMIN OPTION clause,
you become able to grant that role to other users or
roles, or revoke it from other users or roles, as long as
the role is active at such time as you subsequently grant
or revoke it. This includes the ability to use
WITH ADMIN OPTION itself.
To grant a role that has the
SYSTEM_USER privilege, you
must have the SYSTEM_USER
privilege.
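For example, granting a role with the WITH ADMIN OPTION clause
enables the grantee to grant or revoke that role in turn. A
minimal sketch (the role and account names are arbitrary):
GRANT 'r1' TO 'u1'@'localhost' WITH ADMIN OPTION;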
It is possible to create circular references with
GRANT. For example:
CREATE USER 'u1', 'u2';
CREATE ROLE 'r1', 'r2';
GRANT 'u1' TO 'u1'; -- simple loop: u1 => u1
GRANT 'r1' TO 'r1'; -- simple loop: r1 => r1
GRANT 'r2' TO 'u2';
GRANT 'u2' TO 'r2'; -- mixed user/role loop: u2 => r2 => u2
Circular grant references are permitted but add no new privileges or roles to the grantee because a user or role already has its privileges and roles.
As of MySQL 8.0.16, GRANT has
an AS user [WITH ROLE] clause that specifies additional
information about the privilege context to use for statement
execution.
This syntax is visible at the SQL level, although its primary
purpose is to enable uniform replication across all nodes of
grantor privilege restrictions imposed by partial revokes, by
causing those restrictions to appear in the binary log. For
information about partial revokes, see
Section 6.2.12, “Privilege Restriction Using Partial Revokes”.
When the AS user clause is specified,
statement execution takes into account
any privilege restrictions associated with the named user,
including all roles specified by WITH ROLE,
if present. The result is that the privileges actually granted
by the statement may be reduced relative to those specified.
These conditions apply to the AS
user clause:
AS has an effect only when the named
user has privilege restrictions
(which implies that the
partial_revokes system
variable is enabled).
If WITH ROLE is given, all roles named
must be granted to the named
user.
The named user should be a
MySQL account specified as
'user_name'@'host_name',
CURRENT_USER, or
CURRENT_USER(). The current
user may be named together with WITH
ROLE for the case that the executing user wants
GRANT to execute with a set
of roles applied that may differ from the roles active
within the current session.
AS cannot be used to gain privileges
not possessed by the user who executes the
GRANT statement. The
executing user must have at least the privileges to be
granted, but the AS clause can only
restrict the privileges granted, not escalate them.
With respect to the privileges to be granted,
AS cannot specify a user/role
combination that has more privileges (fewer restrictions)
than the user who executes the
GRANT statement. The
AS user/role combination is permitted
to have more privileges than the executing user, but only
if the statement does not grant those additional
privileges.
AS is supported only for granting
global privileges (ON *.*).
AS is not supported for
PROXY grants.
The following example illustrates the effect of the
AS clause. Create a user
u1 that has some global privileges, as well
as restrictions on those privileges:
CREATE USER u1;
GRANT SELECT, INSERT, UPDATE, DELETE ON *.* TO u1;
REVOKE INSERT, UPDATE ON schema1.* FROM u1;
REVOKE SELECT ON schema2.* FROM u1;
Also create a role r1 that lifts some of
the privilege restrictions and grant the role to
u1:
CREATE ROLE r1;
GRANT INSERT ON schema1.* TO r1;
GRANT SELECT ON schema2.* TO r1;
GRANT r1 TO u1;
Now, using an account that has no privilege restrictions of
its own, grant to multiple users the same set of global
privileges, but each with different restrictions imposed by
the AS clause, and check which privileges
are actually granted.
The GRANT statement here
has no AS clause, so the privileges
granted are exactly those specified:
mysql> CREATE USER u2;
mysql> GRANT SELECT, INSERT, UPDATE ON *.* TO u2;
mysql> SHOW GRANTS FOR u2;
+-------------------------------------------------+
| Grants for u2@%                                 |
+-------------------------------------------------+
| GRANT SELECT, INSERT, UPDATE ON *.* TO `u2`@`%` |
+-------------------------------------------------+
The GRANT statement here
has an AS clause, so the privileges
granted are those specified but with the restrictions from
u1 applied:
mysql> CREATE USER u3;
mysql> GRANT SELECT, INSERT, UPDATE ON *.* TO u3 AS u1;
mysql> SHOW GRANTS FOR u3;
+----------------------------------------------------+
| Grants for u3@%                                    |
+----------------------------------------------------+
| GRANT SELECT, INSERT, UPDATE ON *.* TO `u3`@`%`    |
| REVOKE INSERT, UPDATE ON `schema1`.* FROM `u3`@`%` |
| REVOKE SELECT ON `schema2`.* FROM `u3`@`%`         |
+----------------------------------------------------+
As mentioned previously, the AS clause
can only add privilege restrictions; it cannot escalate
privileges. Thus, although u1 has the
DELETE privilege, that is
not included in the privileges granted because the
statement does not specify granting
DELETE.
The AS clause for the
GRANT statement here makes
the role r1 active for
u1. That role lifts some of the
restrictions on u1. Consequently, the
privileges granted have some restrictions, but not so many
as for the previous GRANT
statement:
mysql> CREATE USER u4;
mysql> GRANT SELECT, INSERT, UPDATE ON *.* TO u4 AS u1 WITH ROLE r1;
mysql> SHOW GRANTS FOR u4;
+-------------------------------------------------+
| Grants for u4@%                                 |
+-------------------------------------------------+
| GRANT SELECT, INSERT, UPDATE ON *.* TO `u4`@`%` |
| REVOKE UPDATE ON `schema1`.* FROM `u4`@`%`      |
+-------------------------------------------------+
If a GRANT statement includes
an AS user
clause, privilege restrictions on the user who executes the
statement are ignored (rather than applied as they would be in
the absence of an AS clause).
The optional WITH clause is used to enable
a user to grant privileges to other users. The WITH
GRANT OPTION clause gives the user the ability to
give to other users any privileges the user has at the
specified privilege level.
To grant the GRANT OPTION
privilege to an account without otherwise changing its
privileges, do this:
GRANT USAGE ON *.* TO 'someuser'@'somehost' WITH GRANT OPTION;
Be careful to whom you give the GRANT
OPTION privilege because two users with different
privileges may be able to combine privileges!
You cannot grant another user a privilege which you yourself
do not have; the GRANT OPTION
privilege enables you to assign only those privileges which
you yourself possess.
Be aware that when you grant a user the
GRANT OPTION privilege at a
particular privilege level, any privileges the user possesses
(or may be given in the future) at that level can also be
granted by that user to other users. Suppose that you grant a
user the INSERT privilege on a
database. If you then grant the
SELECT privilege on the
database and specify WITH GRANT OPTION,
that user can give to other users not only the
SELECT privilege, but also
INSERT. If you then grant the
UPDATE privilege to the user on
the database, the user can grant
INSERT,
SELECT, and
UPDATE.
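The scenario just described corresponds to a sequence such as the
following sketch (the account and database names are arbitrary).
After the second statement, u1 can grant both SELECT and INSERT
on db1 to other users; after the third, UPDATE as well:
GRANT INSERT ON db1.* TO 'u1'@'localhost';
GRANT SELECT ON db1.* TO 'u1'@'localhost' WITH GRANT OPTION;
GRANT UPDATE ON db1.* TO 'u1'@'localhost';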
For a nonadministrative user, you should not grant the
ALTER privilege globally or for
the mysql system schema. If you do that,
the user can try to subvert the privilege system by renaming
tables!
For additional information about security risks associated with particular privileges, see Section 6.2.2, “Privileges Provided by MySQL”.
The biggest differences between the MySQL and standard SQL
versions of GRANT are:
MySQL associates privileges with the combination of a host name and user name and not with only a user name.
Standard SQL does not have global or database-level privileges, nor does it support all the privilege types that MySQL supports.
MySQL does not support the standard SQL
UNDER privilege.
Standard SQL privileges are structured in a hierarchical
manner. If you remove a user, all privileges the user has
been granted are revoked. This is also true in MySQL if
you use DROP USER. See
Section 13.7.1.5, “DROP USER Statement”.
In standard SQL, when you drop a table, all privileges for
the table are revoked. In standard SQL, when you revoke a
privilege, all privileges that were granted based on that
privilege are also revoked. In MySQL, privileges can be
dropped with DROP USER or
REVOKE statements.
In MySQL, it is possible to have the
INSERT privilege for only
some of the columns in a table. In this case, you can
still execute INSERT
statements on the table, provided that you insert values
only for those columns for which you have the
INSERT privilege. The
omitted columns are set to their implicit default values
if strict SQL mode is not enabled. In strict mode, the
statement is rejected if any of the omitted columns have
no default value. (Standard SQL requires you to have the
INSERT privilege on all
columns.) For information about strict SQL mode and
implicit default values, see Section 5.1.11, “Server SQL Modes”,
and Section 11.6, “Data Type Default Values”.
RENAME USER old_user TO new_user [, old_user TO new_user] ...
The RENAME USER statement renames
existing MySQL accounts. An error occurs for old accounts that
do not exist or new accounts that already exist.
To use RENAME USER, you must have
the global CREATE USER privilege,
or the UPDATE privilege for the
mysql system schema. When the
read_only system variable is
enabled, RENAME USER additionally
requires the CONNECTION_ADMIN
privilege (or the deprecated
SUPER privilege).
Each account name uses the format described in Section 6.2.4, “Specifying Account Names”. For example:
RENAME USER 'jeffrey'@'localhost' TO 'jeff'@'127.0.0.1';
The host name part of the account name, if omitted, defaults to
'%'.
RENAME USER causes the privileges
held by the old user to be those held by the new user. However,
RENAME USER does not
automatically drop or invalidate databases or objects within
them that the old user created. This includes stored programs or
views for which the DEFINER attribute names
the old user. Attempts to access such objects may produce an
error if they execute in definer security context. (For
information about security context, see
Section 24.6, “Stored Object Access Control”.)
The privilege changes take effect as indicated in Section 6.2.13, “When Privilege Changes Take Effect”.
REVOKE
priv_type [(column_list)]
[, priv_type [(column_list)]] ...
ON [object_type] priv_level
FROM user_or_role [, user_or_role] ...
REVOKE ALL [PRIVILEGES], GRANT OPTION
FROM user_or_role [, user_or_role] ...
REVOKE PROXY ON user_or_role
FROM user_or_role [, user_or_role] ...
REVOKE role [, role ] ...
FROM user_or_role [, user_or_role ] ...
user_or_role: {
user (see Section 6.2.4, “Specifying Account Names”)
| role (see Section 6.2.5, “Specifying Role Names”)
}
The REVOKE statement enables
system administrators to revoke privileges and roles, which can
be revoked from user accounts and roles.
For details on the levels at which privileges exist, the
permissible priv_type,
priv_level, and
object_type values, and the syntax
for specifying users and passwords, see Section 13.7.1.6, “GRANT Statement”.
For information about roles, see Section 6.2.10, “Using Roles”.
When the read_only system
variable is enabled, REVOKE
requires the CONNECTION_ADMIN
privilege (or the deprecated
SUPER privilege), in addition to
any other required privileges described in the following
discussion.
REVOKE either succeeds for all
named users and roles or rolls back and has no effect if any
error occurs. The statement is written to the binary log only if
it succeeds for all named users and roles.
Each account name uses the format described in Section 6.2.4, “Specifying Account Names”. Each role name uses the format described in Section 6.2.5, “Specifying Role Names”. For example:
REVOKE INSERT ON *.* FROM 'jeffrey'@'localhost';
REVOKE 'role1', 'role2' FROM 'user1'@'localhost', 'user2'@'localhost';
REVOKE SELECT ON world.* FROM 'role3';
The host name part of the account or role name, if omitted,
defaults to '%'.
To use the first REVOKE syntax,
you must have the GRANT OPTION
privilege, and you must have the privileges that you are
revoking.
To revoke all privileges, use the second syntax, which drops all global, database, table, column, and routine privileges for the named users or roles:
REVOKE ALL PRIVILEGES, GRANT OPTION FROM user_or_role [, user_or_role] ...
REVOKE ALL PRIVILEGES, GRANT OPTION does not
revoke any roles.
To use this REVOKE syntax, you
must have the global CREATE USER
privilege, or the UPDATE
privilege for the mysql system schema.
The syntax for which the REVOKE
keyword is followed by one or more role names takes a
FROM clause indicating one or more users or
roles from which to revoke the roles.
Roles named in the
mandatory_roles system variable
value cannot be revoked.
A revoked role immediately affects any user account from which it was revoked, such that within any current session for the account, its privileges are adjusted for the next statement executed.
Revoking a role revokes the role itself, not the privileges that
it represents. Suppose that an account is granted a role that
includes a given privilege, and is also granted the privilege
explicitly or another role that includes the privilege. In this
case, the account still possesses that privilege if the first
role is revoked. For example, if an account is granted two roles
that each include SELECT, the
account still can select after either role is revoked.
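A sketch of that situation (the role, account, and database names
are arbitrary); after the final REVOKE, u1 still holds SELECT on
db1 through r2:
CREATE ROLE 'r1', 'r2';
GRANT SELECT ON db1.* TO 'r1';
GRANT SELECT ON db1.* TO 'r2';
CREATE USER 'u1';
GRANT 'r1', 'r2' TO 'u1';
REVOKE 'r1' FROM 'u1';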
REVOKE ALL ON *.* (at the global level)
revokes all granted static global privileges and all granted
dynamic privileges.
User accounts and roles from which privileges and roles are to be revoked must exist, but the privileges and roles to be revoked need not be currently granted to them.
A revoked privilege that is granted but not known to the server is revoked with a warning. This situation can occur for dynamic privileges. For example, a dynamic privilege can be granted while the component that registers it is installed, but if that component is subsequently uninstalled, the privilege becomes unregistered, although accounts that possess the privilege still possess it and it can be revoked from them.
REVOKE removes privileges, but
does not remove rows from the mysql.user
system table. To remove a user account entirely, use
DROP USER. See
Section 13.7.1.5, “DROP USER Statement”.
If the grant tables hold privilege rows that contain mixed-case
database or table names and the
lower_case_table_names system
variable is set to a nonzero value,
REVOKE cannot be used to revoke
these privileges. It will be necessary to manipulate the grant
tables directly. (GRANT will not
create such rows when
lower_case_table_names is set,
but such rows might have been created prior to setting the
variable. The
lower_case_table_names setting
can only be configured when initializing the server.)
When successfully executed from the mysql
program, REVOKE responds with
Query OK, 0 rows affected. To determine what
privileges remain after the operation, use
SHOW GRANTS. See
Section 13.7.7.21, “SHOW GRANTS Statement”.
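For example, assuming the illustrative account shown here exists, the following sequence revokes a privilege and then verifies what remains:
REVOKE INSERT ON *.* FROM 'jeffrey'@'localhost';
SHOW GRANTS FOR 'jeffrey'@'localhost';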
SET DEFAULT ROLE
{NONE | ALL | role [, role ] ...}
TO user [, user ] ...
For each user named immediately after
the TO keyword, this statement defines which
roles become active when the user connects to the server and
authenticates, or when the user executes the
SET ROLE
DEFAULT statement during a session.
SET DEFAULT ROLE is alternative
syntax for ALTER
USER ... DEFAULT ROLE (see
Section 13.7.1.1, “ALTER USER Statement”). However,
ALTER USER can set the default
for only a single user, whereas SET DEFAULT
ROLE can set the default for multiple users. On the
other hand, you can specify CURRENT_USER as
the user name for the ALTER USER
statement, whereas you cannot for SET
DEFAULT ROLE.
SET DEFAULT ROLE requires these
privileges:
Setting the default roles for another user requires the
global CREATE USER privilege,
or the UPDATE privilege for
the mysql.default_roles system table.
Setting the default roles for yourself requires no special privileges, as long as the roles you want as the default have been granted to you.
Each role name uses the format described in Section 6.2.5, “Specifying Role Names”. For example:
SET DEFAULT ROLE administrator, developer TO 'joe'@'10.0.0.1';
The host name part of the role name, if omitted, defaults to
'%'.
The clause following the DEFAULT ROLE
keywords permits these values:
NONE: Set the default to
NONE (no roles).
ALL: Set the default to all roles granted
to the account.
role [, role ] ...: Set the
default to the named roles, which must exist and be granted
to the account at the time SET DEFAULT
ROLE is executed.
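For example, the following statements (account and role names are illustrative) set the default to all granted roles for one account and clear the default for another:
SET DEFAULT ROLE ALL TO 'joe'@'10.0.0.1';
SET DEFAULT ROLE NONE TO 'jeffrey'@'localhost';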
SET DEFAULT ROLE and
SET ROLE
DEFAULT are different statements:
SET DEFAULT ROLE defines
which account roles to activate by default within account
sessions.
SET ROLE
DEFAULT sets the active roles within the current
session to the current account default roles.
For role usage examples, see Section 6.2.10, “Using Roles”.
SET PASSWORD [FOR user] auth_option
    [REPLACE 'current_auth_string']
    [RETAIN CURRENT PASSWORD]

auth_option: {
    = 'auth_string'
  | TO RANDOM
}
The SET PASSWORD statement
assigns a password to a MySQL user account. The password may be
either explicitly specified in the statement or randomly
generated by MySQL. The statement may also include a
password-verification clause that specifies the account current
password to be replaced, and a clause that manages whether an
account has a secondary password.
'auth_string' and
'current_auth_string'
each represent a cleartext (unencrypted) password.
Rather than using SET PASSWORD
to assign passwords, ALTER USER
is the preferred statement for account alterations, including
assigning passwords. For example:
ALTER USER user IDENTIFIED BY 'auth_string';
Clauses for random password generation, password verification, and secondary passwords apply only to accounts that use an authentication plugin that stores credentials internally to MySQL. For accounts that use a plugin that performs authentication against a credentials system that is external to MySQL, password management must be handled externally against that system as well. For more information about internal credentials storage, see Section 6.2.15, “Password Management”.
The REPLACE 'current_auth_string'
clause performs password verification and is available as of
MySQL 8.0.13. If given:
REPLACE specifies the account current
password to be replaced, as a cleartext (unencrypted)
string.
The clause must be given if password changes for the account are required to specify the current password, as verification that the user attempting to make the change actually knows the current password.
The clause is optional if password changes for the account may but need not specify the current password.
The statement fails if the clause is given but does not match the current password, even if the clause is optional.
REPLACE can be specified only when
changing the account password for the current user.
For more information about password verification by specifying the current password, see Section 6.2.15, “Password Management”.
The RETAIN CURRENT PASSWORD clause implements
dual-password capability and is available as of MySQL 8.0.14. If
given:
RETAIN CURRENT PASSWORD retains an
account current password as its secondary password,
replacing any existing secondary password. The new password
becomes the primary password, but clients can use the
account to connect to the server using either the primary or
secondary password. (Exception: If the new password
specified by the SET PASSWORD
statement is empty, the secondary password becomes empty as
well, even if RETAIN CURRENT PASSWORD is
given.)
If you specify RETAIN CURRENT PASSWORD
for an account that has an empty primary password, the
statement fails.
If an account has a secondary password and you change its
primary password without specifying RETAIN CURRENT
PASSWORD, the secondary password remains
unchanged.
For more information about use of dual passwords, see Section 6.2.15, “Password Management”.
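As an illustration of these clauses together, the following statement (the password values are placeholders) changes the current user's password while verifying the old one and retaining it as the secondary password:
SET PASSWORD = 'new_password' REPLACE 'current_password' RETAIN CURRENT PASSWORD;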
SET PASSWORD permits these
auth_option syntaxes:
= 'auth_string'
Assigns the account the given literal password.
TO RANDOM
Assigns the account a password randomly generated by MySQL. The statement also returns the cleartext password in a result set to make it available to the user or application executing the statement.
For details about the result set and characteristics of randomly generated passwords, see Random Password Generation.
Random password generation is available as of MySQL 8.0.18.
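For example, the following statement assigns a random password to the named account (the account name is illustrative) and returns the generated password in a result set, as described in Random Password Generation:
SET PASSWORD FOR 'jeffrey'@'localhost' TO RANDOM;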
Under some circumstances, SET
PASSWORD may be recorded in server logs or on the
client side in a history file such as
~/.mysql_history, which means that
cleartext passwords may be read by anyone having read access
to that information. For information about the conditions
under which this occurs for the server logs and how to control
it, see Section 6.1.2.3, “Passwords and Logging”. For similar
information about client-side logging, see
Section 4.5.1.3, “mysql Client Logging”.
SET PASSWORD can be used with or
without a FOR clause that explicitly names a
user account:
With a FOR user clause, the
statement sets the password for the named account, which
must exist:
SET PASSWORD FOR 'jeffrey'@'localhost' = 'auth_string';
With no FOR user clause, the
statement sets the password for the current user:
SET PASSWORD = 'auth_string';
Any client who connects to the server using a nonanonymous
account can change the password for that account. (In
particular, you can change your own password.) To see which
account the server authenticated you as, invoke the
CURRENT_USER() function:
SELECT CURRENT_USER();
If a FOR user
clause is given, the account name uses the format described in
Section 6.2.4, “Specifying Account Names”. For example:
SET PASSWORD FOR 'bob'@'%.example.org' = 'auth_string';
The host name part of the account name, if omitted, defaults to
'%'.
SET PASSWORD interprets the
string as a cleartext string, passes it to the authentication
plugin associated with the account, and stores the result
returned by the plugin in the account row in the
mysql.user system table. (The plugin is given
the opportunity to hash the value into the encryption format it
expects. The plugin may use the value as specified, in which
case no hashing occurs.)
Setting the password for a named account (with a
FOR clause) requires the
UPDATE privilege for the
mysql system schema. Setting the password for
yourself (for a nonanonymous account with no
FOR clause) requires no special privileges.
Statements that modify secondary passwords require these privileges:
The
APPLICATION_PASSWORD_ADMIN
privilege is required to use the RETAIN CURRENT
PASSWORD clause for SET
PASSWORD statements that apply to your own
account. The privilege is required to manipulate your own
secondary password because most users require only one
password.
If an account is to be permitted to manipulate secondary
passwords for all accounts, it should be granted the
CREATE USER privilege rather
than
APPLICATION_PASSWORD_ADMIN.
When the read_only system
variable is enabled, SET PASSWORD
requires the CONNECTION_ADMIN
privilege (or the deprecated
SUPER privilege), in addition to
any other required privileges.
For additional information about setting passwords and authentication plugins, see Section 6.2.14, “Assigning Account Passwords”, and Section 6.2.17, “Pluggable Authentication”.
SET ROLE {
DEFAULT
| NONE
| ALL
| ALL EXCEPT role [, role ] ...
| role [, role ] ...
}
SET ROLE modifies the current
user's effective privileges within the current session by
specifying which of its granted roles are active. Granted roles
include those granted explicitly to the user and those named in
the mandatory_roles system
variable value.
Examples:
SET ROLE DEFAULT;
SET ROLE 'role1', 'role2';
SET ROLE ALL;
SET ROLE ALL EXCEPT 'role1', 'role2';
Each role name uses the format described in
Section 6.2.5, “Specifying Role Names”. The host name part of the role
name, if omitted, defaults to '%'.
Privileges that the user has been granted directly (rather than through roles) remain unaffected by changes to the active roles.
The statement permits these role specifiers:
DEFAULT: Activate the account default
roles. Default roles are those specified with
SET DEFAULT ROLE.
When a user connects to the server and authenticates
successfully, the server determines which roles to activate
as the default roles. If the
activate_all_roles_on_login
system variable is enabled, the server activates all granted
roles. Otherwise, the server executes
SET ROLE
DEFAULT implicitly. The server activates only
default roles that can be activated. The server writes
warnings to its error log for default roles that cannot be
activated, but the client receives no warnings.
If a user executes
SET ROLE
DEFAULT during a session, an error occurs if any
default role cannot be activated (for example, if it does
not exist or is not granted to the user). In this case, the
current active roles are not changed.
NONE: Set the active roles to
NONE (no active roles).
ALL: Activate all roles granted to the
account.
ALL EXCEPT role [, role ] ...: Activate
all roles granted to the account except those named. The
named roles need not exist or be granted to the account.
role [, role ] ...: Activate
the named roles, which must be granted to the account.
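To confirm the effect of a SET ROLE statement, you can check CURRENT_ROLE() before and after; this sketch assumes roles 'role1' and 'role2' have been granted to the current account:
SELECT CURRENT_ROLE();
SET ROLE 'role1', 'role2';
SELECT CURRENT_ROLE();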
SET DEFAULT ROLE and
SET ROLE
DEFAULT are different statements:
SET DEFAULT ROLE defines
which account roles to activate by default within account
sessions.
SET ROLE
DEFAULT sets the active roles within the current
session to the current account default roles.
For role usage examples, see Section 6.2.10, “Using Roles”.
MySQL supports creation and management of resource groups, and permits assigning threads running within the server to particular groups so that threads execute according to the resources available to the group. This section describes the SQL statements available for resource group management. For general discussion of the resource group capability, see Section 5.1.15, “Resource Groups”.
ALTER RESOURCE GROUP group_name
    [VCPU [=] vcpu_spec [, vcpu_spec] ...]
    [THREAD_PRIORITY [=] N]
    [ENABLE|DISABLE [FORCE]]

vcpu_spec: {N | M-N}
ALTER RESOURCE GROUP is used for
resource group management (see
Section 5.1.15, “Resource Groups”). This statement alters
modifiable attributes of an existing resource group. It requires
the RESOURCE_GROUP_ADMIN
privilege.
group_name identifies which resource
group to alter. If the group does not exist, an error occurs.
The attributes for CPU affinity, priority, and whether the group
is enabled can be modified with ALTER
RESOURCE GROUP. These attributes are specified the
same way as described for CREATE RESOURCE
GROUP (see Section 13.7.2.2, “CREATE RESOURCE GROUP Statement”).
Only the attributes specified are altered. Unspecified
attributes retain their current values.
The FORCE modifier is used with
DISABLE. It determines statement behavior if
the resource group has any threads assigned to it:
If FORCE is not given, existing threads
in the group continue to run until they terminate, but new
threads cannot be assigned to the group.
If FORCE is given, existing threads in
the group are moved to their respective default group
(system threads to SYS_default, user
threads to USR_default).
The name and type attributes are set at group creation time and
cannot be modified thereafter with ALTER
RESOURCE GROUP.
Examples:
Alter a group's CPU affinity:
ALTER RESOURCE GROUP rg1 VCPU = 0-63;
Alter a group's thread priority:
ALTER RESOURCE GROUP rg2 THREAD_PRIORITY = 5;
Disable a group, moving any threads assigned to it to the default groups:
ALTER RESOURCE GROUP rg3 DISABLE FORCE;
Resource group management is local to the server on which it
occurs. ALTER RESOURCE GROUP
statements are not written to the binary log and are not
replicated.
CREATE RESOURCE GROUP group_name
    TYPE = {SYSTEM|USER}
    [VCPU [=] vcpu_spec [, vcpu_spec] ...]
    [THREAD_PRIORITY [=] N]
    [ENABLE|DISABLE]

vcpu_spec: {N | M-N}
CREATE RESOURCE GROUP is used for
resource group management (see
Section 5.1.15, “Resource Groups”). This statement creates a new
resource group and assigns its initial attribute values. It
requires the RESOURCE_GROUP_ADMIN
privilege.
group_name identifies which resource
group to create. If the group already exists, an error occurs.
The TYPE attribute is required. It should be
SYSTEM for a system resource group, or
USER for a user resource group. The group
type affects permitted THREAD_PRIORITY
values, as described later.
The VCPU attribute indicates the CPU
affinity; that is, the set of virtual CPUs the group can use:
If VCPU is not given, the resource group
has no CPU affinity and can use all available CPUs.
If VCPU is given, the attribute value is
a list of comma-separated CPU numbers or ranges:
Each number must be an integer in the range from 0 to the number of CPUs − 1. For example, on a system with 64 CPUs, the number can range from 0 to 63.
A range is given in the form
M-N, where
M is less than or equal to
N and both numbers are in the
CPU range.
If a CPU number is an integer outside the permitted range or is not an integer, an error occurs.
Example VCPU specifiers (these are all
equivalent):
VCPU = 0,1,2,3,9,10
VCPU = 0-3,9-10
VCPU = 9,10,0-3
VCPU = 0,10,1,9,3,2
The THREAD_PRIORITY attribute indicates the
priority for threads assigned to the group:
If THREAD_PRIORITY is not given, the
default priority is 0.
If THREAD_PRIORITY is given, the
attribute value must be in the range from -20 (highest
priority) to 19 (lowest priority). The priority for system
resource groups must be in the range from -20 to 0. The
priority for user resource groups must be in the range from
0 to 19. Use of different ranges for system and user groups
ensures that user threads never have a higher priority than
system threads.
ENABLE and DISABLE specify
that the resource group is initially enabled or disabled. If
neither is specified, the group is enabled by default. A
disabled group cannot have threads assigned to it.
Examples:
Create an enabled user group that has a single CPU and the lowest priority:
CREATE RESOURCE GROUP rg1 TYPE = USER VCPU = 0 THREAD_PRIORITY = 19;
Create a disabled system group that has no CPU affinity (can use all CPUs) and the highest priority:
CREATE RESOURCE GROUP rg2 TYPE = SYSTEM THREAD_PRIORITY = -20 DISABLE;
Resource group management is local to the server on which it
occurs. CREATE RESOURCE GROUP
statements are not written to the binary log and are not
replicated.
DROP RESOURCE GROUP group_name [FORCE]
DROP RESOURCE GROUP is used for
resource group management (see
Section 5.1.15, “Resource Groups”). This statement drops a
resource group. It requires the
RESOURCE_GROUP_ADMIN privilege.
group_name identifies which resource
group to drop. If the group does not exist, an error occurs.
The FORCE modifier determines statement
behavior if the resource group has any threads assigned to it:
If FORCE is not given and any threads are
assigned to the group, an error occurs.
If FORCE is given, existing threads in
the group are moved to their respective default group
(system threads to SYS_default, user
threads to USR_default).
Examples:
Drop a group, failing if the group contains any threads:
DROP RESOURCE GROUP rg1;
Drop a group and move existing threads to the default groups:
DROP RESOURCE GROUP rg2 FORCE;
Resource group management is local to the server on which it
occurs. DROP RESOURCE GROUP
statements are not written to the binary log and are not
replicated.
SET RESOURCE GROUP group_name [FOR thread_id [, thread_id] ...]
SET RESOURCE GROUP is used for
resource group management (see
Section 5.1.15, “Resource Groups”). This statement assigns
threads to a resource group. It requires the
RESOURCE_GROUP_ADMIN or
RESOURCE_GROUP_USER privilege.
group_name identifies the resource
group to be assigned. Any thread_id
values indicate threads to assign to the group. Thread IDs can
be determined from the Performance Schema
threads table. If the resource
group or any named thread ID does not exist, an error occurs.
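For example, a query such as the following (the user name is illustrative) lists candidate thread IDs from the Performance Schema threads table, together with their current resource group:
SELECT THREAD_ID, PROCESSLIST_ID, TYPE, RESOURCE_GROUP
FROM performance_schema.threads
WHERE PROCESSLIST_USER = 'app_user';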
With no FOR clause, the statement assigns the
current thread for the session to the resource group.
With a FOR clause that names thread IDs, the
statement assigns those threads to the resource group.
For attempts to assign a system thread to a user resource group or a user thread to a system resource group, a warning occurs.
Examples:
Assign the current session thread to a group:
SET RESOURCE GROUP rg1;
Assign the named threads to a group:
SET RESOURCE GROUP rg2 FOR 14, 78, 4;
Resource group management is local to the server on which it
occurs. SET RESOURCE GROUP
statements are not written to the binary log and are not
replicated.
An alternative to SET RESOURCE
GROUP is the
RESOURCE_GROUP optimizer hint,
which assigns individual statements to a resource group. See
Section 8.9.3, “Optimizer Hints”.
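For example, a hint such as the following (group and table names are illustrative) runs a single statement under a given resource group:
SELECT /*+ RESOURCE_GROUP(rg1) */ COUNT(*) FROM orders;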
ANALYZE [NO_WRITE_TO_BINLOG | LOCAL]
TABLE tbl_name [, tbl_name] ...
ANALYZE [NO_WRITE_TO_BINLOG | LOCAL]
TABLE tbl_name
UPDATE HISTOGRAM ON col_name [, col_name] ...
[WITH N BUCKETS]
ANALYZE [NO_WRITE_TO_BINLOG | LOCAL]
TABLE tbl_name
DROP HISTOGRAM ON col_name [, col_name] ...
ANALYZE TABLE generates table
statistics:
ANALYZE TABLE without either
HISTOGRAM clause performs a key
distribution analysis and stores the distribution for the
named table or tables. For MyISAM tables,
ANALYZE TABLE for key
distribution analysis is equivalent to using
myisamchk --analyze.
ANALYZE TABLE with the
UPDATE HISTOGRAM clause generates
histogram statistics for the named table columns and stores
them in the data dictionary. Only one table name is
permitted for this syntax.
ANALYZE TABLE with the
DROP HISTOGRAM clause removes histogram
statistics for the named table columns from the data
dictionary. Only one table name is permitted for this
syntax.
This statement requires SELECT
and INSERT privileges for the
table.
ANALYZE TABLE works with
InnoDB, NDB, and
MyISAM tables. It does not work with views.
If the innodb_read_only system
variable is enabled, ANALYZE
TABLE may fail because it cannot update statistics
tables in the data dictionary, which use
InnoDB. For ANALYZE
TABLE operations that update the key distribution,
failure may occur even if the operation updates the table itself
(for example, if it is a MyISAM table). To
obtain the updated distribution statistics, set
information_schema_stats_expiry=0.
ANALYZE TABLE is supported for
partitioned tables, and you can use ALTER TABLE ...
ANALYZE PARTITION to analyze one or more partitions;
for more information, see Section 13.1.9, “ALTER TABLE Statement”, and
Section 23.3.4, “Maintenance of Partitions”.
During the analysis, the table is locked with a read lock for
InnoDB and MyISAM.
ANALYZE TABLE removes the table
from the table definition cache, which requires a flush lock. If
there are long running statements or transactions still using
the table, subsequent statements and transactions must wait for
those operations to finish before the flush lock is released.
Because ANALYZE TABLE itself
typically finishes quickly, it may not be apparent that delayed
transactions or statements involving the same table are due to
the remaining flush lock.
By default, the server writes ANALYZE
TABLE statements to the binary log so that they
replicate to replication slaves. To suppress logging, specify
the optional NO_WRITE_TO_BINLOG keyword or
its alias LOCAL.
ANALYZE TABLE returns a result
set with the columns shown in the following table.
| Column   | Value |
|----------|-------|
| Table    | The table name |
| Op       | analyze or histogram |
| Msg_type | status, error, info, note, or warning |
| Msg_text | An informational message |
ANALYZE TABLE without either
HISTOGRAM clause performs a key
distribution analysis and stores the distribution for the
table or tables. Any existing histogram statistics remain
unaffected.
If the table has not changed since the last key distribution analysis, the table is not analyzed again.
MySQL uses the stored key distribution to decide the order in which tables should be joined for joins on something other than a constant. In addition, key distributions can be used when deciding which indexes to use for a specific table within a query.
To check the stored key distribution cardinality, use the
SHOW INDEX statement or the
INFORMATION_SCHEMA
STATISTICS table. See
Section 13.7.7.22, “SHOW INDEX Statement”, and
Section 25.33, “The INFORMATION_SCHEMA STATISTICS Table”.
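For example, either of the following (schema and table names are illustrative) shows the cardinality estimates stored by the key distribution analysis:
SHOW INDEX FROM test.t1;
SELECT INDEX_NAME, COLUMN_NAME, CARDINALITY
FROM INFORMATION_SCHEMA.STATISTICS
WHERE TABLE_SCHEMA = 'test' AND TABLE_NAME = 't1';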
For InnoDB tables,
ANALYZE TABLE determines index
cardinality by performing random dives on each of the index
trees and updating index cardinality estimates accordingly.
Because these are only estimates, repeated runs of
ANALYZE TABLE could produce
different numbers. This makes ANALYZE
TABLE fast on InnoDB tables but
not 100% accurate because it does not take all rows into
account.
You can make the
statistics collected by
ANALYZE TABLE more precise and
more stable by enabling
innodb_stats_persistent, as
explained in Section 15.8.10.1, “Configuring Persistent Optimizer Statistics Parameters”. When
innodb_stats_persistent is
enabled, it is important to run ANALYZE
TABLE after major changes to index column data, as
statistics are not recalculated periodically (such as after a
server restart).
If innodb_stats_persistent is
enabled, you can change the number of random dives by
modifying the
innodb_stats_persistent_sample_pages
system variable. If
innodb_stats_persistent is
disabled, modify
innodb_stats_transient_sample_pages
instead.
For more information about key distribution analysis in
InnoDB, see
Section 15.8.10.1, “Configuring Persistent Optimizer Statistics Parameters”, and
Section 15.8.10.3, “Estimating ANALYZE TABLE Complexity for InnoDB Tables”.
MySQL uses index cardinality estimates in join optimization.
If a join is not optimized in the right way, try running
ANALYZE TABLE. In the few cases
that ANALYZE TABLE does not
produce values good enough for your particular tables, you can
use FORCE INDEX with your queries to force
the use of a particular index, or set the
max_seeks_for_key system
variable to ensure that MySQL prefers index lookups over table
scans. See Section B.4.5, “Optimizer-Related Issues”.
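For example, a query such as the following (table and index names are illustrative) forces use of a particular index, and the subsequent statement lowers the seek threshold for the current session:
SELECT * FROM t1 FORCE INDEX (idx_c1) WHERE c1 BETWEEN 10 AND 20;
SET SESSION max_seeks_for_key = 100;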
ANALYZE TABLE with the
HISTOGRAM clauses enables management of
histogram statistics for table column values. For information
about histogram statistics, see
Section 8.9.6, “Optimizer Statistics”.
These histogram operations are available:
ANALYZE TABLE with an
UPDATE HISTOGRAM clause generates
histogram statistics for the named table columns and
stores them in the data dictionary. Only one table name is
permitted for this syntax.
The optional WITH N BUCKETS clause
specifies the number of buckets for the histogram. The value
of N must be an integer in the
range from 1 to 1024. If this clause is omitted, the
number of buckets is 100.
ANALYZE TABLE with a
DROP HISTOGRAM clause removes histogram
statistics for the named table columns from the data
dictionary. Only one table name is permitted for this
syntax.
Stored histogram management statements affect only the named columns. Consider these statements:
ANALYZE TABLE t UPDATE HISTOGRAM ON c1, c2, c3 WITH 10 BUCKETS; ANALYZE TABLE t UPDATE HISTOGRAM ON c1, c3 WITH 10 BUCKETS; ANALYZE TABLE t DROP HISTOGRAM ON c2;
The first statement updates the histograms for columns
c1, c2, and
c3, replacing any existing histograms for
those columns. The second statement updates the histograms for
c1 and c3, leaving the
c2 histogram unaffected. The third
statement removes the histogram for c2,
leaving those for c1 and
c3 unaffected.
Histogram generation is not supported for encrypted tables (to
avoid exposing data in the statistics) or
TEMPORARY tables.
Histogram generation applies to columns of all data types
except geometry types (spatial data) and
JSON.
Histograms can be generated for stored and virtual generated columns.
Histograms cannot be generated for columns that are covered by single-column unique indexes.
Histogram management statements attempt to perform as much of
the requested operation as possible, and report diagnostic
messages for the remainder. For example, if an UPDATE
HISTOGRAM statement names multiple columns, but some
of them do not exist or have an unsupported data type,
histograms are generated for the other columns, and messages
are produced for the invalid columns.
Histograms are affected by these DDL statements:
DROP TABLE removes
histograms for columns in the dropped table.
DROP DATABASE removes
histograms for any table in the dropped database because
the statement drops all tables in the database.
RENAME TABLE does not
remove histograms. Instead, it renames histograms for the
renamed table to be associated with the new table name.
ALTER TABLE statements that
remove or modify a column remove histograms for that
column.
ALTER TABLE
... CONVERT TO CHARACTER SET removes histograms
for character columns because they are affected by the
change of character set. Histograms for noncharacter
columns remain unaffected.
The
histogram_generation_max_mem_size
system variable controls the maximum amount of memory
available for histogram generation. The global and session
values may be set at runtime.
Changing the global
histogram_generation_max_mem_size
value requires privileges sufficient to set global system
variables. Changing the session
histogram_generation_max_mem_size
value requires privileges sufficient to set restricted session
system variables. See
Section 5.1.9.1, “System Variable Privileges”.
If the estimated amount of data to be read into memory for
histogram generation exceeds the limit defined by
histogram_generation_max_mem_size,
MySQL samples the data rather than reading all of it into
memory. Sampling is evenly distributed over the entire table.
MySQL uses SYSTEM sampling, which is a
page-level sampling method.
The sampling-rate value in the
HISTOGRAM column of
INFORMATION_SCHEMA.COLUMN_STATISTICS
table can be queried to determine the fraction of data that
was sampled to create the histogram. The
sampling-rate is a number between 0.0 and
1.0. A value of 1 means that all of the data was read (no
sampling).
The following example demonstrates sampling. To ensure that
the amount of data exceeds the
histogram_generation_max_mem_size
limit for the purpose of the example, the limit is set to a
low value (2000000 bytes) prior to generating histogram
statistics for the birth_date column of the
employees table.
mysql> SET histogram_generation_max_mem_size = 2000000;
mysql> USE employees;
mysql> ANALYZE TABLE employees UPDATE HISTOGRAM ON birth_date WITH 16 BUCKETS\G
*************************** 1. row ***************************
   Table: employees.employees
      Op: histogram
Msg_type: status
Msg_text: Histogram statistics created for column 'birth_date'.
mysql> SELECT HISTOGRAM->>'$."sampling-rate"'
       FROM INFORMATION_SCHEMA.COLUMN_STATISTICS
       WHERE TABLE_NAME = "employees"
       AND COLUMN_NAME = "birth_date";
+---------------------------------+
| HISTOGRAM->>'$."sampling-rate"' |
+---------------------------------+
| 0.0491431208869665              |
+---------------------------------+
A sampling-rate value of 0.0491431208869665
means that approximately 4.9% of the data from the
birth_date column was read into memory for
generating histogram statistics.
As of MySQL 8.0.19, the InnoDB storage
engine provides its own sampling implementation for data
stored in InnoDB tables. The default
sampling implementation used by MySQL when storage engines do
not provide their own requires a full table scan, which is
costly for large tables. The InnoDB
sampling implementation improves sampling performance by
avoiding full table scans.
The sampled_pages_read and
sampled_pages_skipped
INNODB_METRICS counters can be
used to monitor sampling of InnoDB data
pages. (For general
INNODB_METRICS counter usage
information, see Section 25.47.22, “The INFORMATION_SCHEMA INNODB_METRICS Table”.)
The following example demonstrates sampling counter usage, which requires enabling the counters prior to generating histogram statistics.
mysql> SET GLOBAL innodb_monitor_enable = 'sampled%';
mysql> USE employees;
mysql> ANALYZE TABLE employees UPDATE HISTOGRAM ON birth_date WITH 16 BUCKETS\G
*************************** 1. row ***************************
   Table: employees.employees
      Op: histogram
Msg_type: status
Msg_text: Histogram statistics created for column 'birth_date'.
mysql> USE INFORMATION_SCHEMA;
mysql> SELECT NAME, COUNT FROM INNODB_METRICS WHERE NAME LIKE 'sampled%'\G
*************************** 1. row ***************************
 NAME: sampled_pages_read
COUNT: 43
*************************** 2. row ***************************
 NAME: sampled_pages_skipped
COUNT: 843
This formula approximates a sampling rate based on the sampling counter data:
sampling rate = sampled_pages_read / (sampled_pages_read + sampled_pages_skipped)
A sampling rate based on sampling counter data will be roughly
the same as the sampling-rate value in the
HISTOGRAM column of
INFORMATION_SCHEMA.COLUMN_STATISTICS
table.
For information about memory allocations performed for
histogram generation, monitor the Performance Schema
memory/sql/histograms instrument. See
Section 26.12.18.10, “Memory Summary Tables”.
ANALYZE TABLE clears table statistics from
the
INFORMATION_SCHEMA.INNODB_TABLESTATS
table and sets the STATS_INITIALIZED column
to Uninitialized. Statistics are collected
again the next time the table is accessed.
CHECK TABLE tbl_name [, tbl_name] ... [option] ...

option: {
    FOR UPGRADE
  | QUICK
  | FAST
  | MEDIUM
  | EXTENDED
  | CHANGED
}
CHECK TABLE checks a table or
tables for errors. CHECK TABLE
can also check views for problems, such as tables that are
referenced in the view definition that no longer exist.
To check a table, you must have some privilege for it.
CHECK TABLE works for
InnoDB,
MyISAM,
ARCHIVE, and
CSV tables.
Before running CHECK TABLE on
InnoDB tables, see
CHECK TABLE Usage Notes for InnoDB Tables.
CHECK TABLE is supported for
partitioned tables, and you can use ALTER TABLE ...
CHECK PARTITION to check one or more partitions; for
more information, see Section 13.1.9, “ALTER TABLE Statement”, and
Section 23.3.4, “Maintenance of Partitions”.
CHECK TABLE ignores virtual
generated columns that are not indexed.
CHECK TABLE returns a result
set with the columns shown in the following table.
| Column   | Value |
|----------|-------|
| Table    | The table name |
| Op       | Always check |
| Msg_type | status, error, info, note, or warning |
| Msg_text | An informational message |
The statement might produce many rows of information for each
checked table. The last row has a Msg_type
value of status and the
Msg_text normally should be
OK. Table is already up to
date means that the storage engine for the table
indicated that there was no need to check the table.
The FOR UPGRADE option checks whether the
named tables are compatible with the current version of MySQL.
With FOR UPGRADE, the server checks each
table to determine whether there have been any incompatible
changes in any of the table's data types or indexes since the
table was created. If not, the check succeeds. Otherwise, if
there is a possible incompatibility, the server runs a full
check on the table (which might take some time).
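For example, the following statement (the table name is illustrative) checks a single table for version compatibility:
CHECK TABLE world.city FOR UPGRADE;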
Incompatibilities might occur because the storage format for a data type has changed or because its sort order has changed. Our aim is to avoid these changes, but occasionally they are necessary to correct problems that would be worse than an incompatibility between releases.
FOR UPGRADE discovers these
incompatibilities:
The indexing order for end-space in
TEXT columns for
InnoDB and MyISAM
tables changed between MySQL 4.1 and 5.0.
The storage method of the new
DECIMAL data type changed
between MySQL 5.0.3 and 5.0.5.
Changes are sometimes made to character sets or collations that require table indexes to be rebuilt. For details about such changes, see Section 2.11.4, “Changes in MySQL 8.0”. For information about rebuilding tables, see Section 2.11.13, “Rebuilding or Repairing Tables or Indexes”.
MySQL 8.0 does not support the 2-digit
YEAR(2) data type permitted
in older versions of MySQL. For tables containing
YEAR(2) columns,
CHECK TABLE recommends
REPAIR TABLE, which
converts 2-digit YEAR(2)
columns to 4-digit YEAR
columns.
Trigger creation time is maintained.
A table is reported as needing a rebuild if it contains
old temporal columns in pre-5.6.4 format
(TIME,
DATETIME, and
TIMESTAMP columns without
support for fractional seconds precision) and the
avoid_temporal_upgrade
system variable is disabled. This helps the MySQL upgrade
procedure detect and upgrade tables containing old
temporal columns. If
avoid_temporal_upgrade is
enabled, FOR UPGRADE ignores the old
temporal columns present in the table; consequently, the
upgrade procedure does not upgrade them.
To check for tables that contain such temporal columns and
need a rebuild, disable
avoid_temporal_upgrade
before executing
CHECK TABLE
... FOR UPGRADE.
Warnings are issued for tables that use nonnative partitioning because nonnative partitioning is removed in MySQL 8.0. See Chapter 23, Partitioning.
The following table shows the other check options that can be given. These options are passed to the storage engine, which may use or ignore them.
| Type     | Meaning |
|----------|---------|
| QUICK    | Do not scan the rows to check for incorrect links. Applies to InnoDB and MyISAM tables and views. |
| FAST     | Check only tables that have not been closed properly. Ignored for InnoDB; applies only to MyISAM tables and views. |
| CHANGED  | Check only tables that have been changed since the last check or that have not been closed properly. Ignored for InnoDB; applies only to MyISAM tables and views. |
| MEDIUM   | Scan rows to verify that deleted links are valid. This also calculates a key checksum for the rows and verifies this with a calculated checksum for the keys. Ignored for InnoDB; applies only to MyISAM tables and views. |
| EXTENDED | Do a full key lookup for all keys for each row. This ensures that the table is 100% consistent, but takes a long time. Ignored for InnoDB; applies only to MyISAM tables and views. |
You can combine check options, as in the following example that does a quick check on the table to determine whether it was closed properly:
CHECK TABLE test_table FAST QUICK;
If CHECK TABLE finds no
problems with a table that is marked as
“corrupted” or “not closed
properly”, CHECK TABLE
may remove the mark.
If a table is corrupted, the problem is most likely in the indexes and not in the data part. All of the preceding check types check the indexes thoroughly and should thus find most errors.
To check a table that you assume is okay, use no check options
or the QUICK option. The latter should be
used when you are in a hurry and can take the very small risk
that QUICK does not find an error in the
data file. (In most cases, under normal usage, MySQL should
find any error in the data file. If this happens, the table is
marked as “corrupted” and cannot be used until it
is repaired.)
FAST and CHANGED are
mostly intended to be used from a script (for example, to be
executed from cron) to check tables
periodically. In most cases, FAST is to be
preferred over CHANGED. (The only case when
it is not preferred is when you suspect that you have found a
bug in the MyISAM code.)
EXTENDED is to be used only after you have
run a normal check but still get errors from a table when
MySQL tries to update a row or find a row by key. This is very
unlikely if a normal check has succeeded.
Use of CHECK
TABLE ... EXTENDED might influence execution plans
generated by the query optimizer.
Some problems reported by CHECK
TABLE cannot be corrected automatically:
Found row where the auto_increment column has the
value 0.
This means that you have a row in the table where the
AUTO_INCREMENT index column contains
the value 0. (It is possible to create a row where the
AUTO_INCREMENT column is 0 by
explicitly setting the column to 0 with an
UPDATE statement.)
This is not an error in itself, but could cause trouble if
you decide to dump the table and restore it or do an
ALTER TABLE on the table.
In this case, the AUTO_INCREMENT column
changes value according to the rules of
AUTO_INCREMENT columns, which could
cause problems such as a duplicate-key error.
To get rid of the warning, execute an
UPDATE statement to set the
column to some value other than 0.
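A minimal sketch, assuming a table t1 whose AUTO_INCREMENT column is named id and a replacement value that does not collide with existing rows:
UPDATE t1 SET id = 1000 WHERE id = 0;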
The following notes apply to
InnoDB tables:
If CHECK TABLE encounters a
corrupt page, the server exits to prevent error
propagation (Bug #10132). If the corruption occurs in a
secondary index but table data is readable, running
CHECK TABLE can still cause
a server exit.
If CHECK TABLE encounters a
corrupted DB_TRX_ID or
DB_ROLL_PTR field in a clustered index,
CHECK TABLE can cause
InnoDB to access an invalid undo log
record, resulting in an
MVCC-related server exit.
If CHECK TABLE encounters
errors in InnoDB tables or indexes, it
reports an error, and usually marks the index and
sometimes marks the table as corrupted, preventing further
use of the index or table. Such errors include an
incorrect number of entries in a secondary index or
incorrect links.
If CHECK TABLE finds an
incorrect number of entries in a secondary index, it
reports an error but does not cause a server exit or
prevent access to the file.
CHECK TABLE surveys the
index page structure, then surveys each key entry. It does
not validate the key pointer to a clustered record or
follow the path for BLOB
pointers.
When an InnoDB table is stored in its
own
.ibd
file, the first 3
pages of the
.ibd file contain header information
rather than table or index data. The
CHECK TABLE statement does
not detect inconsistencies that affect only the header
data. To verify the entire contents of an
InnoDB .ibd file,
use the innochecksum command.
When running CHECK TABLE on
large InnoDB tables, other threads may
be blocked during CHECK
TABLE execution. To avoid timeouts, the
semaphore wait threshold (600 seconds) is extended by 2
hours (7200 seconds) for CHECK
TABLE operations. If InnoDB
detects semaphore waits of 240 seconds or more, it starts
printing InnoDB monitor output to the
error log. If a lock request extends beyond the semaphore
wait threshold, InnoDB aborts the
process. To avoid the possibility of a semaphore wait
timeout entirely, run
CHECK TABLE
QUICK instead of CHECK
TABLE.
CHECK TABLE functionality
for InnoDB SPATIAL
indexes includes an R-tree validity check and a check to
ensure that the R-tree row count matches the clustered
index.
CHECK TABLE supports
secondary indexes on virtual generated columns, which are
supported by InnoDB.
As of MySQL 8.0.14, InnoDB supports
parallel clustered index reads, which can improve
CHECK TABLE performance.
InnoDB reads the clustered index twice
during a CHECK TABLE
operation. The second read can be performed in parallel.
The
innodb_parallel_read_threads
session variable must be set to a value greater than 1 for
parallel clustered index reads to occur. The default value
is 4. The actual number of threads used to perform a
parallel clustered index read is determined by the
innodb_parallel_read_threads
setting or the number of index subtrees to scan, whichever
is smaller.
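For example, assuming MySQL 8.0.14 or higher and an illustrative table name, the following raises the session setting before running the check:
SET SESSION innodb_parallel_read_threads = 8;
CHECK TABLE big_table;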
The following notes apply to
MyISAM tables:
CHECK TABLE updates key
statistics for MyISAM tables.
If CHECK TABLE output does
not return OK or Table is
already up to date, you should normally run a
repair of the table. See
Section 7.6, “MyISAM Table Maintenance and Crash Recovery”.
If none of the CHECK TABLE
options QUICK,
MEDIUM, or EXTENDED
are specified, the default check type for dynamic-format
MyISAM tables is
MEDIUM. This has the same result as
running myisamchk --medium-check
tbl_name on the
table. The default check type also is
MEDIUM for static-format
MyISAM tables, unless
CHANGED or FAST is
specified. In that case, the default is
QUICK. The row scan is skipped for
CHANGED and FAST
because the rows are very seldom corrupted.
CHECKSUM TABLE tbl_name [, tbl_name] ... [QUICK | EXTENDED]
CHECKSUM TABLE reports a
checksum for the contents
of a table. You can use this statement to verify that the
contents are the same before and after a backup, rollback, or
other operation that is intended to put the data back to a known
state.
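For example, assuming an illustrative table name, compare the value reported before and after the operation:
CHECKSUM TABLE t1;
-- perform the backup, restore, or rollback here, then:
CHECKSUM TABLE t1;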
This statement requires the
SELECT privilege for the table.
This statement is not supported for views. If you run
CHECKSUM TABLE against a view,
the Checksum value is always
NULL, and a warning is returned.
For a nonexistent table, CHECKSUM
TABLE returns NULL and generates a
warning.
During the checksum operation, the table is locked with a read
lock for InnoDB and
MyISAM.
By default, the entire table is read row by row and the
checksum is calculated. For large tables, this could take a
long time, thus you would only perform this operation
occasionally. This row-by-row calculation is what you get with
the EXTENDED clause, with
InnoDB and all other storage engines other
than MyISAM, and with
MyISAM tables not created with the
CHECKSUM=1 clause.
For MyISAM tables created with the
CHECKSUM=1 clause,
CHECKSUM TABLE or
CHECKSUM TABLE
... QUICK returns the “live” table
checksum that can be returned very fast. If the table does not
meet all these conditions, the QUICK method
returns NULL. The QUICK
method is not supported with InnoDB tables.
See Section 13.1.20, “CREATE TABLE Statement” for the syntax of the
CHECKSUM clause.
The checksum value depends on the table row format. If the row
format changes, the checksum also changes. For example, the
storage format for temporal types such as
TIME,
DATETIME, and
TIMESTAMP changed in MySQL 5.6
prior to MySQL 5.6.5, so if a 5.5 table is upgraded to MySQL
5.6, the checksum value may change.
If the checksums for two tables are different, then it is
almost certain that the tables are different in some way.
However, because the hashing function used by
CHECKSUM TABLE is not
guaranteed to be collision-free, there is a slight chance
that two tables which are not identical can produce the same
checksum.
OPTIMIZE [NO_WRITE_TO_BINLOG | LOCAL]
TABLE tbl_name [, tbl_name] ...
OPTIMIZE TABLE reorganizes the
physical storage of table data and associated index data, to
reduce storage space and improve I/O efficiency when accessing
the table. The exact changes made to each table depend on the
storage engine used
by that table.
Use OPTIMIZE TABLE in these
cases, depending on the type of table:
After doing substantial insert, update, or delete operations
on an InnoDB table that has its own
.ibd file because it
was created with the
innodb_file_per_table
option enabled. The table and indexes are reorganized, and
disk space can be reclaimed for use by the operating system.
After doing substantial insert, update, or delete operations
on columns that are part of a FULLTEXT
index in an InnoDB table. Set the
configuration option
innodb_optimize_fulltext_only=1
first. To keep the index maintenance period to a reasonable
time, set the
innodb_ft_num_word_optimize
option to specify how many words to update in the search
index, and run a sequence of OPTIMIZE
TABLE statements until the search index is fully
updated (see the sketch following this list).
After deleting a large part of a MyISAM
or ARCHIVE table, or making many changes
to a MyISAM or ARCHIVE
table with variable-length rows (tables that have
VARCHAR,
VARBINARY,
BLOB, or
TEXT columns). Deleted rows
are maintained in a linked list and subsequent
INSERT operations reuse old
row positions. You can use OPTIMIZE
TABLE to reclaim the unused space and to
defragment the data file. After extensive changes to a
table, this statement may also improve performance of
statements that use the table, sometimes significantly.
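The following is a minimal sketch of the FULLTEXT maintenance sequence described in the second case above; the table name and word count are illustrative:
SET GLOBAL innodb_optimize_fulltext_only = 1;
SET GLOBAL innodb_ft_num_word_optimize = 2000;
OPTIMIZE TABLE articles;
-- repeat OPTIMIZE TABLE until the search index is fully updated, then:
SET GLOBAL innodb_optimize_fulltext_only = 0;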
This statement requires SELECT
and INSERT privileges for the
table.
OPTIMIZE TABLE works for
InnoDB,
MyISAM, and
ARCHIVE tables.
OPTIMIZE TABLE is also supported
for dynamic columns of in-memory
NDB tables. It does not work for
fixed-width columns of in-memory tables, nor does it work for
Disk Data tables. The performance of OPTIMIZE
on NDB Cluster tables can be tuned using
--ndb-optimization-delay, which
controls the length of time to wait between processing batches
of rows by OPTIMIZE TABLE. For
more information, see
Section 22.1.7.11, “Previous NDB Cluster Issues Resolved in NDB Cluster 8.0”.
For NDB Cluster tables, OPTIMIZE
TABLE can be interrupted by (for example) killing the
SQL thread performing the OPTIMIZE operation.
By default, OPTIMIZE TABLE does
not work for tables created using any other
storage engine and returns a result indicating this lack of
support. You can make OPTIMIZE
TABLE work for other storage engines by starting
mysqld with the
--skip-new option. In this case,
OPTIMIZE TABLE is just mapped to
ALTER TABLE.
This statement does not work with views.
OPTIMIZE TABLE is supported for
partitioned tables. For information about using this statement
with partitioned tables and table partitions, see
Section 23.3.4, “Maintenance of Partitions”.
By default, the server writes OPTIMIZE
TABLE statements to the binary log so that they
replicate to replication slaves. To suppress logging, specify
the optional NO_WRITE_TO_BINLOG keyword or
its alias LOCAL.
OPTIMIZE TABLE returns a result
set with the columns shown in the following table.
| Column   | Value |
|----------|-------|
| Table    | The table name |
| Op       | Always optimize |
| Msg_type | status, error, info, note, or warning |
| Msg_text | An informational message |
OPTIMIZE TABLE catches
and throws any errors that occur while copying table
statistics from the old file to the newly created file. For
example, if the user ID of the owner of the
.MYD or .MYI file is
different from the user ID of the mysqld
process, OPTIMIZE TABLE
generates a "cannot change ownership of the file" error unless
mysqld is started by the
root user.
For InnoDB tables,
OPTIMIZE TABLE is mapped to
ALTER TABLE ...
FORCE, which rebuilds the table to update index
statistics and free unused space in the clustered index. This
is displayed in the output of OPTIMIZE
TABLE when you run it on an
InnoDB table, as shown here:
mysql> OPTIMIZE TABLE foo; +----------+----------+----------+-------------------------------------------------------------------+ | Table | Op | Msg_type | Msg_text | +----------+----------+----------+-------------------------------------------------------------------+ | test.foo | optimize | note | Table does not support optimize, doing recreate + analyze instead | | test.foo | optimize | status | OK | +----------+----------+----------+-------------------------------------------------------------------+
OPTIMIZE TABLE uses
online DDL for
regular and partitioned InnoDB tables,
which reduces downtime for concurrent DML operations. The
table rebuild triggered by OPTIMIZE
TABLE and performed under the cover by
ALTER TABLE ...
FORCE is completed in place. An exclusive table lock
is only taken briefly during the prepare phase and the commit
phase of the operation. During the prepare phase, metadata is
updated and an intermediate table is created. During the
commit phase, table metadata changes are committed.
OPTIMIZE TABLE rebuilds the
table using the table copy method under the following
conditions:
When the old_alter_table
system variable is enabled.
When the server is started with the
--skip-new option.
OPTIMIZE TABLE using
online DDL is not
supported for InnoDB tables that contain
FULLTEXT indexes. The table copy method is
used instead.
InnoDB stores data using a page-allocation
method and does not suffer from fragmentation in the same way
that legacy storage engines (such as
MyISAM) will. When considering whether or
not to run optimize, consider the workload of transactions
that your server will process:
Some level of fragmentation is expected.
InnoDB only fills
pages 93% full, to
leave room for updates without having to split pages.
Delete operations might leave gaps that leave pages less filled than desired, which could make it worthwhile to optimize the table.
Updates to rows usually rewrite the data within the same page, depending on the data type and row format, when sufficient space is available. See Section 15.9.1.5, “How Compression Works for InnoDB Tables” and Section 15.10, “InnoDB Row Formats”.
High-concurrency workloads might leave gaps in indexes
over time, as InnoDB retains multiple
versions of the same data through its
MVCC mechanism. See
Section 15.3, “InnoDB Multi-Versioning”.
For MyISAM tables,
OPTIMIZE TABLE works as
follows:
If the table has deleted or split rows, repair the table.
If the index pages are not sorted, sort them.
If the table's statistics are not up to date (and the repair could not be accomplished by sorting the index), update them.
OPTIMIZE TABLE is performed
online for regular and partitioned InnoDB
tables. Otherwise, MySQL locks
the table during the time OPTIMIZE
TABLE is running.
OPTIMIZE TABLE does not sort
R-tree indexes, such as spatial indexes on
POINT columns. (Bug #23578)
REPAIR [NO_WRITE_TO_BINLOG | LOCAL]
TABLE tbl_name [, tbl_name] ...
[QUICK] [EXTENDED] [USE_FRM]
REPAIR TABLE repairs a possibly
corrupted table, for certain storage engines only.
This statement requires SELECT
and INSERT privileges for the
table.
Although normally you should never have to run
REPAIR TABLE, if disaster
strikes, this statement is very likely to get back all your data
from a MyISAM table. If your tables become
corrupted often, try to find the reason for it, to eliminate the
need to use REPAIR TABLE. See
Section B.4.3.3, “What to Do If MySQL Keeps Crashing”, and
Section 16.2.4, “MyISAM Table Problems”.
REPAIR TABLE checks the table to
see whether an upgrade is required. If so, it performs the
upgrade, following the same rules as
CHECK TABLE ... FOR
UPGRADE. See Section 13.7.3.2, “CHECK TABLE Statement”, for more
information.
Make a backup of a table before performing a table repair operation; under some circumstances the operation might cause data loss. Possible causes include but are not limited to file system errors. See Chapter 7, Backup and Recovery.
If the server crashes during a REPAIR
TABLE operation, it is essential after
restarting it that you immediately execute another
REPAIR TABLE statement for
the table before performing any other operations on it. In
the worst case, you might have a new clean index file
without information about the data file, and then the next
operation you perform could overwrite the data file. This
is an unlikely but possible scenario that underscores the
value of making a backup first.
In the event that a table on the master becomes corrupted
and you run REPAIR TABLE on
it, any resulting changes to the original table are
not propagated to slaves.
REPAIR TABLE works for
MyISAM,
ARCHIVE, and
CSV tables. For
MyISAM tables, it has the same
effect as myisamchk --recover
tbl_name by default. This
statement does not work with views.
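For example, the following statements (the table name is illustrative) run a default repair and an index-only repair:
REPAIR TABLE t1;
REPAIR TABLE t1 QUICK;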
REPAIR TABLE is supported for
partitioned tables. However, the USE_FRM
option cannot be used with this statement on a partitioned
table.
You can use ALTER TABLE ... REPAIR
PARTITION to repair one or more partitions; for more
information, see Section 13.1.9, “ALTER TABLE Statement”, and
Section 23.3.4, “Maintenance of Partitions”.
NO_WRITE_TO_BINLOG or
LOCAL
By default, the server writes REPAIR
TABLE statements to the binary log so that they
replicate to replication slaves. To suppress logging,
specify the optional NO_WRITE_TO_BINLOG
keyword or its alias LOCAL.
QUICK
If you use the QUICK option,
REPAIR TABLE tries to
repair only the index file, and not the data file. This
type of repair is like that done by myisamchk
--recover --quick.
EXTENDED
If you use the EXTENDED option, MySQL
creates the index row by row instead of creating one index
at a time with sorting. This type of repair is like that
done by myisamchk --safe-recover.
USE_FRM
The USE_FRM option is available for use
if the .MYI index file is missing or
if its header is corrupted. This option tells MySQL not to
trust the information in the .MYI
file header and to re-create it using information from the
data dictionary. This kind of repair cannot be done with
myisamchk.
Use the USE_FRM option
only if you cannot use regular
REPAIR modes. Telling the server to
ignore the .MYI file makes
important table metadata stored in the
.MYI unavailable to the repair
process, which can have deleterious consequences:
The current AUTO_INCREMENT value
is lost.
The link to deleted records in the table is lost, which means that free space for deleted records will remain unoccupied thereafter.
The .MYI header indicates
whether the table is compressed. If the server
ignores this information, it cannot tell that a
table is compressed and repair can cause change or
loss of table contents. This means that
USE_FRM should not be used with
compressed tables. That should not be necessary,
anyway: Compressed tables are read only, so they
should not become corrupt.
If you use USE_FRM for a table that
was created by a different version of the MySQL server
than the one you are currently running,
REPAIR TABLE does not
attempt to repair the table. In this case, the result
set returned by REPAIR
TABLE contains a line with a
Msg_type value of
error and a
Msg_text value of Failed
repairing incompatible .FRM file.
If USE_FRM is used,
REPAIR TABLE does not
check the table to see whether an upgrade is required.
REPAIR TABLE returns a result
set with the columns shown in the following table.
| Column   | Value |
|----------|-------|
| Table    | The table name |
| Op       | Always repair |
| Msg_type | status, error, info, note, or warning |
| Msg_text | An informational message |
The REPAIR TABLE statement
might produce many rows of information for each repaired
table. The last row has a Msg_type value of
status and Msg_text
normally should be OK. For a
MyISAM table, if you do not get
OK, you should try repairing it with
myisamchk --safe-recover.
(REPAIR TABLE does not
implement all the options of myisamchk.
With myisamchk --safe-recover, you can also
use options that REPAIR TABLE
does not support, such as
--max-record-length.)
REPAIR TABLE catches and
throws any errors that occur while copying table statistics
from the old corrupted file to the newly created file. For
example, if the user ID of the owner of the
.MYD or .MYI file is
different from the user ID of the mysqld
process, REPAIR TABLE generates
a "cannot change ownership of the file" error unless
mysqld is started by the
root user.
REPAIR TABLE upgrades a table
if it contains old temporal columns in pre-5.6.4 format
(TIME,
DATETIME, and
TIMESTAMP columns without
support for fractional seconds precision) and the
avoid_temporal_upgrade system
variable is disabled. If
avoid_temporal_upgrade is
enabled, REPAIR TABLE ignores
the old temporal columns present in the table and does not
upgrade them.
To upgrade tables that contain such temporal columns, disable
avoid_temporal_upgrade before
executing REPAIR TABLE.
You may be able to increase REPAIR
TABLE performance by setting certain system
variables. See Section 8.6.3, “Optimizing REPAIR TABLE Statements”.
CREATE [AGGREGATE] FUNCTION function_name
    RETURNS {STRING|INTEGER|REAL|DECIMAL}
    SONAME shared_library_name
A user-defined function (UDF) is a way to extend MySQL with a
new function that works like a native (built-in) MySQL function
such as ABS() or
CONCAT().
function_name is the name that should
be used in SQL statements to invoke the function. The
RETURNS clause indicates the type of the
function's return value. DECIMAL
is a legal value after RETURNS, but currently
DECIMAL functions return string
values and should be written like STRING
functions.
shared_library_name is the base name
of the shared library file that contains the code that
implements the function. The file must be located in the plugin
directory. This directory is given by the value of the
plugin_dir system variable. For
more information, see Section 5.7.1, “Installing and Uninstalling User-Defined Functions”.
To create a function, you must have the
INSERT privilege for the
mysql system schema. This is necessary
because CREATE FUNCTION adds a
row to the mysql.func system table that
records the function's name, type, and shared library name.
UDFs registered using CREATE
FUNCTION are listed in the Performance Schema
user_defined_functions table; see
Section 26.12.19.7, “The user_defined_functions Table”.
An active function is one that has been loaded with
CREATE FUNCTION and not removed
with DROP FUNCTION. All active
functions are reloaded each time the server starts, unless you
start mysqld with the
--skip-grant-tables option. In
this case, UDF initialization is skipped and UDFs are
unavailable.
For instructions on writing user-defined functions, see Section 29.4.2, “Adding a User-Defined Function”. For the UDF mechanism to work, functions must be written in C or C++ (or another language that can use C calling conventions), your operating system must support dynamic loading and you must have compiled mysqld dynamically (not statically).
An AGGREGATE function works exactly like a
native MySQL aggregate (summary) function such as
SUM() or
COUNT().
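For example, assuming a shared library named udf_example.so that exports these functions (the names match the example UDF library distributed with the MySQL source, and are used here only for illustration), a scalar UDF and an aggregate UDF could be loaded like this:
CREATE FUNCTION metaphon RETURNS STRING SONAME 'udf_example.so';
CREATE AGGREGATE FUNCTION avgcost RETURNS REAL SONAME 'udf_example.so';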
To upgrade the shared library associated with a UDF, issue a
DROP FUNCTION statement,
upgrade the shared library, and then issue a
CREATE FUNCTION statement. If
you upgrade the shared library first and then use
DROP FUNCTION, the server may
crash.
DROP FUNCTION function_name
This statement drops the user-defined function (UDF) named
function_name.
To drop a function, you must have the
DELETE privilege for the
mysql system schema. This is because
DROP FUNCTION removes a row from
the mysql.func system table that records the
function's name, type, and shared library name.
To upgrade the shared library associated with a UDF, issue a
DROP FUNCTION statement,
upgrade the shared library, and then issue a
CREATE FUNCTION statement. If
you upgrade the shared library first and then use
DROP FUNCTION, the server may
crash.
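As a sketch of that sequence, assuming a UDF named metaphon implemented in udf_example.so (both names illustrative):
DROP FUNCTION metaphon;
-- replace udf_example.so in the plugin directory with the new build
CREATE FUNCTION metaphon RETURNS STRING SONAME 'udf_example.so';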
DROP FUNCTION is also used to
drop stored functions (see Section 13.1.29, “DROP PROCEDURE and DROP FUNCTION Statements”).
INSTALL COMPONENT component_name [, component_name] ...
This statement installs one or more server components, which
become active immediately. A component provides services that
are available to the server and other components; see
Section 5.5, “MySQL Server Components”.
INSTALL COMPONENT requires the
INSERT privilege for the
mysql.component system table.
Example:
INSTALL COMPONENT 'file://component1', 'file://component2';
Component names are URNs that begin with
file:// and indicate the base name of the
file that implements the component, located in the directory
named by the plugin_dir system
variable. Component names do not include any platform-dependent
file name suffix such as .so or
.dll. (These naming details are subject to
change because component name interpretation is itself performed
by a service and the component infrastructure makes it possible
to replace the default service implementation with alternative
implementations.)
If any error occurs, the statement fails and has no effect. For example, this happens if a component name is erroneous, a named component does not exist or is already installed, or component initialization fails.
A loader service handles component loading, which includes
adding installed components to the
mysql.component system table that serves as a
registry. For subsequent server restarts, any components listed
in mysql.component are loaded by the loader
service during the startup sequence. This occurs even if the
server is started with the
--skip-grant-tables option.
If a component depends on services not present in the registry and you attempt to install the component without also installing the component or components that provide the services on which it depends, an error occurs:
ERROR 3527 (HY000): Cannot satisfy dependency for service 'component_a' required by component 'component_b'.
To avoid this problem, either install all components in the same statement, or install the dependent component after installing any components on which it depends.
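For example, if component_b depends on services provided by component_a (the component names are hypothetical), either of the following approaches avoids the dependency error:
INSTALL COMPONENT 'file://component_a', 'file://component_b';

INSTALL COMPONENT 'file://component_a';
INSTALL COMPONENT 'file://component_b';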
INSTALL PLUGIN plugin_name SONAME 'shared_library_name'
This statement installs a server plugin. It requires the
INSERT privilege for the
mysql.plugin system table.
plugin_name is the name of the plugin
as defined in the plugin descriptor structure contained in the
library file (see Section 29.2.4.2, “Plugin Data Structures”).
Plugin names are not case-sensitive. For maximal compatibility,
plugin names should be limited to ASCII letters, digits, and
underscore because they are used in C source files, shell
command lines, M4 and Bourne shell scripts, and SQL
environments.
shared_library_name is the name of
the shared library that contains the plugin code. The name
includes the file name extension (for example,
libmyplugin.so,
libmyplugin.dll, or
libmyplugin.dylib).
The shared library must be located in the plugin directory (the
directory named by the
plugin_dir system variable).
The library must be in the plugin directory itself, not in a
subdirectory. By default,
plugin_dir is the
plugin directory under the directory named
by the pkglibdir configuration variable, but
it can be changed by setting the value of
plugin_dir at server startup.
For example, set its value in a my.cnf
file:
[mysqld]
plugin_dir=/path/to/plugin/directory
If the value of plugin_dir is a
relative path name, it is taken to be relative to the MySQL base
directory (the value of the
basedir system variable).
INSTALL PLUGIN loads and
initializes the plugin code to make the plugin available for
use. A plugin is initialized by executing its initialization
function, which handles any setup that the plugin must perform
before it can be used. When the server shuts down, it executes
the deinitialization function for each plugin that is loaded so
that the plugin has a chance to perform any final cleanup.
INSTALL PLUGIN also registers the
plugin by adding a line that indicates the plugin name and
library file name to the mysql.plugin system
table. At server startup, the server loads and initializes any
plugin that is listed in mysql.plugin. This
means that a plugin is installed with
INSTALL PLUGIN only once, not
every time the server starts. Plugin loading at startup does not
occur if the server is started with the
--skip-grant-tables option.
A plugin library can contain multiple plugins. For each of them
to be installed, use a separate INSTALL
PLUGIN statement. Each statement names a different
plugin, but all of them specify the same library name.
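For example, assuming a library file libmyplugins.so (a hypothetical name) that contains two plugins, each plugin is installed with its own statement naming the same library:
INSTALL PLUGIN plugin_one SONAME 'libmyplugins.so';
INSTALL PLUGIN plugin_two SONAME 'libmyplugins.so';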
INSTALL PLUGIN causes the server
to read option (my.cnf) files just as
during server startup. This enables the plugin to pick up any
relevant options from those files. It is possible to add plugin
options to an option file even before loading a plugin (if the
loose prefix is used). It is also possible to
uninstall a plugin, edit my.cnf, and
install the plugin again. Restarting the plugin this way enables
it to pick up the new option values without a server restart.
For options that control individual plugin loading at server
startup, see Section 5.6.1, “Installing and Uninstalling Plugins”. If you need to
load plugins for a single server startup when the
--skip-grant-tables option is
given (which tells the server not to read system tables), use
the --plugin-load option. See
Section 5.1.7, “Server Command Options”.
To remove a plugin, use the UNINSTALL
PLUGIN statement.
For additional information about plugin loading, see Section 5.6.1, “Installing and Uninstalling Plugins”.
To see what plugins are installed, use the
SHOW PLUGINS statement or query
the INFORMATION_SCHEMA
PLUGINS table.
If you recompile a plugin library and need to reinstall it, you can use either of the following methods:
Use UNINSTALL PLUGIN to
uninstall all plugins in the library, install the new plugin
library file in the plugin directory, and then use
INSTALL PLUGIN to install all
plugins in the library. This procedure has the advantage
that it can be used without stopping the server. However, if
the plugin library contains many plugins, you must issue
many INSTALL PLUGIN and
UNINSTALL PLUGIN statements.
Stop the server, install the new plugin library file in the plugin directory, and restart the server.
UNINSTALL COMPONENT component_name [, component_name] ...
This statement deactivates and uninstalls one or more server
components. A component provides services that are available to
the server and other components; see
Section 5.5, “MySQL Server Components”.
UNINSTALL COMPONENT is the
complement of INSTALL COMPONENT.
It requires the DELETE privilege
for the mysql.component system table.
Example:
UNINSTALL COMPONENT 'file://component1', 'file://component2';
For information about component naming, see Section 13.7.4.3, “INSTALL COMPONENT Statement”.
If any error occurs, the statement fails and has no effect. For example, this happens if a component name is erroneous, a named component is not installed, or a named component cannot be uninstalled because other installed components depend on it.
A loader service handles component unloading, which includes
removing uninstalled components from the
mysql.component system table that serves as a
registry. As a result, unloaded components are not loaded during
the startup sequence for subsequent server restarts.
UNINSTALL PLUGIN plugin_name
This statement removes an installed server plugin. It requires
the DELETE privilege for the
mysql.plugin system table.
UNINSTALL PLUGIN is the
complement of INSTALL PLUGIN.
plugin_name must be the name of some
plugin that is listed in the mysql.plugin
table. The server executes the plugin's deinitialization
function and removes the row for the plugin from the
mysql.plugin system table, so that subsequent
server restarts will not load and initialize the plugin.
UNINSTALL PLUGIN does not remove
the plugin's shared library file.
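For example, to remove a previously installed plugin (the plugin name is hypothetical):
UNINSTALL PLUGIN plugin_one;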
You cannot uninstall a plugin if any table that uses it is open.
Plugin removal has implications for the use of associated
tables. For example, if a full-text parser plugin is associated
with a FULLTEXT index on the table,
uninstalling the plugin makes the table unusable. Any attempt to
access the table results in an error. The table cannot even be
opened, so you cannot drop an index for which the plugin is
used. This means that uninstalling a plugin is something to do
with care unless you do not care about the table contents. If
you are uninstalling a plugin with no intention of reinstalling
it later and you care about the table contents, you should dump
the table with mysqldump and remove the
WITH PARSER clause from the dumped
CREATE TABLE statement so that
you can reload the table later. If you do not care about the
table, DROP TABLE can be used
even if any plugins associated with the table are missing.
For additional information about plugin loading, see Section 5.6.1, “Installing and Uninstalling Plugins”.
CLONE clone_action

clone_action: {
    LOCAL DATA DIRECTORY [=] 'clone_dir'
  | INSTANCE FROM 'user'@'host':port
    IDENTIFIED BY 'password'
    [DATA DIRECTORY [=] 'clone_dir']
    [REQUIRE [NO] SSL]
}
The CLONE statement is used to
clone data locally or from a remote MySQL server instance. To use
CLONE syntax, the clone plugin must
be installed. See Section 5.6.7, “The Clone Plugin”.
CLONE LOCAL DATA
DIRECTORY syntax clones data from the local MySQL data
directory to a directory on the same server or node where the
MySQL server instance runs. The 'clone_dir'
directory is the full path of the local directory that data is
cloned to. An absolute path is required. The specified directory
must not exist, but the specified path must be an existent path.
The MySQL server requires the necessary write access to create the
specified directory. For more information, see
Section 5.6.7.2, “Cloning Data Locally”.
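As a minimal sketch (the path is illustrative; it must be absolute, and the directory must not already exist):
CLONE LOCAL DATA DIRECTORY = '/var/tmp/my_clone';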
CLONE INSTANCE
syntax clones data from a remote MySQL server instance (the donor)
and transfers it to the MySQL instance where the cloning operation
was initiated (the recipient).
user is the clone user on the donor MySQL server instance.
host is the hostname address of the donor
MySQL server instance. Internet Protocol version 6 (IPv6)
address format is not supported. An alias to the IPv6 address
can be used instead. An IPv4 address can be used as is.
port is the port number of the donor
MySQL server instance. (The X Protocol port specified by
mysqlx_port is not supported.
Connecting to the donor MySQL server instance through MySQL Router
is also not supported.)
IDENTIFIED BY 'password' specifies the
password of the clone user on the donor MySQL server instance.
DATA DIRECTORY [=] 'clone_dir' is an
optional clause used to specify a directory on the recipient
for the data you are cloning. Use this option if you do not
want to remove existing data in the recipient data directory.
An absolute path is required, and the directory must not
exist. The MySQL server must have the necessary write access
to create the directory.
When the optional DATA DIRECTORY [=] 'clone_dir'
clause is not used, a cloning operation removes existing data in the
recipient data directory, replaces it with the cloned data,
and automatically restarts the server afterward.
[REQUIRE [NO] SSL] explicitly specifies
whether an encrypted connection is to be used or not when
transferring cloned data over the network. An error is
returned if the explicit specification cannot be satisfied. If
an SSL clause is not specified, clone attempts to establish an
encrypted connection by default, falling back to an insecure
connection if the secure connection attempt fails. A secure
connection is required when cloning encrypted data regardless
of whether this clause is specified. For more information, see
Configuring an Encrypted Connection for Cloning.
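Putting these clauses together, a remote cloning operation might look like the following sketch (the user, host, port, password, and path are all illustrative):
CLONE INSTANCE FROM 'donor_clone_user'@'donor.example.com':3306
    IDENTIFIED BY 'password'
    DATA DIRECTORY = '/var/tmp/remote_clone'
    REQUIRE SSL;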
For additional information about cloning data from a remote MySQL server instance, see Section 5.6.7.3, “Cloning Remote Data”.
The SET
statement has several forms. Descriptions for those forms that are
not associated with a specific server capability appear in
subsections of this section:
SET var_name = value
enables you to
assign values to variables that affect the operation of the
server or clients. See Section 13.7.6.1, “SET Syntax for Variable Assignment”.
SET CHARACTER SET and
SET NAMES assign values to
character set and collation variables associated with the
current connection to the server. See
Section 13.7.6.2, “SET CHARACTER SET Statement”, and
Section 13.7.6.3, “SET NAMES Statement”.
Descriptions for the other forms appear elsewhere, grouped with other statements related to the capability they help implement:
SET DEFAULT ROLE and
SET ROLE set the default role
and current role for user accounts. See
Section 13.7.1.9, “SET DEFAULT ROLE Statement”, and
Section 13.7.1.11, “SET ROLE Statement”.
SET PASSWORD assigns account
passwords. See Section 13.7.1.10, “SET PASSWORD Statement”.
SET RESOURCE GROUP assigns threads to a
resource group. See Section 13.7.2.4, “SET RESOURCE GROUP Statement”.
SET
TRANSACTION ISOLATION LEVEL sets the isolation level
for transaction processing. See
Section 13.3.7, “SET TRANSACTION Statement”.
SET variable = expr [, variable = expr] ...

variable: {
    user_var_name
  | param_name
  | local_var_name
  | {GLOBAL | @@GLOBAL.} system_var_name
  | {PERSIST | @@PERSIST.} system_var_name
  | {PERSIST_ONLY | @@PERSIST_ONLY.} system_var_name
  | [SESSION | @@SESSION. | @@] system_var_name
}
SET
syntax for variable assignment enables you to assign values to
different types of variables that affect the operation of the
server or clients:
User-defined variables. See Section 9.4, “User-Defined Variables”.
Stored procedure and function parameters, and stored program local variables. See Section 13.6.4, “Variables in Stored Programs”.
System variables. See Section 5.1.8, “Server System Variables”. System variables also can be set at server startup, as described in Section 5.1.9, “Using System Variables”.
A SET
statement that assigns variable values is not written to the
binary log, so in replication scenarios it affects only the host
on which you execute it. To affect all replication hosts,
execute the statement on each host.
The following sections describe
SET
syntax for setting variables. They use the
=
assignment operator, but the
:=
assignment operator is also permitted for this purpose.
User-defined variables are created locally within a session and exist only within the context of that session; see Section 9.4, “User-Defined Variables”.
A user-defined variable is written as
@var_name and is
assigned an expression value as follows:
SET @var_name = expr;
Examples:
SET @name = 43;
SET @total_tax = (SELECT SUM(tax) FROM taxable_transactions);
As demonstrated by those statements,
expr can range from simple (a
literal value) to more complex (the value returned by a scalar
subquery).
The Performance Schema
user_variables_by_thread table
contains information about user-defined variables. See
Section 26.12.10, “Performance Schema User-Defined Variable Tables”.
SET
applies to parameters and local variables in the context of
the stored object within which they are defined. The following
procedure uses the increment procedure
parameter and counter local variable:
CREATE PROCEDURE p(increment INT)
BEGIN
DECLARE counter INT DEFAULT 0;
WHILE counter < 10 DO
-- ... do work ...
SET counter = counter + increment;
END WHILE;
END;
The MySQL server maintains system variables that configure its
operation. A system variable can have a global value that
affects server operation as a whole, a session value that
affects the current session, or both. Many system variables
are dynamic and can be changed at runtime using the
SET
statement to affect operation of the current server instance.
SET
can also be used to persist certain system variables to the
mysqld-auto.cnf file in the data
directory, to affect server operation for subsequent startups.
If you change a session system variable, the value remains in effect within your session until you change the variable to a different value or the session ends. The change has no effect on other sessions.
If you change a global system variable, the value is remembered and used to initialize the session value for new sessions until you change the variable to a different value or the server exits. The change is visible to any client that accesses the global value. However, the change affects the corresponding session value only for clients that connect after the change. The global variable change does not affect the session value for any current client sessions (not even the session within which the global value change occurs).
To make a global system variable setting permanent so that it
applies across server restarts, you can persist it to the
mysqld-auto.cnf file in the data
directory. It is also possible to make persistent
configuration changes by manually modifying a
my.cnf option file, but that is more
cumbersome, and an error in a manually entered setting might
not be discovered until much later.
SET
statements that persist system variables are more convenient
and avoid the possibility of malformed settings because
settings with syntax errors do not succeed and do not change
server configuration. For more information about persisting
system variables and the mysqld-auto.cnf
file, see Section 5.1.9.3, “Persisted System Variables”.
Setting or persisting a global system variable value always requires special privileges. Setting a session system variable value normally requires no special privileges and can be done by any user, although there are exceptions. For more information, see Section 5.1.9.1, “System Variable Privileges”.
The following discussion describes the syntax options for setting and persisting system variables:
To assign a value to a global system variable, precede the
variable name by the GLOBAL keyword or
the @@GLOBAL. qualifier:
SET GLOBAL max_connections = 1000;
SET @@GLOBAL.max_connections = 1000;
To assign a value to a session system variable, precede
the variable name by the SESSION or
LOCAL keyword, by the
@@SESSION.,
@@LOCAL., or @@
qualifier, or by no keyword or no modifier at all:
SET SESSION sql_mode = 'TRADITIONAL';
SET LOCAL sql_mode = 'TRADITIONAL';
SET @@SESSION.sql_mode = 'TRADITIONAL';
SET @@LOCAL.sql_mode = 'TRADITIONAL';
SET @@sql_mode = 'TRADITIONAL';
SET sql_mode = 'TRADITIONAL';
A client can change its own session variables, but not those of any other client.
To persist a global system variable to the
mysqld-auto.cnf option file in the
data directory, precede the variable name by the
PERSIST keyword or the
@@PERSIST. qualifier:
SET PERSIST max_connections = 1000;
SET @@PERSIST.max_connections = 1000;
This
SET
syntax enables you to make configuration changes at
runtime that also persist across server restarts. Like
SET
GLOBAL,
SET
PERSIST sets the global variable runtime value,
but also writes the variable setting to the
mysqld-auto.cnf file (replacing any
existing variable setting if there is one).
To persist a global system variable to the
mysqld-auto.cnf file without setting
the global variable runtime value, precede the variable
name by the PERSIST_ONLY keyword or the
@@PERSIST_ONLY. qualifier:
SET PERSIST_ONLY back_log = 100;
SET @@PERSIST_ONLY.back_log = 100;
Like PERSIST,
PERSIST_ONLY writes the variable
setting to mysqld-auto.cnf. However,
unlike PERSIST,
PERSIST_ONLY does not modify the global
variable runtime value. This makes
PERSIST_ONLY suitable for configuring
read-only system variables that can be set only at server
startup.
To set a global system variable value to the compiled-in MySQL
default value or a session system variable to the current
corresponding global value, set the variable to the value
DEFAULT. For example, the following two
statements are identical in setting the session value of
max_join_size to the current
global value:
SET @@SESSION.max_join_size = DEFAULT;
SET @@SESSION.max_join_size = @@GLOBAL.max_join_size;
Using
SET to
persist a global system variable to a value of
DEFAULT or to its literal default value
assigns the variable its default value and adds a setting for
the variable to mysqld-auto.cnf. To
remove the variable from the file, use
RESET PERSIST.
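For example (the variable name is illustrative):
SET PERSIST max_connections = DEFAULT;  -- assigns the default value and records a setting in mysqld-auto.cnf
RESET PERSIST max_connections;          -- removes the setting from mysqld-auto.cnf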
Some system variables cannot be persisted or are persist-restricted. See Section 5.1.9.4, “Nonpersistible and Persist-Restricted System Variables”.
A system variable implemented by a plugin can be persisted if
the plugin is installed when the
SET
statement is executed. Assignment of the persisted plugin
variable takes effect for subsequent server restarts if the
plugin is still installed. If the plugin is no longer
installed, the plugin variable will not exist when the server
reads the mysqld-auto.cnf file. In this
case, the server writes a warning to the error log and
continues:
currently unknown variable 'var_name'
was read from the persisted config file
To display system variable names and values:
Use the SHOW VARIABLES
statement; see Section 13.7.7.39, “SHOW VARIABLES Statement”.
Several Performance Schema tables provide system variable information. See Section 26.12.14, “Performance Schema System Variable Tables”.
The Performance Schema
variables_info table contains
information showing when and by which user each system
variable was most recently set. See
Section 26.12.14.2, “Performance Schema variables_info Table”.
The Performance Schema
persisted_variables table
provides an SQL interface to the
mysqld-auto.cnf file, enabling its
contents to be inspected at runtime using
SELECT statements. See
Section 26.12.14.1, “Performance Schema persisted_variables Table”.
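For example, combining the approaches listed above (the variable name is illustrative):
SHOW VARIABLES LIKE 'max_connections';
SELECT * FROM performance_schema.persisted_variables;
SELECT VARIABLE_NAME, SET_TIME, SET_USER
    FROM performance_schema.variables_info
    WHERE VARIABLE_NAME = 'max_connections';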
If any variable assignment in a
SET
statement fails, the entire statement fails and no variables
are changed, nor is the mysqld-auto.cnf
file changed.
SET
produces an error under the circumstances described here. Most
of the examples show
SET
statements that use keyword syntax (for example,
GLOBAL or SESSION), but
the principles are also true for statements that use the
corresponding modifiers (for example,
@@GLOBAL. or
@@SESSION.).
Use of
SET
(any variant) to set a read-only variable:
mysql> SET GLOBAL version = 'abc';
ERROR 1238 (HY000): Variable 'version' is a read only variable
Use of GLOBAL,
PERSIST, or
PERSIST_ONLY to set a variable that has
only a session value:
mysql> SET GLOBAL sql_log_bin = ON;
ERROR 1228 (HY000): Variable 'sql_log_bin' is a SESSION
variable and can't be used with SET GLOBAL
Use of SESSION to set a variable that
has only a global value:
mysql> SET SESSION max_connections = 1000;
ERROR 1229 (HY000): Variable 'max_connections' is a
GLOBAL variable and should be set with SET GLOBAL
Omission of GLOBAL,
PERSIST, or
PERSIST_ONLY to set a variable that has
only a global value:
mysql> SET max_connections = 1000;
ERROR 1229 (HY000): Variable 'max_connections' is a
GLOBAL variable and should be set with SET GLOBAL
Use of PERSIST or
PERSIST_ONLY to set a variable that
cannot be persisted:
mysql> SET PERSIST port = 3307;
ERROR 1238 (HY000): Variable 'port' is a read only variable
mysql> SET PERSIST_ONLY port = 3307;
ERROR 1238 (HY000): Variable 'port' is a non persistent read only variable
The @@GLOBAL.,
@@PERSIST.,
@@PERSIST_ONLY.,
@@SESSION., and @@
modifiers apply only to system variables. An error occurs
for attempts to apply them to user-defined variables,
stored procedure or function parameters, or stored program
local variables.
Not all system variables can be set to
DEFAULT. In such cases, assigning
DEFAULT results in an error.
An error occurs for attempts to assign
DEFAULT to user-defined variables,
stored procedure or function parameters, or stored program
local variables.
A SET
statement can contain multiple variable assignments, separated
by commas. This statement assigns values to a user-defined
variable and a system variable:
SET @x = 1, SESSION sql_mode = '';
If you set multiple system variables in a single statement,
the most recent GLOBAL,
PERSIST, PERSIST_ONLY,
or SESSION keyword in the statement is used
for following assignments that have no keyword specified.
Examples of multiple-variable assignment:
SET GLOBAL sort_buffer_size = 1000000, SESSION sort_buffer_size = 1000000;
SET @@GLOBAL.sort_buffer_size = 1000000, @@LOCAL.sort_buffer_size = 1000000;
SET GLOBAL max_connections = 1000, sort_buffer_size = 1000000;
The @@GLOBAL.,
@@PERSIST.,
@@PERSIST_ONLY.,
@@SESSION., and @@
modifiers apply only to the immediately following system
variable, not any remaining system variables. This statement
sets the sort_buffer_size
global value to 50000 and the session value to 1000000:
SET @@GLOBAL.sort_buffer_size = 50000, sort_buffer_size = 1000000;
To refer to the value of a system variable in expressions, use
one of the @@-modifiers (except
@@PERSIST. and
@@PERSIST_ONLY., which are not permitted in
expressions). For example, you can retrieve system variable
values in a SELECT statement
like this:
SELECT @@GLOBAL.sql_mode, @@SESSION.sql_mode, @@sql_mode;
A reference to a system variable in an expression as
@@var_name
(with @@ rather than
@@GLOBAL. or
@@SESSION.) returns the session value if
it exists and the global value otherwise. This differs from
SET @@var_name = expr, which always
refers to the session value.
SET {CHARACTER SET | CHARSET}
{'charset_name' | DEFAULT}
This statement maps all strings sent between the server and the
current client with the given mapping. SET CHARACTER
SET sets three session system variables:
character_set_client and
character_set_results are set
to the given character set, and
character_set_connection to the
value of
character_set_database. See
Section 10.4, “Connection Character Sets and Collations”.
charset_name may be quoted or
unquoted.
The default character set mapping can be restored by using the
value DEFAULT. The default depends on the
server configuration.
Some character sets cannot be used as the client character set.
Attempting to use them with SET CHARACTER
SET produces an error. See
Impermissible Client Character Sets.
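A minimal sketch, assuming utf8mb4 is permissible as the client character set on your server:
SET CHARACTER SET utf8mb4;
SET CHARACTER SET DEFAULT;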
SET NAMES {'charset_name'
[COLLATE 'collation_name'] | DEFAULT}
This statement sets the three session system variables
character_set_client,
character_set_connection, and
character_set_results to the
given character set. Setting
character_set_connection to
charset_name also sets
collation_connection to the
default collation for charset_name. See
Section 10.4, “Connection Character Sets and Collations”.
The optional COLLATE clause may be used to
specify a collation explicitly. If given, the collation must be one
of the permitted collations for
charset_name.
charset_name and
collation_name may be quoted or
unquoted.
The default mapping can be restored by using a value of
DEFAULT. The default depends on the server
configuration.
Some character sets cannot be used as the client character set.
Attempting to use them with SET
NAMES produces an error. See
Impermissible Client Character Sets.
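A minimal sketch, assuming utf8mb4 is permissible as the client character set on your server:
SET NAMES 'utf8mb4';
SET NAMES 'utf8mb4' COLLATE 'utf8mb4_general_ci';
SET NAMES DEFAULT;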
SHOW has many forms that provide
information about databases, tables, columns, or status
information about the server. This section describes the
following forms:
SHOW {BINARY | MASTER} LOGS
SHOW BINLOG EVENTS [IN 'log_name'] [FROM pos] [LIMIT [offset,] row_count]
SHOW CHARACTER SET [like_or_where]
SHOW COLLATION [like_or_where]
SHOW [FULL] COLUMNS FROM tbl_name [FROM db_name] [like_or_where]
SHOW CREATE DATABASE db_name
SHOW CREATE EVENT event_name
SHOW CREATE FUNCTION func_name
SHOW CREATE PROCEDURE proc_name
SHOW CREATE TABLE tbl_name
SHOW CREATE TRIGGER trigger_name
SHOW CREATE VIEW view_name
SHOW DATABASES [like_or_where]
SHOW ENGINE engine_name {STATUS | MUTEX}
SHOW [STORAGE] ENGINES
SHOW ERRORS [LIMIT [offset,] row_count]
SHOW EVENTS
SHOW FUNCTION CODE func_name
SHOW FUNCTION STATUS [like_or_where]
SHOW GRANTS FOR user
SHOW INDEX FROM tbl_name [FROM db_name]
SHOW MASTER STATUS
SHOW OPEN TABLES [FROM db_name] [like_or_where]
SHOW PLUGINS
SHOW PROCEDURE CODE proc_name
SHOW PROCEDURE STATUS [like_or_where]
SHOW PRIVILEGES
SHOW [FULL] PROCESSLIST
SHOW PROFILE [types] [FOR QUERY n] [OFFSET n] [LIMIT n]
SHOW PROFILES
SHOW RELAYLOG EVENTS [IN 'log_name'] [FROM pos] [LIMIT [offset,] row_count]
SHOW SLAVE HOSTS
SHOW SLAVE STATUS [FOR CHANNEL channel]
SHOW [GLOBAL | SESSION] STATUS [like_or_where]
SHOW TABLE STATUS [FROM db_name] [like_or_where]
SHOW [FULL] TABLES [FROM db_name] [like_or_where]
SHOW TRIGGERS [FROM db_name] [like_or_where]
SHOW [GLOBAL | SESSION] VARIABLES [like_or_where]
SHOW WARNINGS [LIMIT [offset,] row_count]
like_or_where: {
LIKE 'pattern'
| WHERE expr
}
If the syntax for a given SHOW
statement includes a LIKE
'pattern' part,
'pattern' is a
string that can contain the SQL % and
_ wildcard characters. The pattern is useful
for restricting statement output to matching values.
Several SHOW statements also accept
a WHERE clause that provides more flexibility
in specifying which rows to display. See
Section 25.50, “Extensions to SHOW Statements”.
Many MySQL APIs (such as PHP) enable you to treat the result
returned from a SHOW statement as
you would a result set from a
SELECT; see
Chapter 28, Connectors and APIs, or your API documentation for
more information. In addition, you can work in SQL with results
from queries on tables in the
INFORMATION_SCHEMA database, which you cannot
easily do with results from SHOW
statements. See Chapter 25, INFORMATION_SCHEMA Tables.
SHOW BINARY LOGS
SHOW MASTER LOGS
Lists the binary log files on the server. This statement is used
as part of the procedure described in
Section 13.4.1.1, “PURGE BINARY LOGS Statement”, that shows how to determine
which logs can be purged. SHOW BINARY
LOGS requires the REPLICATION
CLIENT privilege (or the deprecated
SUPER privilege).
Encrypted binary log files have a 512-byte file header that
stores information required for encryption and decryption of the
file. This is included in the file size displayed by
SHOW BINARY LOGS. The
Encrypted column shows whether or not the
binary log file is encrypted. Binary log encryption is active if
binlog_encryption=ON is set for
the server. Existing binary log files are not encrypted or
decrypted if binary log encryption is activated or deactivated
while the server is running.
mysql> SHOW BINARY LOGS;
+---------------+-----------+-----------+
| Log_name | File_size | Encrypted |
+---------------+-----------+-----------+
| binlog.000015 | 724935 | Yes |
| binlog.000016 | 733481 | Yes |
+---------------+-----------+-----------+
SHOW MASTER
LOGS is equivalent to SHOW BINARY
LOGS.
SHOW BINLOG EVENTS [IN 'log_name'] [FROM pos] [LIMIT [offset,] row_count]
Shows the events in the binary log. If you do not specify
'log_name', the
first binary log is displayed. SHOW BINLOG
EVENTS requires the REPLICATION
SLAVE privilege.
The LIMIT clause has the same syntax as for
the SELECT statement. See
Section 13.2.10, “SELECT Statement”.
Issuing a SHOW BINLOG EVENTS
with no LIMIT clause could start a very
time- and resource-consuming process because the server
returns to the client the complete contents of the binary log
(which includes all statements executed by the server that
modify data). As an alternative to SHOW
BINLOG EVENTS, use the
mysqlbinlog utility to save the binary log
to a text file for later examination and analysis. See
Section 4.6.8, “mysqlbinlog — Utility for Processing Binary Log Files”.
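For example, to examine only a bounded slice of a specific binary log file (the file name and position are illustrative):
SHOW BINLOG EVENTS IN 'binlog.000015' FROM 4 LIMIT 10;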
SHOW BINLOG EVENTS displays the
following fields for each event in the binary log:
Log_name
The name of the file that is being listed.
Pos
The position at which the event occurs.
Event_type
An identifier that describes the event type.
Server_id
The server ID of the server on which the event originated.
End_log_pos
The position at which the next event begins, which is equal
to Pos plus the size of the event.
Info
More detailed information about the event type. The format of this information depends on the event type.
For compressed transaction payloads, the
Transaction_payload_event is first printed as
a single unit, then it is unpacked and each event inside it is
printed.
Some events relating to the setting of user and system variables
are not included in the output from SHOW
BINLOG EVENTS. To get complete coverage of events
within a binary log, use mysqlbinlog.
SHOW BINLOG EVENTS does
not work with relay log files. You can use
SHOW RELAYLOG EVENTS for this
purpose.
SHOW CHARACTER SET
[LIKE 'pattern' | WHERE expr]
The SHOW CHARACTER SET statement
shows all available character sets. The
LIKE clause, if present, indicates
which character set names to match. The WHERE
clause can be given to select rows using more general
conditions, as discussed in Section 25.50, “Extensions to SHOW Statements”. For
example:
mysql> SHOW CHARACTER SET LIKE 'latin%';
+---------+-----------------------------+-------------------+--------+
| Charset | Description | Default collation | Maxlen |
+---------+-----------------------------+-------------------+--------+
| latin1 | cp1252 West European | latin1_swedish_ci | 1 |
| latin2 | ISO 8859-2 Central European | latin2_general_ci | 1 |
| latin5 | ISO 8859-9 Turkish | latin5_turkish_ci | 1 |
| latin7 | ISO 8859-13 Baltic | latin7_general_ci | 1 |
+---------+-----------------------------+-------------------+--------+
SHOW CHARACTER SET output has
these columns:
Charset
The character set name.
Description
A description of the character set.
Default collation
The default collation for the character set.
Maxlen
The maximum number of bytes required to store one character.
The filename character set is for internal
use only; consequently, SHOW CHARACTER
SET does not display it.
Character set information is also available from the
INFORMATION_SCHEMA
CHARACTER_SETS table. See
Section 25.4, “The INFORMATION_SCHEMA CHARACTER_SETS Table”.
SHOW COLLATION
[LIKE 'pattern' | WHERE expr]
This statement lists collations supported by the server. By
default, the output from SHOW
COLLATION includes all available collations. The
LIKE clause, if present, indicates
which collation names to match. The WHERE
clause can be given to select rows using more general
conditions, as discussed in Section 25.50, “Extensions to SHOW Statements”. For
example:
mysql> SHOW COLLATION WHERE Charset = 'latin1';
+-------------------+---------+----+---------+----------+---------+
| Collation | Charset | Id | Default | Compiled | Sortlen |
+-------------------+---------+----+---------+----------+---------+
| latin1_german1_ci | latin1 | 5 | | Yes | 1 |
| latin1_swedish_ci | latin1 | 8 | Yes | Yes | 1 |
| latin1_danish_ci | latin1 | 15 | | Yes | 1 |
| latin1_german2_ci | latin1 | 31 | | Yes | 2 |
| latin1_bin | latin1 | 47 | | Yes | 1 |
| latin1_general_ci | latin1 | 48 | | Yes | 1 |
| latin1_general_cs | latin1 | 49 | | Yes | 1 |
| latin1_spanish_ci | latin1 | 94 | | Yes | 1 |
+-------------------+---------+----+---------+----------+---------+
SHOW COLLATION output has these
columns:
Collation
The collation name.
Charset
The name of the character set with which the collation is associated.
Id
The collation ID.
Default
Whether the collation is the default for its character set.
Compiled
Whether the character set is compiled into the server.
Sortlen
This is related to the amount of memory required to sort strings expressed in the character set.
To see the default collation for each character set, use the
following statement. Default is a reserved
word, so to use it as an identifier, it must be quoted as such:
mysql> SHOW COLLATION WHERE `Default` = 'Yes';
+---------------------+----------+----+---------+----------+---------+
| Collation | Charset | Id | Default | Compiled | Sortlen |
+---------------------+----------+----+---------+----------+---------+
| big5_chinese_ci | big5 | 1 | Yes | Yes | 1 |
| dec8_swedish_ci | dec8 | 3 | Yes | Yes | 1 |
| cp850_general_ci | cp850 | 4 | Yes | Yes | 1 |
| hp8_english_ci | hp8 | 6 | Yes | Yes | 1 |
| koi8r_general_ci | koi8r | 7 | Yes | Yes | 1 |
| latin1_swedish_ci | latin1 | 8 | Yes | Yes | 1 |
...
Collation information is also available from the
INFORMATION_SCHEMA
COLLATIONS table. See
Section 25.6, “The INFORMATION_SCHEMA COLLATIONS Table”.
SHOW [EXTENDED] [FULL] {COLUMNS | FIELDS}
{FROM | IN} tbl_name
[{FROM | IN} db_name]
[LIKE 'pattern' | WHERE expr]
SHOW COLUMNS displays information
about the columns in a given table. It also works for views.
SHOW COLUMNS displays information
only for those columns for which you have some privilege.
mysql> SHOW COLUMNS FROM City;
+-------------+----------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------+----------+------+-----+---------+----------------+
| ID | int(11) | NO | PRI | NULL | auto_increment |
| Name | char(35) | NO | | | |
| CountryCode | char(3) | NO | MUL | | |
| District | char(20) | NO | | | |
| Population | int(11) | NO | | 0 | |
+-------------+----------+------+-----+---------+----------------+
An alternative to tbl_name
FROM db_name syntax is
db_name.tbl_name. These two
statements are equivalent:
SHOW COLUMNS FROM mytable FROM mydb;
SHOW COLUMNS FROM mydb.mytable;
The optional EXTENDED keyword causes the
output to include information about hidden columns that MySQL
uses internally and are not accessible by users.
The optional FULL keyword causes the output
to include the column collation and comments, as well as the
privileges you have for each column.
The LIKE clause, if present,
indicates which column names to match. The
WHERE clause can be given to select rows
using more general conditions, as discussed in
Section 25.50, “Extensions to SHOW Statements”.
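For example, using the City table shown above:
SHOW FULL COLUMNS FROM City;
SHOW COLUMNS FROM City LIKE 'Co%';
SHOW COLUMNS FROM City WHERE Type LIKE 'char%';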
The data types may differ from what you expect them to be based
on a CREATE TABLE statement
because MySQL sometimes changes data types when you create or
alter a table. The conditions under which this occurs are
described in Section 13.1.20.7, “Silent Column Specification Changes”.
SHOW COLUMNS displays the
following values for each table column:
Field
The name of the column.
Type
The column data type.
Collation
The collation for nonbinary string columns, or
NULL for other columns. This value is
displayed only if you use the FULL
keyword.
Null
The column nullability. The value is YES
if NULL values can be stored in the
column, NO if not.
Key
Whether the column is indexed:
If Key is empty, the column either is
not indexed or is indexed only as a secondary column in
a multiple-column, nonunique index.
If Key is PRI, the
column is a PRIMARY KEY or is one of
the columns in a multiple-column PRIMARY
KEY.
If Key is UNI, the
column is the first column of a
UNIQUE index. (A
UNIQUE index permits multiple
NULL values, but you can tell whether
the column permits NULL by checking
the Null field.)
If Key is MUL, the
column is the first column of a nonunique index in which
multiple occurrences of a given value are permitted
within the column.
If more than one of the Key values
applies to a given column of a table, Key
displays the one with the highest priority, in the order
PRI, UNI,
MUL.
A UNIQUE index may be displayed as
PRI if it cannot contain
NULL values and there is no
PRIMARY KEY in the table. A
UNIQUE index may display as
MUL if several columns form a composite
UNIQUE index; although the combination of
the columns is unique, each column can still hold multiple
occurrences of a given value.
Default
The default value for the column. This is
NULL if the column has an explicit
default of NULL, or if the column
definition includes no DEFAULT clause.
Extra
Any additional information that is available about a given column. The value is nonempty in these cases:
auto_increment for columns that have
the AUTO_INCREMENT attribute.
on update CURRENT_TIMESTAMP for
TIMESTAMP or
DATETIME columns that
have the ON UPDATE CURRENT_TIMESTAMP
attribute.
VIRTUAL GENERATED or VIRTUAL
STORED for generated columns.
DEFAULT_GENERATED for columns that
have an expression default value.
Privileges
The privileges you have for the column. This value is
displayed only if you use the FULL
keyword.
Comment
Any comment included in the column definition. This value is
displayed only if you use the FULL
keyword.
Table column information is also available from the
INFORMATION_SCHEMA
COLUMNS table. See
Section 25.8, “The INFORMATION_SCHEMA COLUMNS Table”. The extended information about
hidden columns is available only using SHOW EXTENDED
COLUMNS; it cannot be obtained from the
COLUMNS table.
You can list a table's columns with the mysqlshow
db_name
tbl_name command.
The DESCRIBE statement provides
information similar to SHOW
COLUMNS. See Section 13.8.1, “DESCRIBE Statement”.
The SHOW CREATE TABLE,
SHOW TABLE STATUS, and
SHOW INDEX statements also
provide information about tables. See Section 13.7.7, “SHOW Statements”.
SHOW CREATE {DATABASE | SCHEMA} [IF NOT EXISTS] db_name
Shows the CREATE DATABASE
statement that creates the named database. If the
SHOW statement includes an IF NOT
EXISTS clause, the output too includes such a clause.
SHOW
CREATE SCHEMA is a synonym for
SHOW CREATE DATABASE.
mysql> SHOW CREATE DATABASE test\G
*************************** 1. row ***************************
       Database: test
Create Database: CREATE DATABASE `test` /*!40100 DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci */ /*!80014 DEFAULT ENCRYPTION='N' */

mysql> SHOW CREATE SCHEMA test\G
*************************** 1. row ***************************
       Database: test
Create Database: CREATE DATABASE `test` /*!40100 DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci */ /*!80014 DEFAULT ENCRYPTION='N' */
SHOW CREATE DATABASE quotes table
and column names according to the value of the
sql_quote_show_create option.
See Section 5.1.8, “Server System Variables”.
SHOW CREATE EVENT event_name
This statement displays the CREATE
EVENT statement needed to re-create a given event. It
requires the EVENT privilege for
the database from which the event is to be shown. For example
(using the same event e_daily defined and
then altered in Section 13.7.7.18, “SHOW EVENTS Statement”):
mysql> SHOW CREATE EVENT myschema.e_daily\G
*************************** 1. row ***************************
Event: e_daily
sql_mode: ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,
NO_ZERO_IN_DATE,NO_ZERO_DATE,
ERROR_FOR_DIVISION_BY_ZERO,
NO_ENGINE_SUBSTITUTION
time_zone: SYSTEM
Create Event: CREATE DEFINER=`jon`@`ghidora` EVENT `e_daily`
ON SCHEDULE EVERY 1 DAY
STARTS CURRENT_TIMESTAMP + INTERVAL 6 HOUR
ON COMPLETION NOT PRESERVE
ENABLE
COMMENT 'Saves total number of sessions then
clears the table each day'
DO BEGIN
INSERT INTO site_activity.totals (time, total)
SELECT CURRENT_TIMESTAMP, COUNT(*)
FROM site_activity.sessions;
DELETE FROM site_activity.sessions;
END
character_set_client: utf8mb4
collation_connection: utf8mb4_0900_ai_ci
Database Collation: utf8mb4_0900_ai_ci
character_set_client is the session value of
the character_set_client system
variable when the event was created.
collation_connection is the session value of
the collation_connection system
variable when the event was created. Database
Collation is the collation of the database with which
the event is associated.
The output reflects the current status of the event
(ENABLE) rather than the status with which it
was created.
SHOW CREATE FUNCTION func_name
This statement is similar to SHOW CREATE
PROCEDURE but for stored functions. See
Section 13.7.7.9, “SHOW CREATE PROCEDURE Statement”.
SHOW CREATE PROCEDURE proc_name
This statement is a MySQL extension. It returns the exact string
that can be used to re-create the named stored procedure. A
similar statement, SHOW CREATE
FUNCTION, displays information about stored functions
(see Section 13.7.7.8, “SHOW CREATE FUNCTION Statement”).
To use either statement, you must be the user named as the
routine DEFINER, have the
SHOW_ROUTINE privilege, have the
SELECT privilege at the global
level, or have the CREATE
ROUTINE, ALTER ROUTINE,
or EXECUTE privilege granted at a
scope that includes the routine. The value displayed for the
Create Procedure or Create
Function field is NULL if you have
only CREATE ROUTINE,
ALTER ROUTINE, or
EXECUTE.
mysql> SHOW CREATE PROCEDURE test.citycount\G
*************************** 1. row ***************************
           Procedure: citycount
            sql_mode: ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,
                      NO_ZERO_IN_DATE,NO_ZERO_DATE,
                      ERROR_FOR_DIVISION_BY_ZERO,
                      NO_ENGINE_SUBSTITUTION
    Create Procedure: CREATE DEFINER=`me`@`localhost`
                      PROCEDURE `citycount`(IN country CHAR(3), OUT cities INT)
                      BEGIN
                        SELECT COUNT(*) INTO cities FROM world.city
                        WHERE CountryCode = country;
                      END
character_set_client: utf8mb4
collation_connection: utf8mb4_0900_ai_ci
  Database Collation: utf8mb4_0900_ai_ci

mysql> SHOW CREATE FUNCTION test.hello\G
*************************** 1. row ***************************
            Function: hello
            sql_mode: ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,
                      NO_ZERO_IN_DATE,NO_ZERO_DATE,
                      ERROR_FOR_DIVISION_BY_ZERO,
                      NO_ENGINE_SUBSTITUTION
     Create Function: CREATE DEFINER=`me`@`localhost`
                      FUNCTION `hello`(s CHAR(20))
                      RETURNS char(50) CHARSET utf8mb4
                      DETERMINISTIC
                      RETURN CONCAT('Hello, ',s,'!')
character_set_client: utf8mb4
collation_connection: utf8mb4_0900_ai_ci
  Database Collation: utf8mb4_0900_ai_ci
character_set_client is the session value of
the character_set_client system
variable when the routine was created.
collation_connection is the session value of
the collation_connection system
variable when the routine was created. Database
Collation is the collation of the database with which
the routine is associated.
SHOW CREATE TABLE tbl_name
Shows the CREATE TABLE statement
that creates the named table. To use this statement, you must
have some privilege for the table. This statement also works
with views.
mysql> SHOW CREATE TABLE t\G
*************************** 1. row ***************************
Table: t
Create Table: CREATE TABLE `t` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`s` char(60) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4
As of MySQL 8.0.16, MySQL implements CHECK
constraints and SHOW CREATE TABLE
displays them. All CHECK constraints are
displayed as table constraints. That is, a
CHECK constraint originally specified as part
of a column definition displays as a separate clause not part of
the column definition. Example:
mysql> CREATE TABLE t1
       (i1 INT CHECK (i1 <> 0),        -- column constraint
        i2 INT,
        CHECK (i2 > i1),               -- table constraint
        CHECK (i2 <> 0) NOT ENFORCED   -- table constraint, not enforced
       );
mysql> SHOW CREATE TABLE t1\G
*************************** 1. row ***************************
       Table: t1
Create Table: CREATE TABLE `t1` (
  `i1` int(11) DEFAULT NULL,
  `i2` int(11) DEFAULT NULL,
  CONSTRAINT `t1_chk_1` CHECK ((`i1` <> 0)),
  CONSTRAINT `t1_chk_2` CHECK ((`i2` > `i1`)),
  CONSTRAINT `t1_chk_3` CHECK ((`i2` <> 0)) /*!80016 NOT ENFORCED */
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
SHOW CREATE TABLE quotes table
and column names according to the value of the
sql_quote_show_create option.
See Section 5.1.8, “Server System Variables”.
When altering the storage engine of a table, table options that
are not applicable to the new storage engine are retained in the
table definition to enable reverting the table with its
previously defined options to the original storage engine, if
necessary. For example, when changing the storage engine from
InnoDB to MyISAM, InnoDB-specific options such as
ROW_FORMAT=COMPACT are retained.
mysql> CREATE TABLE t1 (c1 INT PRIMARY KEY) ROW_FORMAT=COMPACT ENGINE=InnoDB;
mysql> ALTER TABLE t1 ENGINE=MyISAM;
mysql> SHOW CREATE TABLE t1\G
*************************** 1. row ***************************
       Table: t1
Create Table: CREATE TABLE `t1` (
  `c1` int NOT NULL,
  PRIMARY KEY (`c1`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci ROW_FORMAT=COMPACT
When creating a table with
strict mode disabled,
the storage engine's default row format is used if the
specified row format is not supported. The actual row format of
the table is reported in the Row_format
column in response to SHOW TABLE
STATUS. SHOW CREATE
TABLE shows the row format that was specified in the
CREATE TABLE statement.
SHOW CREATE TRIGGER trigger_name
This statement shows the CREATE
TRIGGER statement that creates the named trigger. This
statement requires the TRIGGER
privilege for the table associated with the trigger.
mysql> SHOW CREATE TRIGGER ins_sum\G
*************************** 1. row ***************************
Trigger: ins_sum
sql_mode: ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,
NO_ZERO_IN_DATE,NO_ZERO_DATE,
ERROR_FOR_DIVISION_BY_ZERO,
NO_ENGINE_SUBSTITUTION
SQL Original Statement: CREATE DEFINER=`me`@`localhost` TRIGGER `ins_sum`
BEFORE INSERT ON `account`
FOR EACH ROW SET @sum = @sum + NEW.amount
character_set_client: utf8mb4
collation_connection: utf8mb4_0900_ai_ci
Database Collation: utf8mb4_0900_ai_ci
Created: 2018-08-08 10:10:12.61
SHOW CREATE TRIGGER output has
these columns:
Trigger: The trigger name.
sql_mode: The SQL mode in effect when the
trigger executes.
SQL Original Statement: The
CREATE TRIGGER statement that
defines the trigger.
character_set_client: The session value
of the character_set_client
system variable when the trigger was created.
collation_connection: The session value
of the collation_connection
system variable when the trigger was created.
Database Collation: The collation of the
database with which the trigger is associated.
Created: The date and time when the
trigger was created. This is a
TIMESTAMP(2) value (with a fractional
part in hundredths of seconds) for triggers.
Trigger information is also available from the
INFORMATION_SCHEMA
TRIGGERS table. See
Section 25.41, “The INFORMATION_SCHEMA TRIGGERS Table”.
SHOW CREATE USER user
This statement shows the CREATE
USER statement that creates the named user. An error
occurs if the user does not exist. The statement requires the
SELECT privilege for the
mysql system schema, except to see
information for the current user. For the current user, the
SELECT privilege for the
mysql.user system table is required for
display of the password hash in the IDENTIFIED
AS clause; otherwise, the hash displays as
<secret>.
To name the account, use the format described in
Section 6.2.4, “Specifying Account Names”. The host name part of the
account name, if omitted, defaults to '%'. It
is also possible to specify
CURRENT_USER or
CURRENT_USER() to refer to the
account associated with the current session.
Password hash values displayed in the IDENTIFIED
WITH clause of output from SHOW
CREATE USER may contain unprintable characters that
have adverse effects on terminal displays and in other
environments. Enabling the
print_identified_with_as_hex
system variable (available as of MySQL 8.0.17) causes
SHOW CREATE USER to display such
hash values as hexadecimal strings rather than as regular string
literals. Hash values that do not contain unprintable characters
still display as regular string literals, even with this
variable enabled.
mysql> CREATE USER 'u1'@'localhost' IDENTIFIED BY 'secret';
mysql> SET print_identified_with_as_hex = ON;
mysql> SHOW CREATE USER 'u1'@'localhost'\G
*************************** 1. row ***************************
CREATE USER for u1@localhost: CREATE USER 'u1'@'localhost'
IDENTIFIED WITH 'caching_sha2_password'
AS 0x244124303035240C7745603626313D613C4C10633E0A104B1E14135A544A7871567245614F4872344643546336546F624F6C7861326932752F45622F4F473273597557627139
REQUIRE NONE PASSWORD EXPIRE DEFAULT ACCOUNT UNLOCK
PASSWORD HISTORY DEFAULT PASSWORD REUSE INTERVAL DEFAULT
PASSWORD REQUIRE CURRENT DEFAULT
To display the privileges granted to an account, use the
SHOW GRANTS statement. See
Section 13.7.7.21, “SHOW GRANTS Statement”.
SHOW CREATE VIEW view_name
This statement shows the CREATE
VIEW statement that creates the named view.
mysql> SHOW CREATE VIEW v\G
*************************** 1. row ***************************
View: v
Create View: CREATE ALGORITHM=UNDEFINED
DEFINER=`bob`@`localhost`
SQL SECURITY DEFINER VIEW
`v` AS select 1 AS `a`,2 AS `b`
character_set_client: utf8mb4
collation_connection: utf8mb4_0900_ai_ci
character_set_client is the session value of
the character_set_client system
variable when the view was created.
collation_connection is the session value of
the collation_connection system
variable when the view was created.
Use of SHOW CREATE VIEW requires
the SHOW VIEW privilege, and the
SELECT privilege for the view in
question.
View information is also available from the
INFORMATION_SCHEMA
VIEWS table. See
Section 25.44, “The INFORMATION_SCHEMA VIEWS Table”.
MySQL lets you use different
sql_mode settings to tell the
server the type of SQL syntax to support. For example, you might
use the ANSI SQL mode to
ensure MySQL correctly interprets the standard SQL concatenation
operator, the double bar (||), in your
queries. If you then create a view that concatenates items, you
might worry that changing the
sql_mode setting to a value
different from ANSI could
cause the view to become invalid. But this is not the case. No
matter how you write out a view definition, MySQL always stores
it the same way, in a canonical form. Here is an example that
shows how the server changes a double bar concatenation operator
to a CONCAT() function:
mysql> SET sql_mode = 'ANSI';
Query OK, 0 rows affected (0.00 sec)

mysql> CREATE VIEW test.v AS SELECT 'a' || 'b' as col1;
Query OK, 0 rows affected (0.01 sec)

mysql> SHOW CREATE VIEW test.v\G
*************************** 1. row ***************************
                View: v
         Create View: CREATE VIEW "v" AS select concat('a','b') AS "col1"
...
1 row in set (0.00 sec)
The advantage of storing a view definition in canonical form is
that changes made later to the value of
sql_mode will not affect the
results from the view. However, an additional consequence is that
comments prior to SELECT are
stripped from the definition by the server.
SHOW {DATABASES | SCHEMAS}
[LIKE 'pattern' | WHERE expr]
SHOW DATABASES lists the
databases on the MySQL server host.
SHOW
SCHEMAS is a synonym for SHOW
DATABASES. The LIKE
clause, if present, indicates which database names to match. The
WHERE clause can be given to select rows
using more general conditions, as discussed in
Section 25.50, “Extensions to SHOW Statements”.
You see only those databases for which you have some kind of
privilege, unless you have the global SHOW
DATABASES privilege. You can also get this list using
the mysqlshow command.
If the server was started with the
--skip-show-database option, you
cannot use this statement at all unless you have the
SHOW DATABASES privilege.
MySQL implements databases as directories in the data directory, so this statement simply lists directories in that location. However, the output may include names of directories that do not correspond to actual databases.
Database information is also available from the
INFORMATION_SCHEMA
SCHEMATA table. See
Section 25.30, “The INFORMATION_SCHEMA SCHEMATA Table”.
Because any static global privilege is considered a privilege
for all databases, any static global privilege enables a user
to see all database names with SHOW
DATABASES or by examining the
SCHEMATA table of
INFORMATION_SCHEMA, except databases that
have been restricted at the database level by partial revokes.
SHOW ENGINE engine_name {STATUS | MUTEX}
SHOW ENGINE displays operational
information about a storage engine. It requires the
PROCESS privilege. The statement
has these variants:
SHOW ENGINE INNODB STATUS
SHOW ENGINE INNODB MUTEX
SHOW ENGINE PERFORMANCE_SCHEMA STATUS
SHOW ENGINE INNODB
STATUS displays extensive information from the
standard InnoDB Monitor about the state of
the InnoDB storage engine. For information
about the standard monitor and other InnoDB
Monitors that provide information about
InnoDB processing, see
Section 15.17, “InnoDB Monitors”.
SHOW ENGINE INNODB
MUTEX displays InnoDB
mutex and
rw-lock statistics.
InnoDB mutexes and rwlocks can also be
monitored using Performance
Schema tables. See
Section 15.16.2, “Monitoring InnoDB Mutex Waits Using Performance Schema”.
Mutex statistics collection is configured dynamically using the following options:
To enable the collection of mutex statistics, run:
SET GLOBAL innodb_monitor_enable='latch';
To reset mutex statistics, run:
SET GLOBAL innodb_monitor_reset='latch';
To disable the collection of mutex statistics, run:
SET GLOBAL innodb_monitor_disable='latch';
Collection of mutex statistics for
SHOW ENGINE INNODB
MUTEX can also be enabled by setting
innodb_monitor_enable='all', or
disabled by setting
innodb_monitor_disable='all'.
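Putting these options together, a typical sequence for examining
mutex activity might look like the following sketch (the workload
you run between the steps is up to you):
SET GLOBAL innodb_monitor_enable = 'latch';
-- run the workload of interest, then:
SHOW ENGINE INNODB MUTEX;
SET GLOBAL innodb_monitor_reset = 'latch';
SET GLOBAL innodb_monitor_disable = 'latch';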
SHOW ENGINE INNODB
MUTEX output has these columns:
Type
Always InnoDB.
Name
For mutexes, the Name field reports only
the mutex name. For rwlocks, the Name
field reports the source file where the rwlock is
implemented, and the line number in the file where the
rwlock is created. The line number is specific to your
version of MySQL.
Status
The mutex status. This field reports the number of spins,
waits, and calls. Statistics for low-level operating system
mutexes, which are implemented outside of
InnoDB, are not reported.
spins indicates the number of spins.
waits indicates the number of mutex
waits.
calls indicates how many times the
mutex was requested.
SHOW ENGINE INNODB MUTEX does not list
mutexes and rw-locks for each buffer pool block, as the amount
of output would be overwhelming on systems with a large buffer
pool. SHOW ENGINE INNODB MUTEX does, however,
print aggregate BUF_BLOCK_MUTEX spin, wait,
and call values for buffer pool block mutexes and rw-locks.
SHOW ENGINE INNODB MUTEX also does not list
any mutexes or rw-locks that have never been waited on
(os_waits=0). Thus, SHOW ENGINE
INNODB MUTEX only displays information about mutexes
and rw-locks outside of the buffer pool that have caused at
least one OS-level wait.
Use SHOW ENGINE
PERFORMANCE_SCHEMA STATUS to inspect the internal
operation of the Performance Schema code:
mysql> SHOW ENGINE PERFORMANCE_SCHEMA STATUS\G
...
*************************** 3. row ***************************
Type: performance_schema
Name: events_waits_history.size
Status: 76
*************************** 4. row ***************************
Type: performance_schema
Name: events_waits_history.count
Status: 10000
*************************** 5. row ***************************
Type: performance_schema
Name: events_waits_history.memory
Status: 760000
...
*************************** 57. row ***************************
Type: performance_schema
Name: performance_schema.memory
Status: 26459600
...
This statement is intended to help the DBA understand the effects that different Performance Schema options have on memory requirements.
Name values consist of two parts, which name
an internal buffer and a buffer attribute, respectively.
Interpret buffer names as follows:
An internal buffer that is not exposed as a table is named
within parentheses. Examples:
(pfs_cond_class).size,
(pfs_mutex_class).memory.
An internal buffer that is exposed as a table in the
performance_schema database is named
after the table, without parentheses. Examples:
events_waits_history.size,
mutex_instances.count.
A value that applies to the Performance Schema as a whole
begins with performance_schema. Example:
performance_schema.memory.
Buffer attributes have these meanings:
size is the size of the internal record
used by the implementation, such as the size of a row in a
table. size values cannot be changed.
count is the number of internal records,
such as the number of rows in a table.
count values can be changed using
Performance Schema configuration options.
For a table, tbl_name.memory is
the product of size and
count. For the Performance Schema as a
whole, performance_schema.memory is the
sum of all the memory used (the sum of all other
memory values).
In some cases, there is a direct relationship between a
Performance Schema configuration parameter and a SHOW
ENGINE value. For example,
events_waits_history_long.count corresponds
to
performance_schema_events_waits_history_long_size.
In other cases, the relationship is more complex. For example,
events_waits_history.count corresponds to
performance_schema_events_waits_history_size
(the number of rows per thread) multiplied by
performance_schema_max_thread_instances
( the number of threads).
SHOW ENGINE NDB STATUS.
If the server has the NDB storage
engine enabled, SHOW ENGINE NDB STATUS
displays cluster status information such as the number of
connected data nodes, the cluster connectstring, and cluster
binary log epochs, as well as counts of various Cluster API
objects created by the MySQL Server when connected to the
cluster. Sample output from this statement is shown here:
mysql> SHOW ENGINE NDB STATUS;
+------------+-----------------------+--------------------------------------------------+
| Type | Name | Status |
+------------+-----------------------+--------------------------------------------------+
| ndbcluster | connection | cluster_node_id=7,
connected_host=198.51.100.103, connected_port=1186, number_of_data_nodes=4,
number_of_ready_data_nodes=3, connect_count=0 |
| ndbcluster | NdbTransaction | created=6, free=0, sizeof=212 |
| ndbcluster | NdbOperation | created=8, free=8, sizeof=660 |
| ndbcluster | NdbIndexScanOperation | created=1, free=1, sizeof=744 |
| ndbcluster | NdbIndexOperation | created=0, free=0, sizeof=664 |
| ndbcluster | NdbRecAttr | created=1285, free=1285, sizeof=60 |
| ndbcluster | NdbApiSignal | created=16, free=16, sizeof=136 |
| ndbcluster | NdbLabel | created=0, free=0, sizeof=196 |
| ndbcluster | NdbBranch | created=0, free=0, sizeof=24 |
| ndbcluster | NdbSubroutine | created=0, free=0, sizeof=68 |
| ndbcluster | NdbCall | created=0, free=0, sizeof=16 |
| ndbcluster | NdbBlob | created=1, free=1, sizeof=264 |
| ndbcluster | NdbReceiver | created=4, free=0, sizeof=68 |
| ndbcluster | binlog | latest_epoch=155467, latest_trans_epoch=148126,
latest_received_binlog_epoch=0, latest_handled_binlog_epoch=0,
latest_applied_binlog_epoch=0 |
+------------+-----------------------+--------------------------------------------------+
The Status column in each of these rows
provides information about the MySQL server's connection to
the cluster and about the cluster binary log's status,
respectively. The Status information is in
the form of comma-delimited set of name/value pairs.
The connection row's
Status column contains the name/value pairs
described in the following table.
| Name | Value |
|---|---|
| cluster_node_id | The node ID of the MySQL server in the cluster |
| connected_host | The host name or IP address of the cluster management server to which the MySQL server is connected |
| connected_port | The port used by the MySQL server to connect to the management server (connected_host) |
| number_of_data_nodes | The number of data nodes configured for the cluster (that is, the number of [ndbd] sections in the cluster config.ini file) |
| number_of_ready_data_nodes | The number of data nodes in the cluster that are actually running |
| connect_count | The number of times this mysqld has connected or reconnected to cluster data nodes |
The binlog row's
Status column contains information relating
to NDB Cluster Replication. The name/value pairs it contains are
described in the following table.
| Name | Value |
|---|---|
| latest_epoch | The epoch most recently run on this MySQL server (that is, the sequence number of the most recent transaction run on the server) |
| latest_trans_epoch | The most recent epoch processed by the cluster's data nodes |
| latest_received_binlog_epoch | The most recent epoch received by the binary log thread |
| latest_handled_binlog_epoch | The most recent epoch processed by the binary log thread (for writing to the binary log) |
| latest_applied_binlog_epoch | The most recent epoch actually written to the binary log |
See Section 22.6, “NDB Cluster Replication”, for more information.
The remaining rows from the output of SHOW ENGINE NDB
STATUS which are most likely to prove useful in
monitoring the cluster are listed here by
Name:
NdbTransaction: The number and size of
NdbTransaction objects that have been
created. An NdbTransaction is created
each time a table schema operation (such as
CREATE TABLE or
ALTER TABLE) is performed on
an NDB table.
NdbOperation: The number and size of
NdbOperation objects that have been
created.
NdbIndexScanOperation: The number and
size of NdbIndexScanOperation objects
that have been created.
NdbIndexOperation: The number and size of
NdbIndexOperation objects that have been
created.
NdbRecAttr: The number and size of
NdbRecAttr objects that have been
created. In general, one of these is created each time a
data manipulation statement is performed by an SQL node.
NdbBlob: The number and size of
NdbBlob objects that have been created.
An NdbBlob is created for each new
operation involving a BLOB
column in an NDB table.
NdbReceiver: The number and size of any
NdbReceiver object that have been
created. The number in the created column
is the same as the number of data nodes in the cluster to
which the MySQL server has connected.
SHOW ENGINE NDB STATUS returns an empty
result if no operations involving
NDB tables have been performed
during the current session by the MySQL client accessing the
SQL node on which this statement is run.
SHOW [STORAGE] ENGINES
SHOW ENGINES displays status
information about the server's storage engines. This is
particularly useful for checking whether a storage engine is
supported, or to see what the default engine is.
For information about MySQL storage engines, see Chapter 15, The InnoDB Storage Engine, and Chapter 16, Alternative Storage Engines.
mysql> SHOW ENGINES\G
*************************** 1. row ***************************
Engine: ARCHIVE
Support: YES
Comment: Archive storage engine
Transactions: NO
XA: NO
Savepoints: NO
*************************** 2. row ***************************
Engine: BLACKHOLE
Support: YES
Comment: /dev/null storage engine (anything you write to it disappears)
Transactions: NO
XA: NO
Savepoints: NO
*************************** 3. row ***************************
Engine: MRG_MYISAM
Support: YES
Comment: Collection of identical MyISAM tables
Transactions: NO
XA: NO
Savepoints: NO
*************************** 4. row ***************************
Engine: FEDERATED
Support: NO
Comment: Federated MySQL storage engine
Transactions: NULL
XA: NULL
Savepoints: NULL
*************************** 5. row ***************************
Engine: MyISAM
Support: YES
Comment: MyISAM storage engine
Transactions: NO
XA: NO
Savepoints: NO
*************************** 6. row ***************************
Engine: PERFORMANCE_SCHEMA
Support: YES
Comment: Performance Schema
Transactions: NO
XA: NO
Savepoints: NO
*************************** 7. row ***************************
Engine: InnoDB
Support: DEFAULT
Comment: Supports transactions, row-level locking, and foreign keys
Transactions: YES
XA: YES
Savepoints: YES
*************************** 8. row ***************************
Engine: MEMORY
Support: YES
Comment: Hash based, stored in memory, useful for temporary tables
Transactions: NO
XA: NO
Savepoints: NO
*************************** 9. row ***************************
Engine: CSV
Support: YES
Comment: CSV storage engine
Transactions: NO
XA: NO
Savepoints: NO
The output from SHOW ENGINES may
vary according to the MySQL version used and other factors.
SHOW ENGINES output has these
columns:
Engine
The name of the storage engine.
Support
The server's level of support for the storage engine, as shown in the following table.
| Value | Meaning |
|---|---|
| YES | The engine is supported and is active |
| DEFAULT | Like YES, plus this is the default engine |
| NO | The engine is not supported |
| DISABLED | The engine is supported but has been disabled |
A value of NO means that the server was
compiled without support for the engine, so it cannot be
enabled at runtime.
A value of DISABLED occurs either because
the server was started with an option that disables the
engine, or because not all options required to enable it
were given. In the latter case, the error log should contain
a reason indicating why the option is disabled. See
Section 5.4.2, “The Error Log”.
You might also see DISABLED for a storage
engine if the server was compiled to support it, but was
started with a
--skip-engine_name
option. For the NDB storage
engine, DISABLED means the server was
compiled with support for NDB Cluster, but was not started
with the --ndbcluster option.
All MySQL servers support MyISAM tables.
It is not possible to disable MyISAM.
Comment
A brief description of the storage engine.
Transactions
Whether the storage engine supports transactions.
XA
Whether the storage engine supports XA transactions.
Savepoints
Whether the storage engine supports savepoints.
Storage engine information is also available from the
INFORMATION_SCHEMA
ENGINES table. See
Section 25.12, “The INFORMATION_SCHEMA ENGINES Table”.
SHOW ERRORS [LIMIT [offset,] row_count]
SHOW COUNT(*) ERRORS
SHOW ERRORS is a diagnostic
statement that is similar to SHOW
WARNINGS, except that it displays information only for
errors, rather than for errors, warnings, and notes.
The LIMIT clause has the same syntax as for
the SELECT statement. See
Section 13.2.10, “SELECT Statement”.
The SHOW COUNT(*)
ERRORS statement displays the number of errors. You
can also retrieve this number from the
error_count variable:
SHOW COUNT(*) ERRORS;
SELECT @@error_count;
SHOW ERRORS and
error_count apply only to
errors, not warnings or notes. In other respects, they are
similar to SHOW WARNINGS and
warning_count. In particular,
SHOW ERRORS cannot display
information for more than
max_error_count messages, and
error_count can exceed the
value of max_error_count if the
number of errors exceeds
max_error_count.
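For example (an illustrative session; the table name is
hypothetical), an error can be produced and then inspected:
SELECT * FROM no_such_table;
SHOW COUNT(*) ERRORS;
SELECT @@error_count;
SHOW ERRORS;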
For more information, see Section 13.7.7.40, “SHOW WARNINGS Statement”.
SHOW EVENTS
[{FROM | IN} schema_name]
[LIKE 'pattern' | WHERE expr]
This statement displays information about Event Manager events,
which are discussed in Section 24.4, “Using the Event Scheduler”. It
requires the EVENT privilege for
the database from which the events are to be shown.
In its simplest form, SHOW EVENTS
lists all of the events in the current schema:
mysql> SELECT CURRENT_USER(), SCHEMA();
+----------------+----------+
| CURRENT_USER() | SCHEMA() |
+----------------+----------+
| jon@ghidora    | myschema |
+----------------+----------+
1 row in set (0.00 sec)

mysql> SHOW EVENTS\G
*************************** 1. row ***************************
                  Db: myschema
                Name: e_daily
             Definer: jon@ghidora
           Time zone: SYSTEM
                Type: RECURRING
          Execute at: NULL
      Interval value: 1
      Interval field: DAY
              Starts: 2018-08-08 11:06:34
                Ends: NULL
              Status: ENABLED
          Originator: 1
character_set_client: utf8mb4
collation_connection: utf8mb4_0900_ai_ci
  Database Collation: utf8mb4_0900_ai_ci
To see events for a specific schema, use the
FROM clause. For example, to see events for
the test schema, use the following statement:
SHOW EVENTS FROM test;
The LIKE clause, if present,
indicates which event names to match. The
WHERE clause can be given to select rows
using more general conditions, as discussed in
Section 25.50, “Extensions to SHOW Statements”.
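For example, a minimal sketch that restricts the listing to
enabled events in the test schema:
SHOW EVENTS FROM test WHERE Status = 'ENABLED';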
SHOW EVENTS output has these
columns:
Db
The name of the schema (database) to which the event belongs.
Name
The name of the event.
Definer
The account of the user who created the event, in
'user_name'@'host_name'
format.
Time zone
The event time zone, which is the time zone used for
scheduling the event and that is in effect within the event
as it executes. The default value is
SYSTEM.
Type
The event repetition type, either ONE
TIME (transient) or RECURRING
(repeating).
Execute At
For a one-time event, this is the
DATETIME value specified in
the AT clause of the
CREATE EVENT statement used
to create the event, or of the last
ALTER EVENT statement that
modified the event. The value shown in this column reflects
the addition or subtraction of any
INTERVAL value included in the event's
AT clause. For example, if an event is
created using ON SCHEDULE AT CURRENT_TIMESTAMP +
'1:6' DAY_HOUR, and the event was created at
2018-02-09 14:05:30, the value shown in this column would be
'2018-02-10 20:05:30'. If the event's
timing is determined by an EVERY clause
instead of an AT clause (that is, if the
event is recurring), the value of this column is
NULL.
Interval Value
For a recurring event, the number of intervals to wait
between event executions. For a transient event, the value
of this column is always NULL.
Interval Field
The time units used for the interval which a recurring event
waits before repeating. For a transient event, the value of
this column is always NULL.
Starts
The start date and time for a recurring event. This is
displayed as a DATETIME
value, and is NULL if no start date and
time are defined for the event. For a transient event, this
column is always NULL. For a recurring
event whose definition includes a STARTS
clause, this column contains the corresponding
DATETIME value. As with the
Execute At column, this value resolves
any expressions used. If there is no
STARTS clause affecting the timing of the
event, this column is NULL.
Ends
For a recurring event whose definition includes a
ENDS clause, this column contains the
corresponding DATETIME value.
As with the Execute At column, this value
resolves any expressions used. If there is no
ENDS clause affecting the timing of the
event, this column is NULL.
Status
The event status. One of ENABLED,
DISABLED, or
SLAVESIDE_DISABLED.
SLAVESIDE_DISABLED indicates that the
creation of the event occurred on another MySQL server
acting as a replication master and replicated to the current
MySQL server which is acting as a slave, but the event is
not presently being executed on the slave. For more
information, see
Section 17.5.1.16, “Replication of Invoked Features”.
Originator
The server ID of the MySQL server on which the event was
created; used in replication. This value may be updated by
ALTER EVENT to the server ID
of the server on which that statement occurs, if executed on
a master server. The default value is 0.
character_set_client
The session value of the
character_set_client system
variable when the event was created.
collation_connection
The session value of the
collation_connection system
variable when the event was created.
Database Collation
The collation of the database with which the event is associated.
For more information about SLAVESIDE_DISABLED
and the Originator column, see
Section 17.5.1.16, “Replication of Invoked Features”.
Times displayed by SHOW EVENTS
are given in the event time zone, as discussed in
Section 24.4.4, “Event Metadata”.
Event information is also available from the
INFORMATION_SCHEMA
EVENTS table. See
Section 25.13, “The INFORMATION_SCHEMA EVENTS Table”.
The event action statement is not shown in the output of
SHOW EVENTS. Use
SHOW CREATE EVENT or the
INFORMATION_SCHEMA
EVENTS table.
SHOW FUNCTION CODE func_name
This statement is similar to SHOW PROCEDURE
CODE but for stored functions. See
Section 13.7.7.27, “SHOW PROCEDURE CODE Statement”.
SHOW FUNCTION STATUS
[LIKE 'pattern' | WHERE expr]
This statement is similar to SHOW PROCEDURE
STATUS but for stored functions. See
Section 13.7.7.28, “SHOW PROCEDURE STATUS Statement”.
SHOW GRANTS
[FOR user_or_role
[USING role [, role] ...]]
user_or_role: {
user (see Section 6.2.4, “Specifying Account Names”)
  | role (see Section 6.2.5, “Specifying Role Names”)
}
This statement displays the privileges and roles that are
assigned to a MySQL user account or role, in the form of
GRANT statements that must be
executed to duplicate the privilege and role assignments.
To display nonprivilege information for MySQL accounts, use
the SHOW CREATE USER statement.
See Section 13.7.7.12, “SHOW CREATE USER Statement”.
SHOW GRANTS requires the
SELECT privilege for the
mysql system schema, except to display
privileges and roles for the current user.
To name the account or role for SHOW
GRANTS, use the same format as for the
GRANT statement (for example,
'jeffrey'@'localhost'):
mysql> SHOW GRANTS FOR 'jeffrey'@'localhost';
+------------------------------------------------------------------+
| Grants for jeffrey@localhost |
+------------------------------------------------------------------+
| GRANT USAGE ON *.* TO `jeffrey`@`localhost` |
| GRANT SELECT, INSERT, UPDATE ON `db1`.* TO `jeffrey`@`localhost` |
+------------------------------------------------------------------+
The host part, if omitted, defaults to '%'.
For additional information about specifying account and role
names, see Section 6.2.4, “Specifying Account Names”, and
Section 6.2.5, “Specifying Role Names”.
To display the privileges granted to the current user (the account you are using to connect to the server), you can use any of the following statements:
SHOW GRANTS;
SHOW GRANTS FOR CURRENT_USER;
SHOW GRANTS FOR CURRENT_USER();
If SHOW GRANTS FOR CURRENT_USER (or any
equivalent syntax) is used in definer context, such as within a
stored procedure that executes with definer rather than invoker
privileges, the grants displayed are those of the definer and
not the invoker.
In MySQL 8.0 compared to previous series,
SHOW GRANTS no longer displays
ALL PRIVILEGES in
its global-privileges output because the meaning of
ALL PRIVILEGES at
the global level varies depending on which dynamic privileges
are defined. Instead, SHOW GRANTS
explicitly lists each granted global privilege:
mysql> SHOW GRANTS FOR 'root'@'localhost';
+---------------------------------------------------------------------+
| Grants for root@localhost |
+---------------------------------------------------------------------+
| GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, |
| SHUTDOWN, PROCESS, FILE, REFERENCES, INDEX, ALTER, SHOW DATABASES, |
| SUPER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION |
| SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, |
| ALTER ROUTINE, CREATE USER, EVENT, TRIGGER, CREATE TABLESPACE, |
| CREATE ROLE, DROP ROLE ON *.* TO `root`@`localhost` WITH GRANT |
| OPTION |
| GRANT PROXY ON ''@'' TO 'root'@'localhost' WITH GRANT OPTION |
+---------------------------------------------------------------------+
Applications that process SHOW
GRANTS output should be adjusted accordingly.
At the global level, GRANT OPTION
applies to all granted static global privileges if granted for
any of them, but applies individually to granted dynamic
privileges. SHOW GRANTS displays
global privileges this way (see the sketch after this list):
One line listing all granted static privileges, if there are
any, including WITH GRANT OPTION if
appropriate.
One line listing all granted dynamic privileges for which
GRANT OPTION is granted, if
there are any, including WITH GRANT
OPTION.
One line listing all granted dynamic privileges for which
GRANT OPTION is not granted,
if there are any, without WITH GRANT
OPTION.
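The following sketch illustrates that layout; the account name and
the choice of dynamic privileges (BINLOG_ADMIN,
SYSTEM_VARIABLES_ADMIN) are arbitrary examples, not
taken from the reference output:
CREATE USER 'u2'@'localhost';
GRANT SELECT, INSERT ON *.* TO 'u2'@'localhost' WITH GRANT OPTION;
GRANT BINLOG_ADMIN ON *.* TO 'u2'@'localhost' WITH GRANT OPTION;
GRANT SYSTEM_VARIABLES_ADMIN ON *.* TO 'u2'@'localhost';
SHOW GRANTS FOR 'u2'@'localhost';
For such an account, SHOW GRANTS lists the static
privileges (with WITH GRANT OPTION) on one line,
BINLOG_ADMIN (with WITH GRANT
OPTION) on another, and
SYSTEM_VARIABLES_ADMIN (without it) on a third.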
With the optional USING clause,
SHOW GRANTS enables you to
examine the privileges associated with roles for the user. Each
role named in the USING clause must be
granted to the user.
Suppose that user u1 is assigned roles
r1 and r2, as follows:
CREATE ROLE 'r1', 'r2';
GRANT SELECT ON db1.* TO 'r1';
GRANT INSERT, UPDATE, DELETE ON db1.* TO 'r2';
CREATE USER 'u1'@'localhost' IDENTIFIED BY 'u1pass';
GRANT 'r1', 'r2' TO 'u1'@'localhost';
SHOW GRANTS without
USING shows the granted roles:
mysql> SHOW GRANTS FOR 'u1'@'localhost';
+---------------------------------------------+
| Grants for u1@localhost |
+---------------------------------------------+
| GRANT USAGE ON *.* TO `u1`@`localhost` |
| GRANT `r1`@`%`,`r2`@`%` TO `u1`@`localhost` |
+---------------------------------------------+
Adding a USING clause causes the statement to
also display the privileges associated with each role named in
the clause:
mysql> SHOW GRANTS FOR 'u1'@'localhost' USING 'r1';
+---------------------------------------------+
| Grants for u1@localhost                     |
+---------------------------------------------+
| GRANT USAGE ON *.* TO `u1`@`localhost`      |
| GRANT SELECT ON `db1`.* TO `u1`@`localhost` |
| GRANT `r1`@`%`,`r2`@`%` TO `u1`@`localhost` |
+---------------------------------------------+

mysql> SHOW GRANTS FOR 'u1'@'localhost' USING 'r2';
+-------------------------------------------------------------+
| Grants for u1@localhost                                     |
+-------------------------------------------------------------+
| GRANT USAGE ON *.* TO `u1`@`localhost`                      |
| GRANT INSERT, UPDATE, DELETE ON `db1`.* TO `u1`@`localhost` |
| GRANT `r1`@`%`,`r2`@`%` TO `u1`@`localhost`                 |
+-------------------------------------------------------------+

mysql> SHOW GRANTS FOR 'u1'@'localhost' USING 'r1', 'r2';
+---------------------------------------------------------------------+
| Grants for u1@localhost                                             |
+---------------------------------------------------------------------+
| GRANT USAGE ON *.* TO `u1`@`localhost`                              |
| GRANT SELECT, INSERT, UPDATE, DELETE ON `db1`.* TO `u1`@`localhost` |
| GRANT `r1`@`%`,`r2`@`%` TO `u1`@`localhost`                         |
+---------------------------------------------------------------------+
A privilege granted to an account is always in effect, but a
role is not. The active roles for an account can differ across
and within sessions, depending on the value of the
activate_all_roles_on_login
system variable, the account default roles, and whether
SET ROLE has been executed
within a session.
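For example (a sketch continuing the u1 account
created above), the roles active in the current session can be
changed and checked as follows:
SET ROLE 'r1';
SELECT CURRENT_ROLE();
SET ROLE ALL;
SELECT CURRENT_ROLE();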
MySQL 8.0.16 and higher supports partial revokes of global
privileges, such that a global privilege can be restricted from
applying to particular schemas (see
Section 6.2.12, “Privilege Restriction Using Partial Revokes”). To indicate which global
schema privileges have been revoked for particular schemas,
SHOW GRANTS output includes
REVOKE statements:
mysql> SET PERSIST partial_revokes = ON;
mysql> CREATE USER u1;
mysql> GRANT SELECT, INSERT, DELETE ON *.* TO u1;
mysql> REVOKE SELECT, INSERT ON mysql.* FROM u1;
mysql> REVOKE DELETE ON world.* FROM u1;
mysql> SHOW GRANTS FOR u1;
+--------------------------------------------------+
| Grants for u1@%                                  |
+--------------------------------------------------+
| GRANT SELECT, INSERT, DELETE ON *.* TO `u1`@`%`  |
| REVOKE SELECT, INSERT ON `mysql`.* FROM `u1`@`%` |
| REVOKE DELETE ON `world`.* FROM `u1`@`%`         |
+--------------------------------------------------+
SHOW GRANTS does not display
privileges that are available to the named account but are
granted to a different account. For example, if an anonymous
account exists, the named account might be able to use its
privileges, but SHOW GRANTS does
not display them.
SHOW GRANTS displays mandatory
roles named in the
mandatory_roles system variable
value as follows:
SHOW GRANTS without a
FOR clause displays privileges for the
current user, and includes mandatory roles.
SHOW GRANTS FOR user
displays
privileges for the named user, and does not include
mandatory roles.
This behavior is for the benefit of applications that use the
output of SHOW
GRANTS FOR user to
determine which privileges are granted explicitly to the named
user. Were that output to include mandatory roles, it would be
difficult to distinguish roles granted explicitly to the user
from mandatory roles.
For the current user, applications can determine privileges with
or without mandatory roles by using SHOW
GRANTS or
SHOW GRANTS FOR
CURRENT_USER, respectively.
SHOW [EXTENDED] {INDEX | INDEXES | KEYS}
{FROM | IN} tbl_name
[{FROM | IN} db_name]
[WHERE expr]
SHOW INDEX returns table index
information. The format resembles that of the
SQLStatistics call in ODBC. This statement
requires some privilege for any column in the table.
mysql> SHOW INDEX FROM City\G
*************************** 1. row ***************************
Table: city
Non_unique: 0
Key_name: PRIMARY
Seq_in_index: 1
Column_name: ID
Collation: A
Cardinality: 4188
Sub_part: NULL
Packed: NULL
Null:
Index_type: BTREE
Comment:
Index_comment:
Visible: YES
Expression: NULL
*************************** 2. row ***************************
Table: city
Non_unique: 1
Key_name: CountryCode
Seq_in_index: 1
Column_name: CountryCode
Collation: A
Cardinality: 232
Sub_part: NULL
Packed: NULL
Null:
Index_type: BTREE
Comment:
Index_comment:
Visible: YES
Expression: NULL
An alternative to tbl_name
FROM db_name syntax
is db_name.tbl_name.
These two statements are equivalent:
SHOW INDEX FROM mytable FROM mydb;
SHOW INDEX FROM mydb.mytable;
The optional EXTENDED keyword causes the
output to include information about hidden indexes that MySQL
uses internally and are not accessible by users.
The WHERE clause can be given to select rows
using more general conditions, as discussed in
Section 25.50, “Extensions to SHOW Statements”.
SHOW INDEX returns the following
fields:
Table
The name of the table.
Non_unique
0 if the index cannot contain duplicates, 1 if it can.
Key_name
The name of the index. If the index is the primary key, the
name is always PRIMARY.
Seq_in_index
The column sequence number in the index, starting with 1.
Column_name
The column name. See also the description for the
Expression column.
Collation
How the column is sorted in the index. This can have values
A (ascending), D
(descending), or NULL (not sorted).
Cardinality
An estimate of the number of unique values in the index. To
update this number, run ANALYZE
TABLE or (for MyISAM tables)
myisamchk -a.
Cardinality is counted based on
statistics stored as integers, so the value is not
necessarily exact even for small tables. The higher the
cardinality, the greater the chance that MySQL uses the
index when doing joins.
Sub_part
The index prefix. That is, the number of indexed characters
if the column is only partly indexed,
NULL if the entire column is indexed.
Prefix limits are measured in bytes.
However, prefix lengths for index
specifications in CREATE
TABLE, ALTER
TABLE, and CREATE
INDEX statements are interpreted as number of
characters for nonbinary string types
(CHAR,
VARCHAR,
TEXT) and number of bytes
for binary string types
(BINARY,
VARBINARY,
BLOB). Take this into
account when specifying a prefix length for a nonbinary
string column that uses a multibyte character set.
For additional information about index prefixes, see Section 8.3.5, “Column Indexes”, and Section 13.1.15, “CREATE INDEX Statement”.
Packed
Indicates how the key is packed. NULL if
it is not.
Null
Contains YES if the column may contain
NULL values and '' if
not.
Index_type
The index method used (BTREE,
FULLTEXT, HASH,
RTREE).
Comment
Information about the index not described in its own column,
such as disabled if the index is
disabled.
Index_comment
Any comment provided for the index with a
COMMENT attribute when the index was
created.
Visible
Whether the index is visible to the optimizer. See Section 8.3.12, “Invisible Indexes”.
Expression
MySQL 8.0.13 and higher supports functional key parts (see
Functional Key Parts), which
affects both the Column_name and
Expression columns:
For a nonfunctional key part,
Column_name indicates the column
indexed by the key part and
Expression is
NULL.
For a functional key part,
Column_name column is
NULL and
Expression indicates the expression
for the key part.
Information about table indexes is also available from the
INFORMATION_SCHEMA
STATISTICS table. See
Section 25.33, “The INFORMATION_SCHEMA STATISTICS Table”. The extended information
about hidden indexes is available only using SHOW
EXTENDED INDEX; it cannot be obtained from the
STATISTICS table.
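For example (a minimal sketch; it assumes the City
table from the earlier example lives in a schema named
world, which is an assumption here), comparable
per-index rows can be queried directly:
SELECT INDEX_NAME, SEQ_IN_INDEX, COLUMN_NAME, CARDINALITY
FROM INFORMATION_SCHEMA.STATISTICS
WHERE TABLE_SCHEMA = 'world' AND TABLE_NAME = 'City';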
You can list a table's indexes with the mysqlshow -k
db_name
tbl_name command.
SHOW MASTER STATUS
This statement provides status information about the binary log
files of the master. It requires the
REPLICATION CLIENT privilege (or
the deprecated SUPER privilege).
Example:
mysql> SHOW MASTER STATUS\G
*************************** 1. row ***************************
File: master-bin.000002
Position: 1307
Binlog_Do_DB: test
Binlog_Ignore_DB: manual, mysql
Executed_Gtid_Set: 3E11FA47-71CA-11E1-9E33-C80AA9429562:1-5
1 row in set (0.00 sec)
When global transaction IDs are in use,
Executed_Gtid_Set shows the set of GTIDs for
transactions that have been executed on the master. This is the
same as the value for the
gtid_executed system variable
on this server, as well as the value for
Executed_Gtid_Set in the output of
SHOW SLAVE STATUS on this server.
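For example (a minimal sketch), the GTID set reported in
Executed_Gtid_Set can be cross-checked against the
system variable directly:
SHOW MASTER STATUS\G
SELECT @@GLOBAL.gtid_executed;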
SHOW OPEN TABLES
[{FROM | IN} db_name]
[LIKE 'pattern' | WHERE expr]
SHOW OPEN TABLES lists the
non-TEMPORARY tables that are currently open
in the table cache. See Section 8.4.3.1, “How MySQL Opens and Closes Tables”. The
FROM clause, if present, restricts the tables
shown to those present in the db_name
database. The LIKE clause, if
present, indicates which table names to match. The
WHERE clause can be given to select rows
using more general conditions, as discussed in
Section 25.50, “Extensions to SHOW Statements”.
SHOW OPEN TABLES output has these
columns:
Database
The database containing the table.
Table
The table name.
In_use
The number of table locks or lock requests there are for the
table. For example, if one client acquires a lock for a
table using LOCK TABLE t1 WRITE,
In_use will be 1. If another client
issues LOCK TABLE t1 WRITE while the
table remains locked, the client will block waiting for the
lock, but the lock request causes In_use
to be 2. If the count is zero, the table is open but not
currently being used. In_use is also
increased by the
HANDLER ...
OPEN statement and decreased by
HANDLER ...
CLOSE.
Name_locked
Whether the table name is locked. Name locking is used for operations such as dropping or renaming tables.
If you have no privileges for a table, it does not show up in
the output from SHOW OPEN TABLES.
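For example (an illustrative two-session scenario using a
hypothetical table t1 in the test
database), a table locked in one session shows a nonzero
In_use count when inspected from another session:
LOCK TABLE t1 WRITE;                            -- session 1
SHOW OPEN TABLES FROM test WHERE In_use > 0;    -- session 2
UNLOCK TABLES;                                  -- session 1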
SHOW PLUGINS
SHOW PLUGINS displays information
about server plugins.
Example of SHOW PLUGINS output:
mysql> SHOW PLUGINS\G
*************************** 1. row ***************************
Name: binlog
Status: ACTIVE
Type: STORAGE ENGINE
Library: NULL
License: GPL
*************************** 2. row ***************************
Name: CSV
Status: ACTIVE
Type: STORAGE ENGINE
Library: NULL
License: GPL
*************************** 3. row ***************************
Name: MEMORY
Status: ACTIVE
Type: STORAGE ENGINE
Library: NULL
License: GPL
*************************** 4. row ***************************
Name: MyISAM
Status: ACTIVE
Type: STORAGE ENGINE
Library: NULL
License: GPL
...
SHOW PLUGINS output has these
columns:
Name
The name used to refer to the plugin in statements such as
INSTALL PLUGIN and
UNINSTALL PLUGIN.
Status
The plugin status, one of ACTIVE,
INACTIVE, DISABLED,
DELETING, or DELETED.
Type
The type of plugin, such as STORAGE
ENGINE, INFORMATION_SCHEMA, or
AUTHENTICATION.
Library
The name of the plugin shared library file. This is the name
used to refer to the plugin file in statements such as
INSTALL PLUGIN and
UNINSTALL PLUGIN. This file
is located in the directory named by the
plugin_dir system variable.
If the library name is NULL, the plugin
is compiled in and cannot be uninstalled with
UNINSTALL PLUGIN.
License
How the plugin is licensed (for example,
GPL).
For plugins installed with INSTALL
PLUGIN, the Name and
Library values are also registered in the
mysql.plugin system table.
For information about plugin data structures that form the basis
of the information displayed by SHOW
PLUGINS, see Section 29.2, “The MySQL Plugin API”.
Plugin information is also available from the
INFORMATION_SCHEMA
PLUGINS table. See
Section 25.21, “The INFORMATION_SCHEMA PLUGINS Table”.
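For example (a minimal sketch), the same registration and status
information can be queried directly:
SELECT name, dl FROM mysql.plugin;
SELECT PLUGIN_NAME, PLUGIN_STATUS, PLUGIN_TYPE
FROM INFORMATION_SCHEMA.PLUGINS;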
SHOW PRIVILEGES
SHOW PRIVILEGES shows the list of
system privileges that the MySQL server supports. The privileges
displayed include all static privileges, and all currently
registered dynamic privileges.
mysql> SHOW PRIVILEGES\G
*************************** 1. row ***************************
Privilege: Alter
Context: Tables
Comment: To alter the table
*************************** 2. row ***************************
Privilege: Alter routine
Context: Functions,Procedures
Comment: To alter or drop stored functions/procedures
*************************** 3. row ***************************
Privilege: Create
Context: Databases,Tables,Indexes
Comment: To create new databases and tables
*************************** 4. row ***************************
Privilege: Create routine
Context: Databases
Comment: To use CREATE FUNCTION/PROCEDURE
*************************** 5. row ***************************
Privilege: Create temporary tables
Context: Databases
Comment: To use CREATE TEMPORARY TABLE
...
Privileges belonging to a specific user are displayed by the
SHOW GRANTS statement. See
Section 13.7.7.21, “SHOW GRANTS Statement”, for more information.
SHOW PROCEDURE CODE proc_name
This statement is a MySQL extension that is available only for
servers that have been built with debugging support. It displays
a representation of the internal implementation of the named
stored procedure. A similar statement, SHOW
FUNCTION CODE, displays information about stored
functions (see Section 13.7.7.19, “SHOW FUNCTION CODE Statement”).
To use either statement, you must be the user named as the
routine DEFINER, have the
SHOW_ROUTINE privilege, or have
the SELECT privilege at the
global level.
If the named routine is available, each statement produces a
result set. Each row in the result set corresponds to one
“instruction” in the routine. The first column is
Pos, which is an ordinal number beginning
with 0. The second column is Instruction,
which contains an SQL statement (usually changed from the
original source), or a directive which has meaning only to the
stored-routine handler.
mysql> DELIMITER //
mysql> CREATE PROCEDURE p1 ()
       BEGIN
         DECLARE fanta INT DEFAULT 55;
         DROP TABLE t2;
         LOOP
           INSERT INTO t3 VALUES (fanta);
         END LOOP;
       END//
Query OK, 0 rows affected (0.01 sec)

mysql> SHOW PROCEDURE CODE p1//
+-----+----------------------------------------+
| Pos | Instruction                            |
+-----+----------------------------------------+
|   0 | set fanta@0 55                         |
|   1 | stmt 9 "DROP TABLE t2"                 |
|   2 | stmt 5 "INSERT INTO t3 VALUES (fanta)" |
|   3 | jump 2                                 |
+-----+----------------------------------------+
4 rows in set (0.00 sec)

mysql> CREATE FUNCTION test.hello (s CHAR(20))
       RETURNS CHAR(50) DETERMINISTIC
       RETURN CONCAT('Hello, ',s,'!');
Query OK, 0 rows affected (0.00 sec)

mysql> SHOW FUNCTION CODE test.hello;
+-----+---------------------------------------+
| Pos | Instruction                           |
+-----+---------------------------------------+
|   0 | freturn 254 concat('Hello, ',s@0,'!') |
+-----+---------------------------------------+
1 row in set (0.00 sec)
In this example, the nonexecutable BEGIN and
END statements have disappeared, and for the
DECLARE variable_name
statement,
only the executable part appears (the part where the default is
assigned). For each statement that is taken from source, there
is a code word stmt followed by a type (9
means DROP, 5 means
INSERT, and so on). The final row
contains an instruction jump 2, meaning
GOTO instruction #2.
SHOW PROCEDURE STATUS
[LIKE 'pattern' | WHERE expr]
This statement is a MySQL extension. It returns characteristics
of a stored procedure, such as the database, name, type,
creator, creation and modification dates, and character set
information. A similar statement, SHOW
FUNCTION STATUS, displays information about stored
functions (see Section 13.7.7.20, “SHOW FUNCTION STATUS Statement”).
To use either statement, you must be the user named as the
routine DEFINER, have the
SHOW_ROUTINE privilege, have the
SELECT privilege at the global
level, or have the CREATE
ROUTINE, ALTER ROUTINE,
or EXECUTE privilege granted at a
scope that includes the routine.
The LIKE clause, if present,
indicates which procedure or function names to match. The
WHERE clause can be given to select rows
using more general conditions, as discussed in
Section 25.50, “Extensions to SHOW Statements”.
mysql> SHOW PROCEDURE STATUS LIKE 'sp1'\G
*************************** 1. row ***************************
                  Db: test
                Name: sp1
                Type: PROCEDURE
             Definer: testuser@localhost
            Modified: 2018-08-08 13:54:11
             Created: 2018-08-08 13:54:11
       Security_type: DEFINER
             Comment:
character_set_client: utf8mb4
collation_connection: utf8mb4_0900_ai_ci
  Database Collation: utf8mb4_0900_ai_ci

mysql> SHOW FUNCTION STATUS LIKE 'hello'\G
*************************** 1. row ***************************
                  Db: test
                Name: hello
                Type: FUNCTION
             Definer: testuser@localhost
            Modified: 2020-03-10 11:10:03
             Created: 2020-03-10 11:10:03
       Security_type: DEFINER
             Comment:
character_set_client: utf8mb4
collation_connection: utf8mb4_0900_ai_ci
  Database Collation: utf8mb4_0900_ai_ci
character_set_client is the session value of
the character_set_client system
variable when the routine was created.
collation_connection is the session value of
the collation_connection system
variable when the routine was created. Database
Collation is the collation of the database with which
the routine is associated.
Stored routine information is also available from the
INFORMATION_SCHEMA
PARAMETERS and
ROUTINES tables. See
Section 25.19, “The INFORMATION_SCHEMA PARAMETERS Table”, and
Section 25.29, “The INFORMATION_SCHEMA ROUTINES Table”.
SHOW [FULL] PROCESSLIST
SHOW PROCESSLIST shows which
threads are running. If you have the
PROCESS privilege, you can see
all threads. Otherwise, you can see only your own threads (that
is, threads associated with the MySQL account that you are
using). If you do not use the FULL keyword,
only the first 100 characters of each statement are shown in the
Info field.
The SHOW PROCESSLIST statement is
very useful if you get the “too many connections”
error message and want to find out what is going on. MySQL
reserves one extra connection to be used by accounts that have
the CONNECTION_ADMIN privilege
(or the deprecated SUPER
privilege), to ensure that administrators should always be able
to connect and check the system (assuming that you are not
giving this privilege to all your users).
Threads can be killed with the
KILL statement. See
Section 13.7.8.4, “KILL Statement”.
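For example (the thread ID here is hypothetical), a long-running
statement found in the process list can be terminated without
closing the connection:
SHOW FULL PROCESSLIST;
KILL QUERY 3123;
Using KILL without the QUERY keyword
terminates the connection itself rather than only the statement.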
Example of SHOW PROCESSLIST
output:
mysql> SHOW FULL PROCESSLIST\G
*************************** 1. row ***************************
Id: 1
User: system user
Host:
db: NULL
Command: Connect
Time: 1030455
State: Waiting for master to send event
Info: NULL
*************************** 2. row ***************************
Id: 2
User: system user
Host:
db: NULL
Command: Connect
Time: 1004
State: Has read all relay log; waiting for the slave
I/O thread to update it
Info: NULL
*************************** 3. row ***************************
Id: 3112
User: replikator
Host: artemis:2204
db: NULL
Command: Binlog Dump
Time: 2144
State: Has sent all binlog to slave; waiting for binlog to be updated
Info: NULL
*************************** 4. row ***************************
Id: 3113
User: replikator
Host: iconnect2:45781
db: NULL
Command: Binlog Dump
Time: 2086
State: Has sent all binlog to slave; waiting for binlog to be updated
Info: NULL
*************************** 5. row ***************************
Id: 3123
User: stefan
Host: localhost
db: apollon
Command: Query
Time: 0
State: NULL
Info: SHOW FULL PROCESSLIST
5 rows in set (0.00 sec)
SHOW PROCESSLIST output has these
columns:
Id
The connection identifier. This is the same type of value
displayed in the ID column of the
INFORMATION_SCHEMA
PROCESSLIST table, the
PROCESSLIST_ID column of the Performance
Schema threads table, and
returned by the
CONNECTION_ID() function.
User
The MySQL user who issued the statement. A value of
system user refers to a nonclient thread
spawned by the server to handle tasks internally. This could
be the I/O or SQL thread used on replication slaves or a
delayed-row handler. For system user,
there is no host specified in the Host
column. unauthenticated user refers to a
thread that has become associated with a client connection
but for which authentication of the client user has not yet
been done. event_scheduler refers to the
thread that monitors scheduled events (see
Section 24.4, “Using the Event Scheduler”).
A User value of system
user is distinct from the
SYSTEM_USER privilege. The
former designates internal threads. The latter
distinguishes the system user and regular user account
categories (see Section 6.2.11, “Account Categories”).
Host
The host name of the client issuing the statement (except
for system user, for which there is no
host). The host name for TCP/IP connections is reported in
host_name:client_port format to make it
easier to determine which client is doing what.
db
The default database, if one is selected; otherwise
NULL.
Command
The type of command the thread is executing. For
descriptions for thread commands, see
Section 8.14, “Examining Thread Information”. The value of this
column corresponds to the
COM_xxx
commands of the client/server protocol and
Com_xxx status
variables. See Section 5.1.10, “Server Status Variables”.
Time
The time in seconds that the thread has been in its current state. For a slave SQL thread, the value is the number of seconds between the timestamp of the last replicated event and the real time of the slave machine. See Section 17.2.2, “Replication Implementation Details”.
State
An action, event, or state that indicates what the thread is
doing. Descriptions for State values can
be found at Section 8.14, “Examining Thread Information”.
Most states correspond to very quick operations. If a thread stays in a given state for many seconds, there might be a problem that needs to be investigated.
For the SHOW PROCESSLIST
statement, the value of State is
NULL.
Info
The statement the thread is executing, or
NULL if it is not executing any
statement. The statement might be the one sent to the
server, or an innermost statement if the statement executes
other statements. For example, if a CALL
statement executes a stored procedure that is executing a
SELECT statement, the
Info value shows the
SELECT statement.
Process information is also available from the
mysqladmin processlist command, the
INFORMATION_SCHEMA
PROCESSLIST table, and the
Performance Schema threads table
(see Section 4.5.2, “mysqladmin — Client for Administering a MySQL Server”,
Section 25.22, “The INFORMATION_SCHEMA PROCESSLIST Table”, and
Section 26.12.19.5, “The threads Table”). In contrast to the
INFORMATION_SCHEMA
PROCESSLIST table and
SHOW PROCESSLIST statement, which
have negative performance consequences because they require a
mutex, access to threads does not
require a mutex and has minimal impact on server performance.
The threads table also shows
information about background threads, which the
PROCESSLIST table and
SHOW PROCESSLIST do not. This
means that threads can be used to
monitor activity the other thread information sources cannot.
SHOW PROFILE [type [, type] ... ]
    [FOR QUERY n]
    [LIMIT row_count [OFFSET offset]]

type: {
    ALL
  | BLOCK IO
  | CONTEXT SWITCHES
  | CPU
  | IPC
  | MEMORY
  | PAGE FAULTS
  | SOURCE
  | SWAPS
}
The SHOW PROFILE and
SHOW PROFILES statements display
profiling information that indicates resource usage for
statements executed during the course of the current session.
The SHOW PROFILE and
SHOW PROFILES statements are
deprecated and will be removed in a future MySQL release. Use
the Performance
Schema instead; see
Section 26.19.1, “Query Profiling Using Performance Schema”.
To control profiling, use the
profiling session variable,
which has a default value of 0 (OFF). Enable
profiling by setting profiling
to 1 or ON:
mysql> SET profiling = 1;
SHOW PROFILES displays a list of
the most recent statements sent to the server. The size of the
list is controlled by the
profiling_history_size session
variable, which has a default value of 15. The maximum value is
100. Setting the value to 0 has the practical effect of
disabling profiling.
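For example (an illustrative session), profiling can be enabled
with a larger history before running the statements of interest:
SET profiling_history_size = 50;
SET profiling = 1;
-- execute the statements to be profiled, then:
SHOW PROFILES;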
All statements are profiled except SHOW
PROFILE and SHOW
PROFILES, so you will find neither of those statements
in the profile list. Malformed statements are profiled. For
example, SHOW PROFILING is an illegal
statement, and a syntax error occurs if you try to execute it,
but it will show up in the profiling list.
SHOW PROFILE displays detailed
information about a single statement. Without the FOR
QUERY n clause, the output
pertains to the most recently executed statement. If
FOR QUERY n is
included, SHOW PROFILE displays
information for statement n. The
values of n correspond to the
Query_ID values displayed by
SHOW PROFILES.
The LIMIT row_count
clause may be
given to limit the output to
row_count rows. If
LIMIT is given, OFFSET offset
may be added to
begin the output offset rows into the
full set of rows.
By default, SHOW PROFILE displays
Status and Duration
columns. The Status values are like the
State values displayed by
SHOW PROCESSLIST, although there
might be some minor differences in interpretation for the two
statements for some status values (see
Section 8.14, “Examining Thread Information”).
Optional type values may be specified
to display specific additional types of information:
ALL displays all information
BLOCK IO displays counts for block input
and output operations
CONTEXT SWITCHES displays counts for
voluntary and involuntary context switches
CPU displays user and system CPU usage
times
IPC displays counts for messages sent and
received
MEMORY is not currently implemented
PAGE FAULTS displays counts for major and
minor page faults
SOURCE displays the names of functions
from the source code, together with the name and line number
of the file in which the function occurs
SWAPS displays swap counts
Profiling is enabled per session. When a session ends, its profiling information is lost.
mysql> SELECT @@profiling;
+-------------+
| @@profiling |
+-------------+
|           0 |
+-------------+
1 row in set (0.00 sec)

mysql> SET profiling = 1;
Query OK, 0 rows affected (0.00 sec)

mysql> DROP TABLE IF EXISTS t1;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> CREATE TABLE T1 (id INT);
Query OK, 0 rows affected (0.01 sec)

mysql> SHOW PROFILES;
+----------+----------+--------------------------+
| Query_ID | Duration | Query                    |
+----------+----------+--------------------------+
|        0 | 0.000088 | SET PROFILING = 1        |
|        1 | 0.000136 | DROP TABLE IF EXISTS t1  |
|        2 | 0.011947 | CREATE TABLE t1 (id INT) |
+----------+----------+--------------------------+
3 rows in set (0.00 sec)

mysql> SHOW PROFILE;
+----------------------+----------+
| Status               | Duration |
+----------------------+----------+
| checking permissions | 0.000040 |
| creating table       | 0.000056 |
| After create         | 0.011363 |
| query end            | 0.000375 |
| freeing items        | 0.000089 |
| logging slow query   | 0.000019 |
| cleaning up          | 0.000005 |
+----------------------+----------+
7 rows in set (0.00 sec)

mysql> SHOW PROFILE FOR QUERY 1;
+--------------------+----------+
| Status             | Duration |
+--------------------+----------+
| query end          | 0.000107 |
| freeing items      | 0.000008 |
| logging slow query | 0.000015 |
| cleaning up        | 0.000006 |
+--------------------+----------+
4 rows in set (0.00 sec)

mysql> SHOW PROFILE CPU FOR QUERY 2;
+----------------------+----------+----------+------------+
| Status               | Duration | CPU_user | CPU_system |
+----------------------+----------+----------+------------+
| checking permissions | 0.000040 | 0.000038 | 0.000002   |
| creating table       | 0.000056 | 0.000028 | 0.000028   |
| After create         | 0.011363 | 0.000217 | 0.001571   |
| query end            | 0.000375 | 0.000013 | 0.000028   |
| freeing items        | 0.000089 | 0.000010 | 0.000014   |
| logging slow query   | 0.000019 | 0.000009 | 0.000010   |
| cleaning up          | 0.000005 | 0.000003 | 0.000002   |
+----------------------+----------+----------+------------+
7 rows in set (0.00 sec)
Profiling is only partially functional on some architectures.
For values that depend on the getrusage()
system call, NULL is returned on systems
such as Windows that do not support the call. In addition,
profiling is per process and not per thread. This means that
activity on threads within the server other than your own may
affect the timing information that you see.
Profiling information is also available from the
INFORMATION_SCHEMA
PROFILING table. See
Section 25.23, “The INFORMATION_SCHEMA PROFILING Table”. For example, the following
queries are equivalent:
SHOW PROFILE FOR QUERY 2;

SELECT STATE, FORMAT(DURATION, 6) AS DURATION
FROM INFORMATION_SCHEMA.PROFILING
WHERE QUERY_ID = 2 ORDER BY SEQ;
SHOW PROFILES
The SHOW PROFILES statement,
together with SHOW PROFILE,
displays profiling information that indicates resource usage for
statements executed during the course of the current session.
For more information, see Section 13.7.7.30, “SHOW PROFILE Statement”.
The SHOW PROFILE and
SHOW PROFILES statements are
deprecated and will be removed in a future MySQL release. Use
the Performance
Schema instead; see
Section 26.19.1, “Query Profiling Using Performance Schema”.
SHOW RELAYLOG EVENTS
[IN 'log_name']
[FROM pos]
[LIMIT [offset,] row_count]
[channel_option]
channel_option:
FOR CHANNEL channel
Shows the events in the relay log of a replication slave. If you
do not specify
'log_name', the
first relay log is displayed. This statement has no effect on
the master. SHOW RELAYLOG EVENTS requires the
REPLICATION SLAVE privilege.
The LIMIT clause has the same syntax as for
the SELECT statement. See
Section 13.2.10, “SELECT Statement”.
Issuing a SHOW RELAYLOG EVENTS
with no LIMIT clause could start a very
time- and resource-consuming process because the server
returns to the client the complete contents of the relay log
(including all statements modifying data that have been
received by the slave).
The optional FOR CHANNEL channel
clause enables you
to name which replication channel the statement applies to.
Providing a FOR CHANNEL channel
clause applies the
statement to a specific replication channel. If no channel is
named and no extra channels exist, the statement applies to the
default channel.
When using multiple replication channels, if a
SHOW RELAYLOG EVENTS statement
does not have a channel defined using a FOR CHANNEL
channel clause, an error is
generated. See Section 17.2.3, “Replication Channels” for more
information.
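For example (the relay log file name and channel name are
hypothetical), a bounded listing for one channel might look like
this:
SHOW RELAYLOG EVENTS IN 'relay-bin.000002' FROM 4 LIMIT 10 FOR CHANNEL 'channel_1';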
SHOW RELAYLOG EVENTS displays the
following fields for each event in the relay log:
Log_name
The name of the file that is being listed.
Pos
The position at which the event occurs.
Event_type
An identifier that describes the event type.
Server_id
The server ID of the server on which the event originated.
End_log_pos
The value of End_log_pos for this event
in the master's binary log.
Info
More detailed information about the event type. The format of this information depends on the event type.
For compressed transaction payloads, the
Transaction_payload_event is first printed as
a single unit, then it is unpacked and each event inside it is
printed.
Some events relating to the setting of user and system variables
are not included in the output from SHOW
RELAYLOG EVENTS. To get complete coverage of events
within a relay log, use mysqlbinlog.
SHOW SLAVE HOSTS
Displays a list of replication slaves currently registered with
the master. SHOW SLAVE HOSTS requires the
REPLICATION SLAVE privilege.
SHOW SLAVE HOSTS should be executed on a
server that acts as a replication master. The statement displays
information about servers that are or have been connected as
replication slaves, with each row of the result corresponding to
one slave server, as shown here:
mysql> SHOW SLAVE HOSTS;
+------------+-----------+------+-----------+--------------------------------------+
| Server_id | Host | Port | Master_id | Slave_UUID |
+------------+-----------+------+-----------+--------------------------------------+
| 192168010 | iconnect2 | 3306 | 192168011 | 14cb6624-7f93-11e0-b2c0-c80aa9429562 |
| 1921680101 | athena | 3306 | 192168011 | 07af4990-f41f-11df-a566-7ac56fdaf645 |
+------------+-----------+------+-----------+--------------------------------------+
Server_id: The unique server ID of the
slave server, as configured in the slave server's
option file, or on the command line with
--server-id=value.
Host: The host name of the slave server
as specified on the slave with the
--report-host option. This
can differ from the machine name as configured in the
operating system.
User: The slave server user name, as
specified on the slave with the
--report-user option.
Statement output includes this column only if the master
server is started with the
--show-slave-auth-info
option.
Password: The slave server password, as
specified on the slave with the
--report-password option.
Statement output includes this column only if the master
server is started with the
--show-slave-auth-info
option.
Port: The port on the master to which the
slave server is listening, as specified on the slave with
the --report-port option.
A zero in this column means that the slave port
(--report-port) was not set.
Master_id: The unique server ID of the
master server that the slave server is replicating from.
This is the server ID of the server on which SHOW
SLAVE HOSTS is executed, so this same value is
listed for each row in the result.
Slave_UUID: The globally unique ID of
this slave, as generated on the slave and found in the
slave's auto.cnf file.
SHOW SLAVE STATUS [FOR CHANNEL channel]
This statement provides status information on essential
parameters of the slave threads. It requires the
REPLICATION CLIENT privilege (or
the deprecated SUPER privilege).
SHOW SLAVE STATUS is nonblocking. When run
concurrently with STOP SLAVE,
SHOW SLAVE STATUS returns without waiting for
STOP SLAVE to finish shutting down the slave
SQL thread or slave I/O thread (or both). This permits use in
monitoring and other applications where getting an immediate
response from SHOW SLAVE STATUS is more
important than ensuring that it returned the latest data.
If you issue this statement using the mysql
client, you can use a \G statement terminator
rather than a semicolon to obtain a more readable vertical
layout:
mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: localhost
Master_User: repl
Master_Port: 13000
Connect_Retry: 60
Master_Log_File: master-bin.000002
Read_Master_Log_Pos: 1307
Relay_Log_File: slave-relay-bin.000003
Relay_Log_Pos: 1508
Relay_Master_Log_File: master-bin.000002
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 1307
Relay_Log_Space: 1858
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 1
Master_UUID: 3e11fa47-71ca-11e1-9e33-c80aa9429562
Master_Info_File:
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Reading event from the relay log
Master_Retry_Count: 10
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set: 3e11fa47-71ca-11e1-9e33-c80aa9429562:1-5
Executed_Gtid_Set: 3e11fa47-71ca-11e1-9e33-c80aa9429562:1-5
Auto_Position: 1
Replicate_Rewrite_DB:
Channel_name:
Master_TLS_Version: TLSv1.2
Master_public_key_path: public_key.pem
Get_master_public_key: 0
The Performance Schema provides tables that expose replication
information. This is similar to the information available from
the SHOW SLAVE STATUS statement,
but represented in table form. For details, see
Section 26.12.11, “Performance Schema Replication Tables”.
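For example, a query such as the following (using standard
Performance Schema table and column names) summarizes the
connection state and most recent I/O error for each channel:
SELECT CHANNEL_NAME, SERVICE_STATE, LAST_ERROR_MESSAGE
FROM performance_schema.replication_connection_status;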
The following list describes the fields returned by
SHOW SLAVE STATUS. For additional
information about interpreting their meanings, see
Section 17.1.7.1, “Checking Replication Status”.
Slave_IO_State
A copy of the State field of the
SHOW PROCESSLIST output for
the slave I/O thread. This tells you what the thread is
doing: trying to connect to the master, waiting for events
from the master, reconnecting to the master, and so on. For
a listing of possible states, see
Section 8.14.4, “Replication Slave I/O Thread States”.
Master_Host
The master host that the slave is connected to.
Master_User
The user name of the account used to connect to the master.
Master_Port
The port used to connect to the master.
Connect_Retry
The number of seconds between connect retries (default 60).
This can be set with the CHANGE MASTER
TO statement.
Master_Log_File
The name of the master binary log file from which the I/O thread is currently reading.
Read_Master_Log_Pos
The position in the current master binary log file up to which the I/O thread has read.
Relay_Log_File
The name of the relay log file from which the SQL thread is currently reading and executing.
Relay_Log_Pos
The position in the current relay log file up to which the SQL thread has read and executed.
Relay_Master_Log_File
The name of the master binary log file containing the most recent event executed by the SQL thread.
Slave_IO_Running
Whether the I/O thread is started and has connected successfully to the master. Internally, the state of this thread is represented by one of the following three values:
MYSQL_SLAVE_NOT_RUN.
The slave I/O thread is not running. For this state,
Slave_IO_Running is
No.
MYSQL_SLAVE_RUN_NOT_CONNECT.
The slave I/O thread is running, but is not connected
to a replication master. For this state,
Slave_IO_Running is
Connecting.
MYSQL_SLAVE_RUN_CONNECT.
The slave I/O thread is running, and is connected to a
replication master. For this state,
Slave_IO_Running is
Yes.
Slave_SQL_Running
Whether the SQL thread is started.
Replicate_Do_DB,
Replicate_Ignore_DB
The names of any databases that were specified with the
--replicate-do-db and
--replicate-ignore-db
options, or the CHANGE REPLICATION
FILTER statement. If the FOR
CHANNEL clause was used, the channel specific
replication filters are shown. Otherwise, the replication
filters for every replication channel are shown.
Replicate_Do_Table,
Replicate_Ignore_Table,
Replicate_Wild_Do_Table,
Replicate_Wild_Ignore_Table
The names of any tables that were specified with the
--replicate-do-table,
--replicate-ignore-table,
--replicate-wild-do-table,
and
--replicate-wild-ignore-table
options, or the CHANGE REPLICATION
FILTER statement. If the FOR
CHANNEL clause was used, the channel specific
replication filters are shown. Otherwise, the replication
filters for every replication channel are shown.
Last_Errno, Last_Error
These columns are aliases for
Last_SQL_Errno and
Last_SQL_Error.
Issuing RESET MASTER or
RESET SLAVE resets the values
shown in these columns.
When the slave SQL thread receives an error, it reports
the error first, then stops the SQL thread. This means
that there is a small window of time during which
SHOW SLAVE STATUS shows a
nonzero value for Last_SQL_Errno even
though Slave_SQL_Running still displays
Yes.
Skip_Counter
The current value of the
sql_slave_skip_counter
system variable. See
SET GLOBAL sql_slave_skip_counter Statement.
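As an illustrative sketch only, a typical sequence for skipping a
single problematic event on a slave that uses binary log position
based replication (not GTID-based replication) is:
STOP SLAVE;
SET GLOBAL sql_slave_skip_counter = 1;
START SLAVE;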
Exec_Master_Log_Pos
The position in the current master binary log file to which
the SQL thread has read and executed, marking the start of
the next transaction or event to be processed. You can use
this value with the CHANGE MASTER
TO statement's
MASTER_LOG_POS option when starting a new
slave from an existing slave, so that the new slave reads
from this point. The coordinates given by
(Relay_Master_Log_File,
Exec_Master_Log_Pos) in the master's
binary log correspond to the coordinates given by
(Relay_Log_File,
Relay_Log_Pos) in the relay log.
Inconsistencies in the sequence of transactions from the
relay log which have been executed can cause this value to
be a “low-water mark”. In other words,
transactions appearing before the position are guaranteed to
have committed, but transactions after the position may have
committed or not. If these gaps need to be corrected, use
START SLAVE
UNTIL SQL_AFTER_MTS_GAPS. See
Section 17.5.1.33, “Replication and Transaction Inconsistencies”
for more information.
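For example, using the Relay_Master_Log_File and
Exec_Master_Log_Pos values from the sample output shown earlier
(the master host name here is hypothetical), a new slave could be
started from the same coordinates:
CHANGE MASTER TO
    MASTER_HOST = 'master.example.com',
    MASTER_LOG_FILE = 'master-bin.000002',
    MASTER_LOG_POS = 1307;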
Relay_Log_Space
The total combined size of all existing relay log files.
Until_Condition,
Until_Log_File,
Until_Log_Pos
The values specified in the UNTIL clause
of the START SLAVE statement.
Until_Condition has these values:
None if no UNTIL
clause was specified
Master if the slave is reading until
a given position in the master's binary log
Relay if the slave is reading until a
given position in its relay log
SQL_BEFORE_GTIDS if the slave SQL
thread is processing transactions until it has reached
the first transaction whose GTID is listed in the
gtid_set.
SQL_AFTER_GTIDS if the slave threads
are processing all transactions until the last
transaction in the gtid_set has been
processed by both threads.
SQL_AFTER_MTS_GAPS
if a multithreaded slave's SQL threads are running until
no more gaps are found in the relay log.
Until_Log_File and
Until_Log_Pos indicate the log file name
and position that define the coordinates at which the SQL
thread stops executing.
For more information on UNTIL clauses,
see Section 13.4.2.5, “START SLAVE Statement”.
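As an illustration, the following statement (reusing the
coordinates from the sample output shown earlier) runs the slave
SQL thread only up to a given position in the master's binary log:
START SLAVE SQL_THREAD UNTIL
    MASTER_LOG_FILE = 'master-bin.000002',
    MASTER_LOG_POS = 1307;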
Master_SSL_Allowed,
Master_SSL_CA_File,
Master_SSL_CA_Path,
Master_SSL_Cert,
Master_SSL_Cipher,
Master_SSL_CRL_File,
Master_SSL_CRL_Path,
Master_SSL_Key,
Master_SSL_Verify_Server_Cert
These fields show the SSL parameters used by the slave to connect to the master, if any.
Master_SSL_Allowed has these values:
Yes if an SSL connection to the
master is permitted
No if an SSL connection to the master
is not permitted
Ignored if an SSL connection is
permitted but the slave server does not have SSL support
enabled
The values of the other SSL-related fields correspond to the
values of the MASTER_SSL_CA,
MASTER_SSL_CAPATH,
MASTER_SSL_CERT,
MASTER_SSL_CIPHER,
MASTER_SSL_CRL,
MASTER_SSL_CRLPATH,
MASTER_SSL_KEY, and
MASTER_SSL_VERIFY_SERVER_CERT options to
the CHANGE MASTER TO
statement. See Section 13.4.2.1, “CHANGE MASTER TO Statement”.
Seconds_Behind_Master
This field is an indication of how “late” the slave is:
When the slave is actively processing updates, this field shows the difference between the current timestamp on the slave and the original timestamp logged on the master for the event currently being processed on the slave.
When no event is currently being processed on the slave, this value is 0.
In essence, this field measures the time difference in
seconds between the slave SQL thread and the slave I/O
thread. If the network connection between master and slave
is fast, the slave I/O thread is very close to the master,
so this field is a good approximation of how late the slave
SQL thread is compared to the master. If the network is
slow, this is not a good approximation;
the slave SQL thread may quite often be caught up with the
slow-reading slave I/O thread, so
Seconds_Behind_Master often shows a value
of 0, even if the I/O thread is late compared to the master.
In other words, this column is useful only for
fast networks.
This time difference computation works even if the master
and slave do not have identical clock times, provided that
the difference, computed when the slave I/O thread starts,
remains constant from then on. Any changes—including
NTP updates—can lead to clock skews that can make
calculation of Seconds_Behind_Master less
reliable.
In MySQL 8.0, this field is
NULL (undefined or unknown) if the slave
SQL thread is not running, or if the SQL thread has consumed
all of the relay log and the slave I/O thread is not
running. (In older versions of MySQL, this field was
NULL if the slave SQL thread or the slave
I/O thread was not running or was not connected to the
master.) If the I/O thread is running but the relay log is
exhausted, Seconds_Behind_Master is set
to 0.
The value of Seconds_Behind_Master is
based on the timestamps stored in events, which are
preserved through replication. This means that if a master
M1 is itself a slave of M0, any event from M1's binary log
that originates from M0's binary log has M0's timestamp for
that event. This enables MySQL to replicate
TIMESTAMP successfully.
However, the problem for
Seconds_Behind_Master is that if M1 also
receives direct updates from clients, the
Seconds_Behind_Master value randomly
fluctuates because sometimes the last event from M1
originates from M0 and sometimes is the result of a direct
update on M1.
When using a multithreaded slave, you should keep in mind
that this value is based on
Exec_Master_Log_Pos, and so may not
reflect the position of the most recently committed
transaction.
Last_IO_Errno,
Last_IO_Error
The error number and error message of the most recent error
that caused the I/O thread to stop. An error number of 0 and
message of the empty string mean “no error.” If
the Last_IO_Error value is not empty, the
error values also appear in the slave's error log.
I/O error information includes a timestamp showing when the
most recent I/O thread error occurred. This timestamp uses
the format YYMMDD hh:mm:ss, and
appears in the Last_IO_Error_Timestamp
column.
Issuing RESET MASTER or
RESET SLAVE resets the values
shown in these columns.
Last_SQL_Errno,
Last_SQL_Error
The error number and error message of the most recent error
that caused the SQL thread to stop. An error number of 0 and
message of the empty string mean “no error.” If
the Last_SQL_Error value is not empty,
the error values also appear in the slave's error log.
If the slave is multithreaded, the SQL thread is the
coordinator for worker threads. In this case, the
Last_SQL_Error field shows exactly what
the Last_Error_Message column in the
Performance Schema
replication_applier_status_by_coordinator
table shows. The field value is modified to suggest that
there may be more failures in the other worker threads which
can be seen in the
replication_applier_status_by_worker
table that shows each worker thread's status. If that table
is not available, the slave error log can be used. The log
or the
replication_applier_status_by_worker
table should also be used to learn more about the failure
shown by SHOW SLAVE STATUS or
the coordinator table.
SQL error information includes a timestamp showing when the
most recent SQL thread error occurred. This timestamp uses
the format YYMMDD hh:mm:ss, and
appears in the Last_SQL_Error_Timestamp
column.
Issuing RESET MASTER or
RESET SLAVE resets the values
shown in these columns.
In MySQL 8.0, all error codes and messages
displayed in the Last_SQL_Errno and
Last_SQL_Error columns correspond to
error values listed in
Section B.3.1, “Server Error Message Reference”. This was not
always true in previous versions. (Bug #11760365, Bug
#52768)
Replicate_Ignore_Server_Ids
Any server IDs that have been specified using the
IGNORE_SERVER_IDS option of the
CHANGE MASTER TO statement,
so that the slave ignores events from these servers. This
option is used in a circular or other multi-master
replication setup when one of the servers is removed. If any
server IDs have been set in this way, a comma-delimited list
of one or more numbers is shown. If no server IDs have been
set, the field is blank.
The Ignored_server_ids value in the
slave_master_info table also shows the
server IDs to be ignored, but as a space-delimited list,
preceded by the total number of server IDs to be ignored.
For example, if a CHANGE MASTER
TO statement containing the
IGNORE_SERVER_IDS = (2,6,9) option has
been issued to tell a slave to ignore masters having the
server ID 2, 6, or 9, that information appears as shown
here:
Replicate_Ignore_Server_Ids: 2, 6, 9
Ignored_server_ids: 3, 2, 6, 9
Replicate_Ignore_Server_Ids filtering is
performed by the I/O thread, rather than by the SQL thread,
which means that events which are filtered out are not
written to the relay log. This differs from the filtering
actions taken by server options such
--replicate-do-table, which
apply to the SQL thread.
From MySQL 8.0.3, a deprecation warning is issued if
SET GTID_MODE=ON is issued when any
channel has existing server IDs set with
IGNORE_SERVER_IDS. Before starting
GTID-based replication, use
SHOW SLAVE STATUS to check
for and clear all ignored server ID lists on the servers
involved. You can clear a list by issuing a
CHANGE MASTER TO statement
containing the IGNORE_SERVER_IDS option
with an empty list.
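For example, the list of ignored server IDs for the current
channel can be cleared as follows:
CHANGE MASTER TO IGNORE_SERVER_IDS = ();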
Master_Server_Id
The server_id value from
the master.
Master_UUID
The server_uuid value from
the master.
Master_Info_File
The location of the master.info file,
if a file rather than a table is used for the slave's master
info repository. The use of a file for the master info log
has been superseded by crash-safe slave tables, and the
master_info_repository=FILE
setting is deprecated.
SQL_Delay
The number of seconds that the slave must lag the master.
SQL_Remaining_Delay
When Slave_SQL_Running_State is
Waiting until MASTER_DELAY seconds after master
executed event, this field contains the number of
delay seconds remaining. At other times, this field is
NULL.
Slave_SQL_Running_State
The state of the SQL thread (analogous to
Slave_IO_State). The value is identical
to the State value of the SQL thread as
displayed by SHOW
PROCESSLIST.
Section 8.14.5, “Replication Slave SQL Thread States”, provides a
listing of possible states.
Master_Retry_Count
The number of times the slave can attempt to reconnect to
the master in the event of a lost connection. This value can
be set using the MASTER_RETRY_COUNT
option of the CHANGE MASTER
TO statement (preferred) or the older
--master-retry-count server
option (still supported for backward compatibility).
Master_Bind
The network interface that the slave is bound to, if any.
This is set using the MASTER_BIND option
for the CHANGE MASTER TO
statement.
Last_IO_Error_Timestamp
A timestamp in YYMMDD hh:mm:ss
format that shows when the most recent I/O error took place.
Last_SQL_Error_Timestamp
A timestamp in YYMMDD hh:mm:ss
format that shows when the most recent SQL error occurred.
Retrieved_Gtid_Set
The set of global transaction IDs corresponding to all transactions received by this slave. Empty if GTIDs are not in use. See GTID Sets for more information.
This is the set of all GTIDs that exist or have existed in
the relay logs. Each GTID is added as soon as the
Gtid_log_event is received. This can
cause partially transmitted transactions to have their GTIDs
included in the set.
When all relay logs are lost due to executing
RESET SLAVE or
CHANGE MASTER TO, or due to
the effects of the
--relay-log-recovery option,
the set is cleared. When
relay_log_purge = 1, the
newest relay log is always kept, and the set is not cleared.
Executed_Gtid_Set
The set of global transaction IDs written in the binary log.
This is the same as the value for the global
gtid_executed system
variable on this server, as well as the value for
Executed_Gtid_Set in the output of
SHOW MASTER STATUS on this
server. Empty if GTIDs are not in use. See
GTID Sets for
more information.
Auto_Position
1 if autopositioning is in use; otherwise 0.
Replicate_Rewrite_DB
The Replicate_Rewrite_DB value displays
any replication filtering rules that were specified. For
example, if the following replication filter rule was set:
CHANGE REPLICATION FILTER REPLICATE_REWRITE_DB=((db1,db2), (db3,db4));
the Replicate_Rewrite_DB value displays:
Replicate_Rewrite_DB: (db1,db2),(db3,db4)
For more information, see Section 13.4.2.2, “CHANGE REPLICATION FILTER Statement”.
Channel_name
The replication channel which is being displayed. There is always a default replication channel, and more replication channels can be added. See Section 17.2.3, “Replication Channels” for more information.
Master_TLS_Version
The TLS version used on the master. For TLS version information, see Section 6.3.2, “Encrypted Connection TLS Protocols and Ciphers”.
Master_public_key_path
The path name to a file containing a slave-side copy of the
public key required by the master for RSA key pair-based
password exchange. The file must be in PEM format. This
column applies to slaves that authenticate with the
sha256_password or
caching_sha2_password authentication
plugin.
If Master_public_key_path is given and
specifies a valid public key file, it takes precedence over
Get_master_public_key.
Get_master_public_key
Whether to request from the master the public key required
for RSA key pair-based password exchange. This column
applies to slaves that authenticate with the
caching_sha2_password authentication
plugin. For that plugin, the master does not send the public
key unless requested.
If Master_public_key_path is given and
specifies a valid public key file, it takes precedence over
Get_master_public_key.
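For illustration, either of the following statements (the key file
name matches the sample output above) configures this behavior:
CHANGE MASTER TO MASTER_PUBLIC_KEY_PATH = 'public_key.pem';
CHANGE MASTER TO GET_MASTER_PUBLIC_KEY = 1;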
SHOW [GLOBAL | SESSION] STATUS
[LIKE 'pattern' | WHERE expr]
SHOW STATUS provides server
status information (see
Section 5.1.10, “Server Status Variables”). This statement does
not require any privilege. It requires only the ability to
connect to the server.
Status variable information is also available from these sources:
Performance Schema tables. See Section 26.12.15, “Performance Schema Status Variable Tables”.
The mysqladmin extended-status command. See Section 4.5.2, “mysqladmin — Client for Administering a MySQL Server”.
For SHOW STATUS, a
LIKE clause, if present, indicates
which variable names to match. A WHERE clause
can be given to select rows using more general conditions, as
discussed in Section 25.50, “Extensions to SHOW Statements”.
SHOW STATUS accepts an optional
GLOBAL or SESSION variable
scope modifier:
With a GLOBAL modifier, the statement
displays the global status values. A global status variable
may represent status for some aspect of the server itself
(for example, Aborted_connects), or the
aggregated status over all connections to MySQL (for
example, Bytes_received and
Bytes_sent). If a variable has no global
value, the session value is displayed.
With a SESSION modifier, the statement
displays the status variable values for the current
connection. If a variable has no session value, the global
value is displayed. LOCAL is a synonym
for SESSION.
If no modifier is present, the default is
SESSION.
The scope for each status variable is listed at Section 5.1.10, “Server Status Variables”.
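For example, these illustrative statements display one status
variable at the global scope and one at the session scope:
SHOW GLOBAL STATUS LIKE 'Aborted_connects';
SHOW SESSION STATUS LIKE 'Bytes_received';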
Each invocation of the SHOW
STATUS statement uses an internal temporary table and
increments the global
Created_tmp_tables value.
Partial output is shown here. The list of names and values may differ for your server. The meaning of each variable is given in Section 5.1.10, “Server Status Variables”.
mysql> SHOW STATUS;
+--------------------------+------------+
| Variable_name | Value |
+--------------------------+------------+
| Aborted_clients | 0 |
| Aborted_connects | 0 |
| Bytes_received | 155372598 |
| Bytes_sent | 1176560426 |
| Connections | 30023 |
| Created_tmp_disk_tables | 0 |
| Created_tmp_tables | 8340 |
| Created_tmp_files | 60 |
...
| Open_tables | 1 |
| Open_files | 2 |
| Open_streams | 0 |
| Opened_tables | 44600 |
| Questions | 2026873 |
...
| Table_locks_immediate | 1920382 |
| Table_locks_waited | 0 |
| Threads_cached | 0 |
| Threads_created | 30022 |
| Threads_connected | 1 |
| Threads_running | 1 |
| Uptime | 80380 |
+--------------------------+------------+
With a LIKE clause, the statement
displays only rows for those variables with names that match the
pattern:
mysql> SHOW STATUS LIKE 'Key%';
+--------------------+----------+
| Variable_name | Value |
+--------------------+----------+
| Key_blocks_used | 14955 |
| Key_read_requests | 96854827 |
| Key_reads | 162040 |
| Key_write_requests | 7589728 |
| Key_writes | 3813196 |
+--------------------+----------+
SHOW TABLE STATUS
[{FROM | IN} db_name]
[LIKE 'pattern' | WHERE expr]
SHOW TABLE STATUS works like
SHOW TABLES, but provides a lot
of information about each non-TEMPORARY
table. You can also get this list using the mysqlshow
--status db_name command.
The LIKE clause, if present,
indicates which table names to match. The
WHERE clause can be given to select rows
using more general conditions, as discussed in
Section 25.50, “Extensions to SHOW Statements”.
This statement also displays information about views.
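For example, a statement such as the following (the database and
table names here are hypothetical) displays the status of a single
table:
SHOW TABLE STATUS FROM mydb LIKE 'mytable';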
SHOW TABLE STATUS output has
these columns:
Name
The name of the table.
Engine
The storage engine for the table. See Chapter 15, The InnoDB Storage Engine, and Chapter 16, Alternative Storage Engines.
For partitioned tables, Engine shows the
name of the storage engine used by all partitions.
Version
This column is unused. With the removal of
.frm files in MySQL 8.0, this column
now reports a hardcoded value of 10,
which is the last .frm file version
used in MySQL 5.7.
Row_format
The row-storage format (Fixed,
Dynamic, Compressed,
Redundant, Compact).
For MyISAM tables,
Dynamic corresponds to what
myisamchk -dvv reports as
Packed.
Rows
The number of rows. Some storage engines, such as
MyISAM, store the exact count. For other
storage engines, such as InnoDB, this
value is an approximation, and may vary from the actual
value by as much as 40% to 50%. In such cases, use
SELECT COUNT(*) to obtain an accurate
count.
The Rows value is NULL
for INFORMATION_SCHEMA tables.
For InnoDB tables, the row
count is only a rough estimate used in SQL optimization.
(This is also true if the
InnoDB table is partitioned.)
Avg_row_length
The average row length.
Data_length
For MyISAM,
Data_length is the length of the data
file, in bytes.
For InnoDB,
Data_length is the approximate amount of
space allocated for the clustered index, in bytes.
Specifically, it is the clustered index size, in pages,
multiplied by the InnoDB page size.
Refer to the notes at the end of this section for information regarding other storage engines.
Max_data_length
For MyISAM,
Max_data_length is the maximum length of the
data file. This is the total number of bytes of data that
can be stored in the table, given the data pointer size
used.
Unused for InnoDB.
Refer to the notes at the end of this section for information regarding other storage engines.
Index_length
For MyISAM,
Index_length is the length of the index
file, in bytes.
For InnoDB,
Index_length is the approximate amount of
space allocated for non-clustered indexes, in bytes.
Specifically, it is the sum of non-clustered index sizes, in
pages, multiplied by the InnoDB page
size.
Refer to the notes at the end of this section for information regarding other storage engines.
Data_free
The number of allocated but unused bytes.
InnoDB tables report the free space of
the tablespace to which the table belongs. For a table
located in the shared tablespace, this is the free space of
the shared tablespace. If you are using multiple tablespaces
and the table has its own tablespace, the free space is for
only that table. Free space means the number of bytes in
completely free extents minus a safety margin. Even if free
space displays as 0, it may be possible to insert rows as
long as new extents need not be allocated.
For NDB Cluster, Data_free shows the
space allocated on disk for, but not used by, a Disk Data
table or fragment on disk. (In-memory data resource usage is
reported by the Data_length column.)
For partitioned tables, this value is only an estimate and
may not be absolutely correct. A more accurate method of
obtaining this information in such cases is to query the
INFORMATION_SCHEMA
PARTITIONS table, as shown in
this example:
SELECT SUM(DATA_FREE)
FROM INFORMATION_SCHEMA.PARTITIONS
WHERE TABLE_SCHEMA = 'mydb'
AND TABLE_NAME = 'mytable';
For more information, see Section 25.20, “The INFORMATION_SCHEMA PARTITIONS Table”.
Auto_increment
The next AUTO_INCREMENT value.
Create_time
When the table was created.
Update_time
When the data file was last updated. For some storage
engines, this value is NULL. For example,
InnoDB stores multiple tables in its
system
tablespace and the data file timestamp does not
apply. Even with
file-per-table
mode with each InnoDB table in a separate
.ibd file,
change
buffering can delay the write to the data file, so
the file modification time is different from the time of the
last insert, update, or delete. For
MyISAM, the data file timestamp is used;
however, on Windows the timestamp is not updated by updates,
so the value is inaccurate.
Update_time displays a timestamp value
for the last UPDATE,
INSERT, or
DELETE performed on
InnoDB tables that are not partitioned.
For MVCC, the timestamp value reflects the
COMMIT time, which is
considered the last update time. Timestamps are not
persisted when the server is restarted or when the table is
evicted from the InnoDB data dictionary
cache.
Check_time
When the table was last checked. Not all storage engines
update this time, in which case, the value is always
NULL.
For partitioned InnoDB tables,
Check_time is always
NULL.
Collation
The table default collation. The output does not explicitly list the table default character set, but the collation name begins with the character set name.
Checksum
The live checksum value, if any.
Create_options
Extra options used with CREATE
TABLE.
Create_options shows
partitioned for a partitioned table.
Prior to MySQL 8.0.16, Create_options
shows the ENCRYPTION clause specified for
tables created in file-per-table tablespaces. As of MySQL
8.0.16, it shows the encryption clause for file-per-table
tablespaces if the table is encrypted or if the specified
encryption differs from the schema encryption. The
encryption clause is not shown for tables created in general
tablespaces. To identify encrypted file-per-table and
general tablespaces, query the
INNODB_TABLESPACES
ENCRYPTION column.
When creating a table with
strict mode
disabled, the storage engine's default row format is
used if the specified row format is not supported. The
actual row format of the table is reported in the
Row_format column.
Create_options shows the row format that
was specified in the CREATE
TABLE statement.
When altering the storage engine of a table, table options
that are not applicable to the new storage engine are
retained in the table definition to enable reverting the
table with its previously defined options to the original
storage engine, if necessary.
Create_options may show retained options.
Comment
The comment used when creating the table (or information as to why MySQL could not access the table information).
For InnoDB tables,
SHOW TABLE STATUS does not
give accurate statistics except for the physical size
reserved by the table. The row count is only a rough
estimate used in SQL optimization.
For NDB tables, the output of
this statement shows appropriate values for the
Avg_row_length and
Data_length columns, with the exception
that BLOB columns are not
taken into account.
For NDB tables,
Data_length includes data stored in main
memory only; the Max_data_length and
Data_free columns apply to Disk Data.
For NDB Cluster Disk Data tables,
Max_data_length shows the space allocated
for the disk part of a Disk Data table or fragment.
(In-memory data resource usage is reported by the
Data_length column.)
For MEMORY tables, the
Data_length,
Max_data_length, and
Index_length values approximate the
actual amount of allocated memory. The allocation algorithm
reserves memory in large amounts to reduce the number of
allocation operations.
For views, most columns displayed by
SHOW TABLE STATUS are 0 or
NULL except that Name
indicates the view name, Create_time
indicates the creation time, and Comment
says VIEW.
Table information is also available from the
INFORMATION_SCHEMA
TABLES table. See
Section 25.37, “The INFORMATION_SCHEMA TABLES Table”.
SHOW [EXTENDED] [FULL] TABLES
[{FROM | IN} db_name]
[LIKE 'pattern' | WHERE expr]
SHOW TABLES lists the
non-TEMPORARY tables in a given database. You
can also get this list using the mysqlshow
db_name command. The
LIKE clause, if present, indicates
which table names to match. The WHERE clause
can be given to select rows using more general conditions, as
discussed in Section 25.50, “Extensions to SHOW Statements”.
Matching performed by the LIKE clause is
dependent on the setting of the
lower_case_table_names system
variable.
The optional EXTENDED modifier causes
SHOW TABLES to list hidden tables
created by failed ALTER TABLE
statements. These temporary tables have names beginning with
#sql and can be dropped using
DROP TABLE.
This statement also lists any views in the database. The
optional FULL modifier causes
SHOW TABLES to display a second
output column with values of BASE TABLE for a
table, VIEW for a view, or SYSTEM
VIEW for an INFORMATION_SCHEMA
table.
If you have no privileges for a base table or view, it does not
show up in the output from SHOW
TABLES or mysqlshow db_name.
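For example, with a hypothetical database named mydb, the first of
the following statements lists tables and views together with their
types, and the second also lists any hidden #sql
tables left behind by failed ALTER TABLE statements:
SHOW FULL TABLES FROM mydb;
SHOW EXTENDED TABLES FROM mydb LIKE '#sql%';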
Table information is also available from the
INFORMATION_SCHEMA
TABLES table. See
Section 25.37, “The INFORMATION_SCHEMA TABLES Table”.
SHOW TRIGGERS
[{FROM | IN} db_name]
[LIKE 'pattern' | WHERE expr]
SHOW TRIGGERS lists the triggers
currently defined for tables in a database (the default database
unless a FROM clause is given). This
statement returns results only for databases and tables for
which you have the TRIGGER
privilege. The LIKE clause, if
present, indicates which table names (not trigger names) to
match and causes the statement to display triggers for those
tables. The WHERE clause can be given to
select rows using more general conditions, as discussed in
Section 25.50, “Extensions to SHOW Statements”.
For the ins_sum trigger defined in
Section 24.3, “Using Triggers”, the output of
SHOW TRIGGERS is as shown here:
mysql> SHOW TRIGGERS LIKE 'acc%'\G
*************************** 1. row ***************************
Trigger: ins_sum
Event: INSERT
Table: account
Statement: SET @sum = @sum + NEW.amount
Timing: BEFORE
Created: 2018-08-08 10:10:12.61
sql_mode: ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,
NO_ZERO_IN_DATE,NO_ZERO_DATE,
ERROR_FOR_DIVISION_BY_ZERO,
NO_ENGINE_SUBSTITUTION
Definer: me@localhost
character_set_client: utf8mb4
collation_connection: utf8mb4_0900_ai_ci
Database Collation: utf8mb4_0900_ai_ci
SHOW TRIGGERS output has these
columns:
Trigger
The name of the trigger.
Event
The trigger event. This is the type of operation on the
associated table for which the trigger activates. The value
is INSERT (a row was inserted),
DELETE (a row was deleted), or
UPDATE (a row was modified).
Table
The table for which the trigger is defined.
Statement
The trigger body; that is, the statement executed when the trigger activates.
Timing
Whether the trigger activates before or after the triggering
event. The value is BEFORE or
AFTER.
Created
The date and time when the trigger was created. This is a
TIMESTAMP(2) value (with a fractional
part in hundredths of seconds) for triggers.
sql_mode
The SQL mode in effect when the trigger was created, and under which the trigger executes. For the permitted values, see Section 5.1.11, “Server SQL Modes”.
Definer
The account of the user who created the trigger, in
'user_name'@'host_name' format.
character_set_client
The session value of the
character_set_client system
variable when the trigger was created.
collation_connection
The session value of the
collation_connection system
variable when the trigger was created.
Database Collation
The collation of the database with which the trigger is associated.
Trigger information is also available from the
INFORMATION_SCHEMA
TRIGGERS table. See
Section 25.41, “The INFORMATION_SCHEMA TRIGGERS Table”.
SHOW [GLOBAL | SESSION] VARIABLES
[LIKE 'pattern' | WHERE expr]
SHOW VARIABLES shows the values
of MySQL system variables (see
Section 5.1.8, “Server System Variables”). This statement does
not require any privilege. It requires only the ability to
connect to the server.
System variable information is also available from these sources:
Performance Schema tables. See Section 26.12.14, “Performance Schema System Variable Tables”.
The mysqladmin variables command. See Section 4.5.2, “mysqladmin — Client for Administering a MySQL Server”.
For SHOW VARIABLES, a
LIKE clause, if present, indicates
which variable names to match. A WHERE clause
can be given to select rows using more general conditions, as
discussed in Section 25.50, “Extensions to SHOW Statements”.
SHOW VARIABLES accepts an
optional GLOBAL or SESSION
variable scope modifier:
With a GLOBAL modifier, the statement
displays global system variable values. These are the values
used to initialize the corresponding session variables for
new connections to MySQL. If a variable has no global value,
no value is displayed.
With a SESSION modifier, the statement
displays the system variable values that are in effect for
the current connection. If a variable has no session value,
the global value is displayed. LOCAL is a
synonym for SESSION.
If no modifier is present, the default is
SESSION.
The scope for each system variable is listed at Section 5.1.8, “Server System Variables”.
SHOW VARIABLES is subject to a
version-dependent display-width limit. For variables with very
long values that are not completely displayed, use
SELECT as a workaround. For
example:
SELECT @@GLOBAL.innodb_data_file_path;
Most system variables can be set at server startup (read-only
variables such as
version_comment are
exceptions). Many can be changed at runtime with the
SET
statement. See Section 5.1.9, “Using System Variables”, and
Section 13.7.6.1, “SET Syntax for Variable Assignment”.
Partial output is shown here. The list of names and values may differ for your server. Section 5.1.8, “Server System Variables”, describes the meaning of each variable, and Section 5.1.1, “Configuring the Server”, provides information about tuning them.
mysql> SHOW VARIABLES;
+--------------------------------------------+------------------------------+
| Variable_name | Value |
+--------------------------------------------+------------------------------+
| activate_all_roles_on_login | OFF |
| auto_generate_certs | ON |
| auto_increment_increment | 1 |
| auto_increment_offset | 1 |
| autocommit | ON |
| automatic_sp_privileges | ON |
| avoid_temporal_upgrade | OFF |
| back_log | 151 |
| basedir | /usr/ |
| big_tables | OFF |
| bind_address | * |
| binlog_cache_size | 32768 |
| binlog_checksum | CRC32 |
| binlog_direct_non_transactional_updates | OFF |
| binlog_error_action | ABORT_SERVER |
| binlog_expire_logs_seconds | 2592000 |
| binlog_format | ROW |
| binlog_group_commit_sync_delay | 0 |
| binlog_group_commit_sync_no_delay_count | 0 |
| binlog_gtid_simple_recovery | ON |
| binlog_max_flush_queue_time | 0 |
| binlog_order_commits | ON |
| binlog_row_image | FULL |
| binlog_row_metadata | MINIMAL |
| binlog_row_value_options | |
| binlog_rows_query_log_events | OFF |
| binlog_stmt_cache_size | 32768 |
| binlog_transaction_dependency_history_size | 25000 |
| binlog_transaction_dependency_tracking | COMMIT_ORDER |
| block_encryption_mode | aes-128-ecb |
| bulk_insert_buffer_size | 8388608 |
...
| max_allowed_packet | 67108864 |
| max_binlog_cache_size | 18446744073709547520 |
| max_binlog_size | 1073741824 |
| max_binlog_stmt_cache_size | 18446744073709547520 |
| max_connect_errors | 100 |
| max_connections | 151 |
| max_delayed_threads | 20 |
| max_digest_length | 1024 |
| max_error_count | 1024 |
| max_execution_time | 0 |
| max_heap_table_size | 16777216 |
| max_insert_delayed_threads | 20 |
| max_join_size | 18446744073709551615 |
...
| thread_handling | one-thread-per-connection |
| thread_stack | 286720 |
| time_zone | SYSTEM |
| timestamp | 1530906638.765316 |
| tls_version | TLSv1,TLSv1.1,TLSv1.2 |
| tmp_table_size | 16777216 |
| tmpdir | /tmp |
| transaction_alloc_block_size | 8192 |
| transaction_allow_batching | OFF |
| transaction_isolation | REPEATABLE-READ |
| transaction_prealloc_size | 4096 |
| transaction_read_only | OFF |
| transaction_write_set_extraction | XXHASH64 |
| unique_checks | ON |
| updatable_views_with_limit | YES |
| version | 8.0.12 |
| version_comment | MySQL Community Server - GPL |
| version_compile_machine | x86_64 |
| version_compile_os | Linux |
| version_compile_zlib | 1.2.11 |
| wait_timeout | 28800 |
| warning_count | 0 |
| windowing_use_high_precision | ON |
+--------------------------------------------+------------------------------+
With a LIKE clause, the statement
displays only rows for those variables with names that match the
pattern. To obtain the row for a specific variable, use a
LIKE clause as shown:
SHOW VARIABLES LIKE 'max_join_size';
SHOW SESSION VARIABLES LIKE 'max_join_size';
To get a list of variables whose name match a pattern, use the
% wildcard character in a
LIKE clause:
SHOW VARIABLES LIKE '%size%';
SHOW GLOBAL VARIABLES LIKE '%size%';
Wildcard characters can be used in any position within the
pattern to be matched. Strictly speaking, because
_ is a wildcard that matches any single
character, you should escape it as \_ to
match it literally. In practice, this is rarely necessary.
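For example, escaping the underscores in the following illustrative
pattern causes them to be matched literally:
SHOW GLOBAL VARIABLES LIKE 'max\_join\_size';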
SHOW WARNINGS [LIMIT [offset,] row_count]
SHOW COUNT(*) WARNINGS
SHOW WARNINGS is a diagnostic
statement that displays information about the conditions
(errors, warnings, and notes) resulting from executing a
statement in the current session. Warnings are generated for DML
statements such as INSERT,
UPDATE, and
LOAD DATA as well as DDL
statements such as CREATE TABLE
and ALTER TABLE.
The LIMIT clause has the same syntax as for
the SELECT statement. See
Section 13.2.10, “SELECT Statement”.
SHOW WARNINGS is also used
following EXPLAIN, to display the
extended information generated by
EXPLAIN. See
Section 8.8.3, “Extended EXPLAIN Output Format”.
SHOW WARNINGS displays
information about the conditions resulting from execution of the
most recent nondiagnostic statement in the current session. If
the most recent statement resulted in an error during parsing,
SHOW WARNINGS shows the resulting
conditions, regardless of statement type (diagnostic or
nondiagnostic).
The SHOW COUNT(*)
WARNINGS diagnostic statement displays the total
number of errors, warnings, and notes. You can also retrieve
this number from the
warning_count system variable:
SHOW COUNT(*) WARNINGS;
SELECT @@warning_count;
A difference in these statements is that the first is a
diagnostic statement that does not clear the message list. The
second, because it is a SELECT
statement, is considered nondiagnostic and does clear the message
list.
A related diagnostic statement, SHOW
ERRORS, shows only error conditions (it excludes
warnings and notes), and the
SHOW COUNT(*)
ERRORS statement displays the total number of errors.
See Section 13.7.7.17, “SHOW ERRORS Statement”. GET
DIAGNOSTICS can be used to examine information for
individual conditions. See Section 13.6.7.3, “GET DIAGNOSTICS Statement”.
Here is a simple example that shows data-conversion warnings for
INSERT. The example assumes that
strict SQL mode is disabled. With strict mode enabled, the
warnings would become errors and terminate the
INSERT.
mysql> CREATE TABLE t1 (a TINYINT NOT NULL, b CHAR(4));
Query OK, 0 rows affected (0.05 sec)

mysql> INSERT INTO t1 VALUES(10,'mysql'), (NULL,'test'), (300,'xyz');
Query OK, 3 rows affected, 3 warnings (0.00 sec)
Records: 3  Duplicates: 0  Warnings: 3

mysql> SHOW WARNINGS\G
*************************** 1. row ***************************
  Level: Warning
   Code: 1265
Message: Data truncated for column 'b' at row 1
*************************** 2. row ***************************
  Level: Warning
   Code: 1048
Message: Column 'a' cannot be null
*************************** 3. row ***************************
  Level: Warning
   Code: 1264
Message: Out of range value for column 'a' at row 3
3 rows in set (0.00 sec)
The max_error_count system
variable controls the maximum number of error, warning, and note
messages for which the server stores information, and thus the
number of messages that SHOW
WARNINGS displays. To change the number of messages
the server can store, change the value of
max_error_count.
max_error_count controls only
how many messages are stored, not how many are counted. The
value of warning_count is not
limited by max_error_count,
even if the number of messages generated exceeds
max_error_count. The following
example demonstrates this. The ALTER
TABLE statement produces three warning messages
(strict SQL mode is disabled for the example to prevent an error
from occurring after a single conversion issue). Only one message
is stored and displayed because
max_error_count has been set to
1, but all three are counted (as shown by the value of
warning_count):
mysql> SHOW VARIABLES LIKE 'max_error_count';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| max_error_count | 1024  |
+-----------------+-------+
1 row in set (0.00 sec)

mysql> SET max_error_count=1, sql_mode = '';
Query OK, 0 rows affected (0.00 sec)

mysql> ALTER TABLE t1 MODIFY b CHAR;
Query OK, 3 rows affected, 3 warnings (0.00 sec)
Records: 3  Duplicates: 0  Warnings: 3

mysql> SHOW WARNINGS;
+---------+------+----------------------------------------+
| Level   | Code | Message                                |
+---------+------+----------------------------------------+
| Warning | 1263 | Data truncated for column 'b' at row 1 |
+---------+------+----------------------------------------+
1 row in set (0.00 sec)

mysql> SELECT @@warning_count;
+-----------------+
| @@warning_count |
+-----------------+
|               3 |
+-----------------+
1 row in set (0.01 sec)
To disable message storage, set
max_error_count to 0. In this
case, warning_count still
indicates how many warnings occurred, but messages are not
stored and cannot be displayed.
The sql_notes system variable
controls whether note messages increment
warning_count and whether the
server stores them. By default,
sql_notes is 1, but if set to
0, notes do not increment
warning_count and the server
does not store them:
mysql> SET sql_notes = 1;
mysql> DROP TABLE IF EXISTS test.no_such_table;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> SHOW WARNINGS;
+-------+------+------------------------------------+
| Level | Code | Message                            |
+-------+------+------------------------------------+
| Note  | 1051 | Unknown table 'test.no_such_table' |
+-------+------+------------------------------------+
1 row in set (0.00 sec)

mysql> SET sql_notes = 0;
mysql> DROP TABLE IF EXISTS test.no_such_table;
Query OK, 0 rows affected (0.00 sec)

mysql> SHOW WARNINGS;
Empty set (0.00 sec)
The MySQL server sends to each client a count indicating the
total number of errors, warnings, and notes resulting from the
most recent statement executed by that client. From the C API,
this value can be obtained by calling
mysql_warning_count(). See
Section 28.7.6.83, “mysql_warning_count()”.
In the mysql client, you can enable and
disable automatic warnings display using the
warnings and nowarning
commands, respectively, or their shortcuts,
\W and \w (see
Section 4.5.1.2, “mysql Client Commands”). For example:
mysql> \W
Show warnings enabled.
mysql> SELECT 1/0;
+------+
| 1/0  |
+------+
| NULL |
+------+
1 row in set, 1 warning (0.03 sec)

Warning (Code 1365): Division by 0
mysql> \w
Show warnings disabled.
BINLOG 'str'
BINLOG is an internal-use
statement. It is generated by the mysqlbinlog
program as the printable representation of certain events in
binary log files. (See Section 4.6.8, “mysqlbinlog — Utility for Processing Binary Log Files”.) The
'str' value is a
base 64-encoded string that the server decodes to determine the
data change indicated by the corresponding event.
To execute BINLOG statements when
applying mysqlbinlog output, a user account
requires the BINLOG_ADMIN
privilege (or the deprecated
SUPER privilege), or the
REPLICATION_APPLIER privilege
plus the appropriate privileges to execute each log event.
This statement can execute only format description events and row events.
CACHE INDEX {
tbl_index_list [, tbl_index_list] ...
| tbl_name PARTITION (partition_list)
}
IN key_cache_name
tbl_index_list:
tbl_name [{INDEX|KEY} (index_name[, index_name] ...)]
partition_list: {
partition_name[, partition_name] ...
| ALL
}
The CACHE INDEX statement assigns
table indexes to a specific key cache. It applies only to
MyISAM tables, including partitioned
MyISAM tables. After the indexes have been
assigned, they can be preloaded into the cache if desired with
LOAD INDEX INTO
CACHE.
The following statement assigns indexes from the tables
t1, t2, and
t3 to the key cache named
hot_cache:
mysql> CACHE INDEX t1, t2, t3 IN hot_cache;
+---------+--------------------+----------+----------+
| Table | Op | Msg_type | Msg_text |
+---------+--------------------+----------+----------+
| test.t1 | assign_to_keycache | status | OK |
| test.t2 | assign_to_keycache | status | OK |
| test.t3 | assign_to_keycache | status | OK |
+---------+--------------------+----------+----------+
The syntax of CACHE INDEX enables
you to specify that only particular indexes from a table should
be assigned to the cache. However, the implementation assigns
all the table's indexes to the cache, so there is no reason to
specify anything other than the table name.
The key cache referred to in a CACHE
INDEX statement can be created by setting its size
with a parameter setting statement or in the server parameter
settings. For example:
SET GLOBAL keycache1.key_buffer_size=128*1024;
Key cache parameters are accessed as members of a structured system variable. See Section 5.1.9.5, “Structured System Variables”.
A key cache must exist before you assign indexes to it, or an error occurs:
mysql> CACHE INDEX t1 IN non_existent_cache;
ERROR 1284 (HY000): Unknown key cache 'non_existent_cache'
By default, table indexes are assigned to the main (default) key cache created at the server startup. When a key cache is destroyed, all indexes assigned to it are reassigned to the default key cache.
Index assignment affects the server globally: If one client assigns an index to a given cache, this cache is used for all queries involving the index, no matter which client issues the queries.
CACHE INDEX is supported for
partitioned MyISAM tables. You can assign one
or more indexes for one, several, or all partitions to a given
key cache. For example, you can do the following:
CREATE TABLE pt (c1 INT, c2 VARCHAR(50), INDEX i(c1))
ENGINE=MyISAM
PARTITION BY HASH(c1)
PARTITIONS 4;
SET GLOBAL kc_fast.key_buffer_size = 128 * 1024;
SET GLOBAL kc_slow.key_buffer_size = 128 * 1024;
CACHE INDEX pt PARTITION (p0) IN kc_fast;
CACHE INDEX pt PARTITION (p1, p3) IN kc_slow;
The previous set of statements performs the following actions:
Creates a partitioned table with 4 partitions; these
partitions are automatically named p0,
..., p3; this table has an index named
i on column c1.
Creates 2 key caches named kc_fast and
kc_slow.
Assigns the index for partition p0 to the
kc_fast key cache and the index for
partitions p1 and p3
to the kc_slow key cache; the index for
the remaining partition (p2) uses the
server's default key cache.
If you wish instead to assign the indexes for all partitions in
table pt to a single key cache named
kc_all, you can use either of the following
two statements:
CACHE INDEX pt PARTITION (ALL) IN kc_all; CACHE INDEX pt IN kc_all;
The two statements just shown are equivalent, and issuing either
one has exactly the same effect. In other words, if you wish to
assign indexes for all partitions of a partitioned table to the
same key cache, the PARTITION (ALL) clause is
optional.
When assigning indexes for multiple partitions to a key cache, the partitions need not be contiguous, and you need not list their names in any particular order. Indexes for any partitions not explicitly assigned to a key cache automatically use the server default key cache.
Index preloading is also supported for partitioned
MyISAM tables. For more information, see
Section 13.7.8.5, “LOAD INDEX INTO CACHE Statement”.
FLUSH [NO_WRITE_TO_BINLOG | LOCAL] {
flush_option [, flush_option] ...
| tables_option
}
flush_option: {
BINARY LOGS
| ENGINE LOGS
| ERROR LOGS
| GENERAL LOGS
| HOSTS
| LOGS
| PRIVILEGES
| OPTIMIZER_COSTS
| RELAY LOGS [FOR CHANNEL channel]
| SLOW LOGS
| STATUS
| USER_RESOURCES
}
tables_option: {
TABLES
| TABLES tbl_name [, tbl_name] ...
| TABLES WITH READ LOCK
| TABLES tbl_name [, tbl_name] ... WITH READ LOCK
| TABLES tbl_name [, tbl_name] ... FOR EXPORT
}
The FLUSH statement has several
variant forms that clear or reload various internal caches,
flush tables, or acquire locks. To execute
FLUSH, you must have the
RELOAD privilege. Specific flush
options might require additional privileges, as described later.
It is not possible to issue
FLUSH statements within stored
functions or triggers. However, you may use
FLUSH in stored procedures, so
long as these are not called from stored functions or
triggers. See Section 24.8, “Restrictions on Stored Programs”.
By default, the server writes
FLUSH statements to the binary
log so that they replicate to replication slaves. To suppress
logging, specify the optional
NO_WRITE_TO_BINLOG keyword or its alias
LOCAL.
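For example, either of the following equivalent statements (shown
for illustration) reloads the grant tables on the local server
without writing the FLUSH statement to the binary log:
FLUSH NO_WRITE_TO_BINLOG PRIVILEGES;
FLUSH LOCAL PRIVILEGES;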
FLUSH LOGS,
FLUSH BINARY LOGS,
FLUSH TABLES WITH READ LOCK
(with or without a table list), and
FLUSH TABLES tbl_name ... FOR EXPORT
are not written to the binary log in any case
because they would cause problems if replicated to a slave.
The FLUSH statement causes an
implicit commit. See Section 13.3.3, “Statements That Cause an Implicit Commit”.
The mysqladmin utility provides a
command-line interface to some flush operations, using commands
such as flush-hosts,
flush-logs,
flush-privileges,
flush-status, and
flush-tables. See
Section 4.5.2, “mysqladmin — Client for Administering a MySQL Server”.
Sending a SIGHUP or
SIGUSR1 signal to the server causes several
flush operations to occur that are similar to various forms of
the FLUSH statement. Signals can
be sent by root or the account that owns the
server process. They enable the applicable flush operations to
be performed without having to connect to the server (which for
these operations requires an account that has the
RELOAD privilege). See
Section 4.10, “Unix Signal Handling in MySQL”.
The RESET statement is similar to
FLUSH. See
Section 13.7.8.6, “RESET Statement”, for information about using the
RESET statement with replication.
The following list describes the permitted
FLUSH statement
flush_option values. For descriptions
of FLUSH
TABLES variants, see
FLUSH TABLES Syntax.
Closes and reopens any binary log file to which the server is writing. If binary logging is enabled, the sequence number of the binary log file is incremented by one relative to the previous file.
Closes and reopens any flushable logs for installed storage
engines. This causes InnoDB to flush its
logs to disk.
Closes and reopens any error log file to which the server is writing.
Closes and reopens any general query log file to which the server is writing.
Empties the host cache and the Performance Schema
host_cache table that exposes
the cache contents, and unblocks any blocked hosts. See
Section 5.1.12.3, “DNS Lookups and the Host Cache”.
Flush the host cache if some of your hosts change IP address
or if the error message Host
'host_name' is blocked
occurs for connections from legitimate hosts. (See
Section B.4.2.5, “Host 'host_name' is blocked”.) When more than
max_connect_errors errors
occur successively for a given host while connecting to the
occur successively for a given host while connecting to the
MySQL server, MySQL assumes that something is wrong and
blocks the host from further connection requests. Flushing
the host cache enables further connection attempts from the
host. The default value of
max_connect_errors is 100.
To avoid this error message, start the server with
max_connect_errors set to a
large value.
Closes and reopens any log file to which the server is writing. If binary logging is enabled, the sequence number of the binary log file is incremented by one relative to the previous file. If relay logging is enabled, the sequence number of the relay log file is incremented by one relative to the previous file.
FLUSH LOGS has no effect on
tables used for the general query log or for the slow query
log (see Section 5.4.1, “Selecting General Query Log and Slow Query Log Output Destinations”).
Rereads the cost model tables so that the optimizer starts using the current cost estimates stored in them. The server writes a warning to the error log for any unrecognized entries. (For information about these tables, see Section 8.9.5, “The Optimizer Cost Model”.) This operation affects only sessions that begin subsequent to the flush. Existing sessions continue to use the cost estimates that were current when they began.
Reloads the privileges from the grant tables in the
mysql system schema. As part of this
operation, the server reads the
global_grants table containing dynamic
privilege assignments and registers any unregistered
privileges found there.
If the --skip-grant-tables
option was specified at server startup to disable the MySQL
privilege system, FLUSH
PRIVILEGES provides a way to enable the privilege
system at runtime.
FLUSH PRIVILEGES resets
failed-login tracking (or enables it if the server was
started with
--skip-grant-tables) and
unlocks any temporarily locked accounts. See
Section 6.2.15, “Password Management”.
Frees memory cached by the server as a result of
GRANT,
CREATE USER,
CREATE SERVER, and
INSTALL PLUGIN statements.
This memory is not released by the corresponding
REVOKE,
DROP USER,
DROP SERVER, and
UNINSTALL PLUGIN statements,
so for a server that executes many instances of the
statements that cause caching, there will be an increase in
cached memory use unless it is freed with
FLUSH PRIVILEGES.
Clears the in-memory cache used by the
caching_sha2_password authentication
plugin. See
Cache Operation for SHA-2 Pluggable Authentication.
FLUSH RELAY LOGS [FOR CHANNEL channel]
Closes and reopens any relay log file to which the server is writing. If relay logging is enabled, the sequence number of the relay log file is incremented by one relative to the previous file.
The FOR CHANNEL channel clause enables
you to name which replication channel the statement applies
to. Execute FLUSH RELAY LOGS FOR CHANNEL
channel to flush the
relay log for a specific replication channel. If no channel
is named and no extra replication channels exist, the
statement applies to the default channel. If no channel is
named and multiple replication channels exist, the statement
applies to all replication channels. For more information,
see Section 17.2.3, “Replication Channels”.
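For example, assuming a replication channel named channel_1 has been configured (the channel name is illustrative), the following statement flushes only its relay log:
FLUSH RELAY LOGS FOR CHANNEL 'channel_1';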
Closes and reopens any slow query log file to which the server is writing.
This option adds the session status from all active sessions to the global status variables, resets the status of all active sessions, and resets account, host, and user status values aggregated from disconnected sessions. See Section 26.12.15, “Performance Schema Status Variable Tables”. This information may be of use when debugging a query. See Section 1.7, “How to Report Bugs or Problems”.
Resets all per-hour user resources to zero. This enables
clients that have reached their hourly connection, query, or
update limits to resume activity immediately.
FLUSH USER_RESOURCES does not
apply to the limit on maximum simultaneous connections that
is controlled by the
max_user_connections system
variable. See Section 6.2.20, “Setting Account Resource Limits”.
FLUSH TABLES flushes tables, and,
depending on the variant used, acquires locks. Any
TABLES variant used in a
FLUSH statement must be the only
option used. FLUSH
TABLE is a synonym for FLUSH
TABLES.
The descriptions here that indicate tables are flushed by
closing them apply differently for InnoDB,
which flushes table contents to disk but leaves them open.
This still permits table files to be copied while the tables
are open, as long as other activity does not modify them.
Closes all open tables, forces all tables in use to be closed, and flushes the prepared statement cache. For information about prepared statement caching, see Section 8.10.3, “Caching of Prepared Statements and Stored Programs”.
FLUSH TABLES is not permitted
when there is an active
LOCK TABLES ...
READ. To flush and lock tables, use
FLUSH TABLES
tbl_name ... WITH READ
LOCK instead.
FLUSH TABLES tbl_name [, tbl_name] ...
With a list of one or more comma-separated table names, this
statement is like FLUSH
TABLES with no names except that the server
flushes only the named tables. If a named table does not
exist, no error occurs.
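For example, to flush only two specific tables (the table names are illustrative):
FLUSH TABLES t1, t2;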
Closes all open tables and locks all tables for all
databases with a global read lock. This is a convenient
way to make backups if you have a file system such as Veritas
or ZFS that can take point-in-time snapshots. Use
UNLOCK
TABLES to release the lock.
FLUSH TABLES WITH READ LOCK
acquires a global read lock rather than table locks, so it
is not subject to the same behavior as
LOCK TABLES and
UNLOCK
TABLES with respect to table locking and implicit
commits:
UNLOCK
TABLES implicitly commits any active
transaction only if any tables currently have been
locked with LOCK TABLES.
The commit does not occur for
UNLOCK
TABLES following FLUSH
TABLES WITH READ LOCK because the latter
statement does not acquire table locks.
Beginning a transaction causes table locks acquired with
LOCK TABLES to be
released, as though you had executed
UNLOCK
TABLES. Beginning a transaction does not
release a global read lock acquired with
FLUSH TABLES WITH READ
LOCK.
FLUSH TABLES WITH READ LOCK
does not prevent the server from inserting rows into the log
tables (see Section 5.4.1, “Selecting General Query Log and Slow Query Log Output Destinations”).
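A minimal backup sequence using this statement might look as follows; the snapshot or copy step depends on your file system and is shown only as a comment:
FLUSH TABLES WITH READ LOCK;
# take the file system snapshot or copy the data files here
UNLOCK TABLES;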
FLUSH TABLES tbl_name [, tbl_name] ... WITH READ LOCK
This statement flushes and acquires read locks for the named
tables. The statement first acquires exclusive metadata
locks for the tables, so it waits for transactions that have
those tables open to complete. Then the statement flushes
the tables from the table cache, reopens the tables,
acquires table locks (like
LOCK TABLES ...
READ), and downgrades the metadata locks from
exclusive to shared. After the statement acquires locks and
downgrades the metadata locks, other sessions can read but
not modify the tables.
Because this statement acquires table locks, you must have
the LOCK TABLES privilege for
each table, in addition to the
RELOAD privilege that is
required to use any FLUSH
statement.
This statement applies only to existing base
(non-TEMPORARY) tables. If a name refers
to a base table, that table is used. If it refers to a
TEMPORARY table, it is ignored. If a name
applies to a view, an
ER_WRONG_OBJECT error
occurs. Otherwise, an
ER_NO_SUCH_TABLE error
occurs.
Use UNLOCK
TABLES to release the locks,
LOCK TABLES to release the
locks and acquire other locks, or
START
TRANSACTION to release the locks and begin a new
transaction.
This FLUSH TABLES variant
enables tables to be flushed and locked in a single
operation. It provides a workaround for the restriction that
FLUSH TABLES is not permitted
when there is an active
LOCK TABLES ...
READ.
This statement does not perform an implicit
UNLOCK
TABLES, so an error results if you use the
statement while there is any active
LOCK TABLES or use it a
second time without first releasing the locks acquired.
If a flushed table was opened with
HANDLER, the handler is
implicitly flushed and loses its position.
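For example, to flush and read-lock two tables in a single operation (the table names are illustrative):
FLUSH TABLES t1, t2 WITH READ LOCK;
# other sessions can read but not modify t1 and t2 at this point
UNLOCK TABLES;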
FLUSH TABLES tbl_name [, tbl_name] ... FOR EXPORT
This FLUSH TABLES variant
applies to InnoDB tables. It ensures that
changes to the named tables have been flushed to disk so
that binary table copies can be made while the server is
running.
The statement works like this:
It acquires shared metadata locks for the named tables. The statement blocks as long as other sessions have active transactions that have modified those tables or hold table locks for them. When the locks have been acquired, the statement blocks transactions that attempt to update the tables, while permitting read-only operations to continue.
It checks whether all storage engines for the tables
support FOR EXPORT. If any do not, an
ER_ILLEGAL_HA error
occurs and the statement fails.
The statement notifies the storage engine for each table to make the table ready for export. The storage engine must ensure that any pending changes are written to disk.
The statement puts the session in lock-tables mode so
that the metadata locks acquired earlier are not
released when the FOR EXPORT
statement completes.
The
FLUSH
TABLES ... FOR EXPORT statement requires that you
have the SELECT privilege for
each table. Because this statement acquires table locks, you
must also have the LOCK
TABLES privilege for each table, in addition to
the RELOAD privilege that is
required to use any FLUSH
statement.
This statement applies only to existing base
(non-TEMPORARY) tables. If a name refers
to a base table, that table is used. If it refers to a
TEMPORARY table, it is ignored. If a name
applies to a view, an
ER_WRONG_OBJECT error
occurs. Otherwise, an
ER_NO_SUCH_TABLE error
occurs.
InnoDB supports FOR
EXPORT for tables that have their own
.ibd
file (that is, tables created with the
innodb_file_per_table
setting enabled). InnoDB ensures when
notified by the FOR EXPORT statement that
any changes have been flushed to disk. This permits a binary
copy of table contents to be made while the FOR
EXPORT statement is in effect because the
.ibd file is transaction consistent and
can be copied while the server is running. FOR
EXPORT does not apply to InnoDB
system tablespace files, or to InnoDB
tables that have FULLTEXT indexes.
FLUSH
TABLES ... FOR EXPORT is supported for partitioned
InnoDB tables.
When notified by FOR EXPORT,
InnoDB writes to disk certain kinds of
data that is normally held in memory or in separate disk
buffers outside the tablespace files. For each table,
InnoDB also produces a file named
table_name.cfg in the same database
directory as the table. The .cfg file
contains metadata needed to reimport the tablespace files
later, into the same or a different server.
When the FOR EXPORT statement completes,
InnoDB will have flushed all
dirty pages to the
table data files. Any
change buffer
entries are merged prior to flushing. At this point, the
tables are locked and quiescent: The tables are in a
transactionally consistent state on disk and you can copy
the .ibd tablespace files along with
the corresponding .cfg files to get a
consistent snapshot of those tables.
For the procedure to reimport the copied table data into a MySQL instance, see Section 15.6.1.3, “Importing InnoDB Tables”.
After you are done with the tables, use
UNLOCK
TABLES to release the locks,
LOCK TABLES to release the
locks and acquire other locks, or
START
TRANSACTION to release the locks and begin a new
transaction.
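A typical export sequence for a single table might look like this (the table name is illustrative; the files to copy are named after the table and located in its database directory):
FLUSH TABLES t1 FOR EXPORT;
# copy t1.ibd and t1.cfg from the database directory while the locks are held
UNLOCK TABLES;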
While any of these statements is in effect within the
session, attempts to use
FLUSH
TABLES ... FOR EXPORT produce an error:
FLUSH TABLES ... WITH READ LOCK
FLUSH TABLES ... FOR EXPORT
LOCK TABLES ... READ
LOCK TABLES ... WRITE
While
FLUSH
TABLES ... FOR EXPORT is in effect within the
session, attempts to use any of these statements produce an
error:
FLUSH TABLES WITH READ LOCK
FLUSH TABLES ... WITH READ LOCK
FLUSH TABLES ... FOR EXPORT
KILL [CONNECTION | QUERY] processlist_id
Each connection to mysqld runs in a separate
thread. You can kill a thread with the
KILL processlist_id statement.
Thread processlist identifiers can be determined from the
ID column of the
INFORMATION_SCHEMA
PROCESSLIST table, the
Id column of SHOW
PROCESSLIST output, and the
PROCESSLIST_ID column of the Performance
Schema threads table. The value for
the current thread is returned by the
CONNECTION_ID() function.
KILL permits an optional
CONNECTION or QUERY
modifier:
KILL
CONNECTION is the same as
KILL with no modifier: It
terminates the connection associated with the given
processlist_id, after terminating
any statement the connection is executing.
KILL QUERY
terminates the statement the connection is currently
executing, but leaves the connection itself intact.
The ability to see which threads are available to be killed
depends on the PROCESS privilege:
Without PROCESS, you can see only your own threads.
With PROCESS, you can see all threads.
The ability to kill threads and statements depends on the
CONNECTION_ADMIN privilege and
the deprecated SUPER privilege:
Without CONNECTION_ADMIN or
SUPER, you can kill only your
own threads and statements.
With CONNECTION_ADMIN or
SUPER, you can kill all
threads and statements, except that to affect a thread or
statement that is executing with the
SYSTEM_USER privilege, your
own session must additionally have the
SYSTEM_USER privilege.
You can also use the mysqladmin processlist and mysqladmin kill commands to examine and kill threads.
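For example, to stop a long-running statement without dropping its connection (the processlist ID shown is illustrative):
SHOW PROCESSLIST;
KILL QUERY 27;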
When you use KILL, a
thread-specific kill flag is set for the thread. In most cases,
it might take some time for the thread to die because the kill
flag is checked only at specific intervals:
During SELECT operations, for
ORDER BY and GROUP BY
loops, the flag is checked after reading a block of rows. If
the kill flag is set, the statement is aborted.
ALTER TABLE operations that
make a table copy check the kill flag periodically for each
few copied rows read from the original table. If the kill
flag was set, the statement is aborted and the temporary
table is deleted.
The KILL statement returns
without waiting for confirmation, but the kill flag check
aborts the operation within a reasonably small amount of
time. Aborting the operation to perform any necessary
cleanup also takes some time.
During UPDATE or
DELETE operations, the kill
flag is checked after each block read and after each updated
or deleted row. If the kill flag is set, the statement is
aborted. If you are not using transactions, the changes are
not rolled back.
GET_LOCK() aborts and returns
NULL.
If the thread is in the table lock handler (state:
Locked), the table lock is quickly
aborted.
If the thread is waiting for free disk space in a write call, the write is aborted with a “disk full” error message.
EXPLAIN ANALYZE aborts and
prints the first row of output. This works in MySQL 8.0.20
and later.
Killing a REPAIR TABLE or
OPTIMIZE TABLE operation on a
MyISAM table results in a table that is
corrupted and unusable. Any reads or writes to such a table
fail until you optimize or repair it again (without
interruption).
LOAD INDEX INTO CACHE
  tbl_index_list [, tbl_index_list] ...

tbl_index_list:
  tbl_name
    [PARTITION (partition_list)]
    [{INDEX|KEY} (index_name [, index_name] ...)]
    [IGNORE LEAVES]

partition_list: {
    partition_name [, partition_name] ...
  | ALL
}
The LOAD INDEX INTO
CACHE statement preloads a table index into the key
cache to which it has been assigned by an explicit
CACHE INDEX statement, or into
the default key cache otherwise.
LOAD INDEX INTO
CACHE applies only to MyISAM
tables, including partitioned MyISAM tables.
In addition, indexes on partitioned tables can be preloaded for
one, several, or all partitions.
The IGNORE LEAVES modifier causes only blocks
for the nonleaf nodes of the index to be preloaded.
IGNORE LEAVES is also supported for
partitioned MyISAM tables.
The following statement preloads nodes (index blocks) of indexes
for the tables t1 and t2:
mysql> LOAD INDEX INTO CACHE t1, t2 IGNORE LEAVES;
+---------+--------------+----------+----------+
| Table | Op | Msg_type | Msg_text |
+---------+--------------+----------+----------+
| test.t1 | preload_keys | status | OK |
| test.t2 | preload_keys | status | OK |
+---------+--------------+----------+----------+
This statement preloads all index blocks from
t1. It preloads only blocks for the nonleaf
nodes from t2.
The syntax of LOAD
INDEX INTO CACHE enables you to specify that only
particular indexes from a table should be preloaded. However,
the implementation preloads all the table's indexes into the
cache, so there is no reason to specify anything other than the
table name.
It is possible to preload indexes on specific partitions of
partitioned MyISAM tables. For example, of
the following two statements, the first preloads indexes for
partition p0 of a partitioned table
pt, while the second preloads the indexes for
partitions p1 and p3 of
the same table:
LOAD INDEX INTO CACHE pt PARTITION (p0);
LOAD INDEX INTO CACHE pt PARTITION (p1, p3);
To preload the indexes for all partitions in table
pt, you can use either of the following two
statements:
LOAD INDEX INTO CACHE pt PARTITION (ALL);
LOAD INDEX INTO CACHE pt;
The two statements just shown are equivalent, and issuing either
one has exactly the same effect. In other words, if you wish to
preload indexes for all partitions of a partitioned table, the
PARTITION (ALL) clause is optional.
When preloading indexes for multiple partitions, the partitions need not be contiguous, and you need not list their names in any particular order.
LOAD INDEX INTO
CACHE ... IGNORE LEAVES fails unless all indexes in a
table have the same block size. To determine index block sizes
for a table, use myisamchk -dv and check the
Blocksize column.
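For example, the index block sizes for a MyISAM table can be inspected with a command like the following; the path is illustrative and depends on your data directory layout:
myisamchk -dv /var/lib/mysql/test/t1.MYI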
RESET reset_option [, reset_option] ...

reset_option: {
    MASTER
  | SLAVE
}
The RESET statement is used to
clear the state of various server operations. You must have the
RELOAD privilege to execute
RESET.
For information about the RESET
PERSIST statement that removes persisted global system
variables, see Section 13.7.8.7, “RESET PERSIST Statement”.
RESET acts as a stronger version
of the FLUSH statement. See
Section 13.7.8.3, “FLUSH Statement”.
The RESET statement causes an
implicit commit. See Section 13.3.3, “Statements That Cause an Implicit Commit”.
The following list describes the permitted
RESET statement
reset_option values:
RESET MASTER
Deletes all binary logs listed in the index file, resets the binary log index file to be empty, and creates a new binary log file.
RESET SLAVE
Makes the slave forget its replication position in the master binary logs. Also resets the relay log by deleting any existing relay log files and beginning a new one.
RESET PERSIST [[IF EXISTS] system_var_name]
RESET PERSIST removes persisted
global system variable settings from the
mysqld-auto.cnf option file in the data
directory. Removing a persisted system variable causes the
variable no longer to be initialized from
mysqld-auto.cnf at server startup. For more
information about persisting system variables and the
mysqld-auto.cnf file, see
Section 5.1.9.3, “Persisted System Variables”.
The privileges required for RESET
PERSIST depend on the type of system variable to be
removed:
For dynamic system variables, this statement requires the
SYSTEM_VARIABLES_ADMIN
privilege (or the deprecated
SUPER privilege).
For read-only system variables, this statement requires the
SYSTEM_VARIABLES_ADMIN and
PERSIST_RO_VARIABLES_ADMIN
privileges.
See Section 5.1.9.1, “System Variable Privileges”.
Depending on whether the variable name and IF
EXISTS clauses are present, the
RESET PERSIST statement has these
forms:
To remove all persisted variables from
mysqld-auto.cnf, use
RESET PERSIST without naming
any system variable:
RESET PERSIST;
You must have privileges for removing both dynamic and
read-only system variables if
mysqld-auto.cnf contains both kinds of
variables.
To remove a specific persisted variable from
mysqld-auto.cnf, name it in the
statement:
RESET PERSIST system_var_name;
This includes plugin system variables, even if the plugin is not currently installed. If the variable is not present in the file, an error occurs.
To remove a specific persisted variable from
mysqld-auto.cnf, but produce a warning
rather than an error if the variable is not present in the
file, add an IF EXISTS clause to the
previous syntax:
RESET PERSIST IF EXISTS system_var_name;
RESET PERSIST is not affected by
the value of the
persisted_globals_load system
variable.
RESET PERSIST affects the
contents of the Performance Schema
persisted_variables table because
the table contents correspond to the contents of the
mysqld-auto.cnf file. On the other hand,
because RESET PERSIST does not
change variable values, it has no effect on the contents of the
Performance Schema variables_info
table until the server is restarted.
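To see which settings are currently persisted, and therefore subject to removal by RESET PERSIST, you can query the Performance Schema table directly:
SELECT * FROM performance_schema.persisted_variables;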
For information about RESET
statement variants that clear the state of other server
operations, see Section 13.7.8.6, “RESET Statement”.
RESTART
This statement stops and restarts the MySQL server. It requires
the SHUTDOWN privilege.
One use for RESTART is when it is
not possible or convenient to gain command-line access to the
MySQL server on the server host to restart it. For example,
SET
PERSIST_ONLY can be used at runtime to make
configuration changes to system variables that can be set only
at server startup, but the server must still be restarted for
those changes to take effect. The
RESTART statement provides a way
to do so from within client sessions, without requiring
command-line access on the server host.
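A sketch of this workflow might look as follows; the variable and value shown are illustrative of a setting that can be changed only at startup:
SET PERSIST_ONLY innodb_log_file_size = 1073741824;
RESTART;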
After executing a RESTART
statement, the client can expect the current connection to be
lost. If auto-reconnect is enabled, the connection will be
reestablished after the server restarts. Otherwise, the
connection must be reestablished manually.
A successful RESTART operation
requires mysqld to be running in an
environment that has a monitoring process available to detect a
server shutdown performed for restart purposes:
These platforms provide the necessary monitoring support for the
RESTART statement:
Windows, when mysqld is started as a Windows service or standalone. (mysqld forks, and one process acts as a monitor to the other, which acts as the server.)
Unix and Unix-like systems that use systemd or mysqld_safe to manage mysqld.
To configure a monitoring environment such that
mysqld enables the
RESTART statement:
Set the MYSQLD_PARENT_PID environment
variable to the value of the process ID of the process that
starts mysqld, before starting
mysqld.
When mysqld performs a shutdown due to
use of the RESTART statement,
it returns exit code 16.
When the monitoring process detects an exit code of 16, it starts mysqld again. Otherwise, it exits.
Here is a minimal example as implemented in the bash shell:
#!/bin/bash
# Tell mysqld which process is its monitor, and which exit code requests a restart.
export MYSQLD_PARENT_PID=$$
export MYSQLD_RESTART_EXIT=16
while true ; do
  # Replace "mysqld options here" with the desired server options.
  bin/mysqld mysqld options here
  if [ $? -ne $MYSQLD_RESTART_EXIT ]; then
    break
  fi
done
On Windows, the forking used to implement
RESTART makes determining the
server process to attach to for debugging more difficult. To
alleviate this, starting the server with
--gdb suppresses forking, in
addition to its other actions done to set up a debugging
environment. In non-debug settings,
--no-monitor may be used for the
sole purpose of suppressing forking the monitor process. For a
server started with either --gdb
or --no-monitor, executing
RESTART causes the server to
simply exit without restarting.
The
Com_restart
status variable tracks the number of
RESTART statements. Because
status variables are initialized for each server startup and do
not persist across restarts, Com_restart
normally has a value of zero, but can be nonzero if
RESTART statements were executed
but failed.
SHUTDOWN
This statement stops the MySQL server. It requires the
SHUTDOWN privilege.
SHUTDOWN provides an SQL-level
interface to the same functionality available using the
mysqladmin shutdown command or the
mysql_shutdown() C API function.
The
Com_shutdown
status variable tracks the number of
SHUTDOWN statements. Because
status variables are initialized for each server startup and do
not persist across restarts, Com_shutdown
normally has a value of zero, but can be nonzero if
SHUTDOWN statements were executed
but failed.
Another way to stop the server is to send it a
SIGTERM signal, which can be done by
root or the account that owns the server
process. SIGTERM enables server shutdown to
be performed without having to connect to the server. See
Section 4.10, “Unix Signal Handling in MySQL”.
The DESCRIBE and
EXPLAIN statements are synonyms,
used either to obtain information about table structure or query
execution plans. For more information, see
Section 13.7.7.5, “SHOW COLUMNS Statement”, and Section 13.8.2, “EXPLAIN Statement”.
{EXPLAIN | DESCRIBE | DESC}
tbl_name [col_name | wild]
{EXPLAIN | DESCRIBE | DESC}
[explain_type]
{explainable_stmt | FOR CONNECTION connection_id}
{EXPLAIN | DESCRIBE | DESC} ANALYZE [FORMAT = TREE] select_statement
explain_type: {
FORMAT = format_name
}
format_name: {
TRADITIONAL
| JSON
| TREE
}
explainable_stmt: {
SELECT statement
| TABLE statement
| DELETE statement
| INSERT statement
| REPLACE statement
| UPDATE statement
}
The DESCRIBE and
EXPLAIN statements are synonyms. In
practice, the DESCRIBE keyword is
more often used to obtain information about table structure,
whereas EXPLAIN is used to obtain a
query execution plan (that is, an explanation of how MySQL would
execute a query).
The following discussion uses the
DESCRIBE and
EXPLAIN keywords in accordance with
those uses, but the MySQL parser treats them as completely
synonymous.
DESCRIBE provides information
about the columns in a table:
mysql> DESCRIBE City;
+------------+----------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+------------+----------+------+-----+---------+----------------+
| Id | int(11) | NO | PRI | NULL | auto_increment |
| Name | char(35) | NO | | | |
| Country | char(3) | NO | UNI | | |
| District | char(20) | YES | MUL | | |
| Population | int(11) | NO | | 0 | |
+------------+----------+------+-----+---------+----------------+
DESCRIBE is a shortcut for
SHOW COLUMNS. These statements
also display information for views. The description for
SHOW COLUMNS provides more
information about the output columns. See
Section 13.7.7.5, “SHOW COLUMNS Statement”.
By default, DESCRIBE displays
information about all columns in the table.
col_name, if given, is the name of a
column in the table. In this case, the statement displays
information only for the named column.
wild, if given, is a pattern string.
It can contain the SQL % and
_ wildcard characters. In this case, the
statement displays output only for the columns with names
matching the string. There is no need to enclose the string
within quotation marks unless it contains spaces or other
special characters.
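For example, using the City table shown earlier:
DESCRIBE City Name;      # information for the Name column only
DESCRIBE City 'N%';      # columns whose names begin with N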
The DESCRIBE statement is
provided for compatibility with Oracle.
The SHOW CREATE TABLE,
SHOW TABLE STATUS, and
SHOW INDEX statements also
provide information about tables. See Section 13.7.7, “SHOW Statements”.
The EXPLAIN statement provides
information about how MySQL executes statements:
EXPLAIN works with
SELECT,
DELETE,
INSERT,
REPLACE, and
UPDATE statements. In MySQL
8.0.19 and later, it also works with
TABLE statements.
When EXPLAIN is used with an
explainable statement, MySQL displays information from the
optimizer about the statement execution plan. That is, MySQL
explains how it would process the statement, including
information about how tables are joined and in which order.
For information about using
EXPLAIN to obtain execution
plan information, see Section 8.8.2, “EXPLAIN Output Format”.
When EXPLAIN is used with
FOR CONNECTION connection_id rather
than an explainable statement, it displays the execution
plan for the statement executing in the named connection.
See Section 8.8.4, “Obtaining Execution Plan Information for a Named Connection”.
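For example, from a second session you might inspect a statement that is already running in another connection (the connection ID shown is illustrative):
SHOW PROCESSLIST;
EXPLAIN FOR CONNECTION 32;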
For explainable statements,
EXPLAIN produces additional
execution plan information that can be displayed using
SHOW WARNINGS. See
Section 8.8.3, “Extended EXPLAIN Output Format”.
EXPLAIN is useful for
examining queries involving partitioned tables. See
Section 23.3.5, “Obtaining Information About Partitions”.
The FORMAT option can be used to select
the output format. TRADITIONAL presents
the output in tabular format. This is the default if no
FORMAT option is present.
JSON format displays the information in
JSON format. In MySQL 8.0.16 and later,
TREE provides tree-like output with more
precise descriptions of query handling than the
TRADITIONAL format; it is the only format
which shows hash join usage (see
Section 8.2.1.4, “Hash Join Optimization”) and is always used for
EXPLAIN ANALYZE.
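For example, the same query can be explained in each format (using the t3 table from the examples later in this section):
EXPLAIN FORMAT=TRADITIONAL SELECT * FROM t3 WHERE pk > 17;
EXPLAIN FORMAT=JSON SELECT * FROM t3 WHERE pk > 17;
EXPLAIN FORMAT=TREE SELECT * FROM t3 WHERE pk > 17;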
EXPLAIN requires the same
privileges required to execute the explained statement.
Additionally, EXPLAIN also
requires the SHOW VIEW privilege
for any explained view.
EXPLAIN ... FOR
CONNECTION also requires the
PROCESS privilege if the
specified connection belongs to a different user.
With the help of EXPLAIN, you can
see where you should add indexes to tables so that the statement
executes faster by using indexes to find rows. You can also use
EXPLAIN to check whether the
optimizer joins the tables in an optimal order. To give a hint
to the optimizer to use a join order corresponding to the order
in which the tables are named in a
SELECT statement, begin the
statement with SELECT STRAIGHT_JOIN rather
than just SELECT. (See
Section 13.2.10, “SELECT Statement”.)
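For example, to check how the plan changes when the join order is forced to match the order in which the tables are named (using the t1 and t2 tables from the examples below):
EXPLAIN SELECT STRAIGHT_JOIN * FROM t1 JOIN t2 ON (t1.c1 = t2.c2);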
The optimizer trace may sometimes provide information
complementary to that of EXPLAIN.
However, the optimizer trace format and content are subject to
change between versions. For details, see
MySQL
Internals: Tracing the Optimizer.
If you have a problem with indexes not being used when you
believe that they should be, run ANALYZE
TABLE to update table statistics, such as cardinality
of keys, that can affect the choices the optimizer makes. See
Section 13.7.3.1, “ANALYZE TABLE Statement”.
MySQL Workbench has a Visual Explain capability that provides a
visual representation of
EXPLAIN output. See
Tutorial: Using Explain to Improve Query Performance.
MySQL 8.0.18 introduces EXPLAIN ANALYZE,
which runs a statement and produces
EXPLAIN output along with timing
and additional, iterator-based, information about how the
optimizer's expectations matched the actual execution. For
each iterator, the following information is provided:
Estimated execution cost
(Some iterators are not accounted for by the cost model, and so are not included in the estimate.)
Estimated number of returned rows
Time to return first row
Time to return all rows (actual cost), in milliseconds
(When there are multiple loops, this figure shows the average time per loop.)
Number of rows returned by the iterator
Number of loops
The query execution information is displayed using the
TREE output format, in which nodes represent
iterators. EXPLAIN ANALYZE always uses the
TREE output format. In MySQL 8.0.21 and
later, this can optionally be specified explicitly using
FORMAT=TREE; formats other than
TREE remain unsupported.
EXPLAIN ANALYZE can be used with
SELECT statements, as well as
with multi-table UPDATE and
DELETE statements. Beginning with
MySQL 8.0.19, it can also be used with
TABLE statements.
Beginning with MySQL 8.0.20, you can terminate this statement
using KILL QUERY
or CTRL-C.
EXPLAIN ANALYZE cannot be used with
FOR CONNECTION.
Example output:
mysql> EXPLAIN ANALYZE SELECT * FROM t1 JOIN t2 ON (t1.c1 = t2.c2)\G
*************************** 1. row ***************************
EXPLAIN: -> Inner hash join (t2.c2 = t1.c1)  (cost=4.70 rows=6)
             (actual time=0.032..0.035 rows=6 loops=1)
    -> Table scan on t2  (cost=0.06 rows=6)
           (actual time=0.003..0.005 rows=6 loops=1)
    -> Hash
        -> Table scan on t1  (cost=0.85 rows=6)
               (actual time=0.018..0.022 rows=6 loops=1)
mysql> EXPLAIN ANALYZE SELECT * FROM t3 WHERE i > 8\G
*************************** 1. row ***************************
EXPLAIN: -> Filter: (t3.i > 8)  (cost=1.75 rows=5)
             (actual time=0.019..0.021 rows=6 loops=1)
    -> Table scan on t3  (cost=1.75 rows=15)
           (actual time=0.017..0.019 rows=15 loops=1)
mysql> EXPLAIN ANALYZE SELECT * FROM t3 WHERE pk > 17\G
*************************** 1. row ***************************
EXPLAIN: -> Filter: (t3.pk > 17)  (cost=1.26 rows=5)
             (actual time=0.013..0.016 rows=5 loops=1)
    -> Index range scan on t3 using PRIMARY  (cost=1.26 rows=5)
           (actual time=0.012..0.014 rows=5 loops=1)
The tables used in the example output were created by the statements shown here:
CREATE TABLE t1 (
c1 INTEGER DEFAULT NULL,
c2 INTEGER DEFAULT NULL
);
CREATE TABLE t2 (
c1 INTEGER DEFAULT NULL,
c2 INTEGER DEFAULT NULL
);
CREATE TABLE t3 (
pk INTEGER NOT NULL PRIMARY KEY,
i INTEGER DEFAULT NULL
);
Values shown for actual time in the output of
this statement are expressed in milliseconds.
HELP 'search_string'
The HELP statement returns online
information from the MySQL Reference Manual. Its proper operation
requires that the help tables in the mysql
database be initialized with help topic information (see
Section 5.1.16, “Server-Side Help Support”).
The HELP statement searches the
help tables for the given search string and displays the result of
the search. The search string is not case-sensitive.
The search string can contain the wildcard characters
% and _. These have the same
meaning as for pattern-matching operations performed with the
LIKE operator. For example,
HELP 'rep%' returns a list of topics that begin
with rep.
The HELP statement understands several types of search strings:
At the most general level, use contents to
retrieve a list of the top-level help categories:
HELP 'contents'
For a list of topics in a given help category, such as
Data Types, use the category name:
HELP 'data types'
For help on a specific help topic, such as the
ASCII() function or the
CREATE TABLE statement, use the
associated keyword or keywords:
HELP 'ascii'
HELP 'create table'
In other words, the search string matches a category, many topics,
or a single topic. You cannot necessarily tell in advance whether
a given search string will return a list of items or the help
information for a single help topic. However, you can tell what
kind of response HELP returned by
examining the number of rows and columns in the result set.
The following descriptions indicate the forms that the result set
can take. Output for the example statements is shown using the
familiar “tabular” or “vertical” format
that you see when using the mysql client, but
note that mysql itself reformats
HELP result sets in a different
way.
Empty result set
No match could be found for the search string.
Result set containing a single row with three columns
This means that the search string yielded a hit for the help topic. The result has three columns:
name: The topic name.
description: Descriptive help text for
the topic.
example: Usage example or examples.
This column might be blank.
Example: HELP 'replace'
Yields:
name: REPLACE
description: Syntax:
REPLACE(str,from_str,to_str)
Returns the string str with all occurrences of the string from_str
replaced by the string to_str. REPLACE() performs a case-sensitive
match when searching for from_str.
example: mysql> SELECT REPLACE('www.mysql.com', 'w', 'Ww');
        -> 'WwWwWw.mysql.com'
Result set containing multiple rows with two columns
This means that the search string matched many help topics. The result set indicates the help topic names:
name: The help topic name.
is_it_category: Y if
the name represents a help category, N
if it does not. If it does not, the
name value when specified as the
argument to the HELP
statement should yield a single-row result set containing
a description for the named item.
Example: HELP 'status'
Yields:
+-----------------------+----------------+
| name                  | is_it_category |
+-----------------------+----------------+
| SHOW                  | N              |
| SHOW ENGINE           | N              |
| SHOW MASTER STATUS    | N              |
| SHOW PROCEDURE STATUS | N              |
| SHOW SLAVE STATUS     | N              |
| SHOW STATUS           | N              |
| SHOW TABLE STATUS     | N              |
+-----------------------+----------------+
Result set containing multiple rows with three columns
This means the search string matches a category. The result set contains category entries:
source_category_name: The help category
name.
name: The category or topic name
is_it_category: Y if
the name represents a help category, N
if it does not. If it does not, the
name value when specified as the
argument to the HELP
statement should yield a single-row result set containing
a description for the named item.
Example: HELP 'functions'
Yields:
+----------------------+-------------------------+----------------+
| source_category_name | name                    | is_it_category |
+----------------------+-------------------------+----------------+
| Functions            | CREATE FUNCTION         | N              |
| Functions            | DROP FUNCTION           | N              |
| Functions            | Bit Functions           | Y              |
| Functions            | Comparison operators    | Y              |
| Functions            | Control flow functions  | Y              |
| Functions            | Date and Time Functions | Y              |
| Functions            | Encryption Functions    | Y              |
| Functions            | Information Functions   | Y              |
| Functions            | Logical operators       | Y              |
| Functions            | Miscellaneous Functions | Y              |
| Functions            | Numeric Functions       | Y              |
| Functions            | String Functions        | Y              |
+----------------------+-------------------------+----------------+
USE db_name
The USE statement tells MySQL to
use the named database as the default (current) database for
subsequent statements. This statement requires some privilege for
the database or some object within it.
The named database remains the default until the end of the
session or another USE statement is
issued:
USE db1;
SELECT COUNT(*) FROM mytable;   # selects from db1.mytable
USE db2;
SELECT COUNT(*) FROM mytable;   # selects from db2.mytable
The database name must be specified on a single line. Newlines in database names are not supported.
Making a particular database the default by means of the
USE statement does not preclude
accessing tables in other databases. The following example
accesses the author table from the
db1 database and the editor
table from the db2 database:
USE db1;
SELECT author_name, editor_name FROM author, db2.editor
  WHERE author.editor_id = db2.editor.editor_id;