
Object Management #

SphereEx-DBPlusEngine is managed according to logical databases and object hierarchy.

Logical Database #

Concept of Logical Database #

In SphereEx-DBPlusEngine, a logical database is responsible for managing a collection of objects. This unit closely resembles the database/schema concept found in traditional databases.

Operations on Logical Databases #

  • Create
mysql> create database mydb;
  • Delete
mysql> drop database mydb;

Note: Deleting a logical database also deletes all objects under it, and they cannot be recovered.

  • Query
mysql> show databases;
  • Usage
mysql> use mydb;
  • Other Constraints
    • SphereEx-DBPlusEngine does not support referencing objects from other databases, such as using “select * from db2.t1”.
    • You cannot modify the name of a logical database once it has been created.
    • SphereEx-DBPlusEngine does not support setting the character set, sorting rules, read-only status, or encryption status of a logical database.

Post-step: Adding data sources #

After creating a logical database, you cannot manage tables and other objects immediately; first you must complete a crucial step: adding data sources. A data source can be understood as a container in which subsequent objects reside. For data source operations, refer to the RESOURCE section of DistSQL.

  • Operations example
  REGISTER STORAGE UNIT ds_3307 (
      HOST="127.0.0.1",
      PORT=3307,
      DB="test",
      USER="testuser",
      PASSWORD="testpwd"
  );
  • Usage constraints

    SphereEx-DBPlusEngine only allows you to add homogeneous data sources to a logical database. In other words, the data sources you add must be database products that use the same communication protocol.
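    For example, registering a second MySQL storage unit alongside ds_3307 keeps the logical database homogeneous. This is a sketch reusing the connection parameters from the example above; the host and port values are illustrative:

```sql
-- Both storage units speak the MySQL protocol, satisfying the
-- homogeneous-data-source constraint for a single logical database.
REGISTER STORAGE UNIT ds_3308 (
    HOST="127.0.0.1",
    PORT=3308,
    DB="test",
    USER="testuser",
    PASSWORD="testpwd"
);
```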

Object: Table #

Tables are fundamental to storing and managing data in SphereEx-DBPlusEngine. To provide external standard object functionality, the engine encapsulates the original data tables within the underlying data source. Additionally, it extends various table types to accommodate different scenarios.

Table Concept #

| Name | Visibility | Meaning |
|------|------------|---------|
| Logic Table | Visible | The encapsulation of the underlying raw tables in SphereEx-DBPlusEngine. It is the object users operate on and can be used as if it were a standard database table. |
| Physical Table | Invisible | A real physical table of the underlying database, encapsulated by SphereEx-DBPlusEngine. Such tables are not directly accessible to users. |
| Sharding Table | Visible | A logical table representing a set of horizontally split physical tables with identical structure in a data sharding scenario. |
| Broadcast Table | Visible | A logical table representing a set of physical tables with identical structure in a data sharding scenario; the tables in all shards have exactly the same structure and data. |
| Single Table | Visible | A logical table representing a single physical table in a data sharding scenario; its counterpart is stored in one data source. |
| Encrypt Table | Visible | A logical table representing a physical table in a data encryption scenario. |
| Archive Table | Visible | A logical table whose life cycle is managed by a time column; partitions that meet the configured conditions can be deleted or archived. MySQL only. |

Table operations #

Table operations fall into two scenarios: operations on logical tables (sharding tables, encrypted tables, etc.) and operations on physical tables. SphereEx-DBPlusEngine handles the latter in the same way the data source would. When a physical table is associated with multiple data sources, you cannot specify which data source an operation targets; it is recommended to create and modify the table in the data source and refresh the metadata before performing other operations in the logical database. The following mainly covers the first scenario.

SphereEx-DBPlusEngine’s table creation involves three steps.

  • Step 1: Add data sources. Refer to the steps above.
  • Step 2: Create rules. Create the rules corresponding to the required functionality. The following example illustrates sharding; for other types, refer to the Plugin Guide.
sql> CREATE SHARDING TABLE RULE t_order(
  RESOURCES(ds_3307,ds_3308),
  SHARDING_COLUMN=order_id,
  TYPE(NAME=hash_mod,PROPERTIES("sharding-count"=4)),
  KEY_GENERATE_STRATEGY(
    COLUMN=order_id,
    TYPE(NAME=snowflake,PROPERTIES("worker-id"=123))
  )
);
  • Step 3: Create the table
sql> CREATE TABLE t_order (
  order_id int NOT NULL,
  user_id int NOT NULL,
  status varchar(50) DEFAULT NULL,
  PRIMARY KEY (order_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
  • Delete

    SphereEx-DBPlusEngine performs table deletion in two steps.

    • Drop table
sql> DROP TABLE IF EXISTS t_order;
  • Drop rule
sql> DROP SHARDING TABLE RULE t_order;

Note that SphereEx-DBPlusEngine does not currently provide cascading deletion, so deleting only the table leaves an “orphan” rule behind, which may cause errors when later creating objects with the same name. It is recommended to delete through the Console, or to periodically check for invalid leftover objects with the “show sharding table rules” command.

  • Modify

    Table modification in SphereEx-DBPlusEngine falls into several cases:

    • Rule modifications: changing a rule changes how existing data is stored. SphereEx-DBPlusEngine performs this together with its migration capability, which involves data migration. See Data Migration for details.
    • Physical table modifications: it is recommended to modify physical tables through the logical table, via the command line or a graphical tool; the corresponding operation is then issued to the physical tables. For statements that SphereEx-DBPlusEngine does not support, you can modify the physical table directly and then refresh the metadata.
  • Query

    • Data query

      Logical tables created in SphereEx-DBPlusEngine can be queried in the normal DQL way.

    • Structure query

      Viewing metadata with DESC or database dialects (for example, SHOW CREATE TABLE in MySQL) is supported. However, this only shows the table structure; details such as the sharding configuration are not visible this way.
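      For example, against the t_order table created earlier (a sketch; the output depends on your schema):

```sql
-- Both statements run against the logical table; they show the
-- column definitions but not the sharding configuration behind it.
DESC t_order;
SHOW CREATE TABLE t_order;
```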

Other Issues #

  • Physical Table Distribution

The storage of logical tables is composed of several physical tables. There is a mapping relationship between logical and physical tables, which can be divided into two forms: uniform distribution and custom distribution. The former means that the physical tables present a uniform distribution within each data source, e.g.

db0
  ├── t_order0
  └── t_order1
db1
  ├── t_order0
  └── t_order1

The latter allows a specific distribution rule to be configured by means of expressions, e.g.

db0
  ├── t_order0
  └── t_order1
db1
  ├── t_order2
  ├── t_order3
  └── t_order4
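
A custom distribution like the one above would typically be declared with an inline expression when the sharding rule is created. A sketch follows; the DATANODES clause and expression syntax here are assumptions, so consult the DistSQL reference for the exact form:

```sql
-- Hypothetical sketch: db0 holds t_order0..t_order1,
-- db1 holds t_order2..t_order4.
CREATE SHARDING TABLE RULE t_order (
    DATANODES("db0.t_order${0..1}, db1.t_order${2..4}")
);
```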
  • Physical Table Discovery Mechanism

SphereEx-DBPlusEngine can detect and directly use the tables that already exist in the mounted data source. Currently the detection mechanism is manual: metadata is refreshed when SphereEx-DBPlusEngine starts or when the REFRESH TABLE METADATA statement is executed. After a physical table's structure changes, it is recommended to refresh the metadata with this statement.
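For example (a sketch of the refresh statement referenced above; see the DistSQL reference for the full syntax):

```sql
-- Refresh metadata for all tables in the current logical database
REFRESH TABLE METADATA;

-- Refresh metadata for a single table
REFRESH TABLE METADATA t_order;
```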

  • Same Name Table Override Problem

When a logical table has the same name as a physical table, SphereEx-DBPlusEngine gives priority to the logical table; the physical table is then not recognized or usable through SphereEx-DBPlusEngine.

  • Broadcast Table Data Consistency

Broadcast tables are globally distributed in the SphereEx-DBPlusEngine cluster, meaning they exist in all data sources and have consistent data. By default, SphereEx-DBPlusEngine does not guarantee the consistency of broadcast table data. If users have such requirements, they can ensure it themselves by starting an XA transaction.

mysql> SHOW BROADCAST TABLE RULES;
Empty set (0.16 sec)

mysql> CREATE BROADCAST TABLE RULE t_address;
Query OK, 0 rows affected (0.05 sec)

mysql> SHOW BROADCAST TABLE RULES;
+-----------------+
| broadcast_table |
+-----------------+
| t_address       |
+-----------------+
1 row in set (0.03 sec)

mysql> COUNT BROADCAST RULE;
+-----------------+-------------+-------+
| rule_name       | database    | count |
+-----------------+-------------+-------+
| broadcast_table | sharding_db | 1     |
+-----------------+-------------+-------+
1 row in set (0.02 sec)

mysql> CREATE TABLE IF NOT EXISTS `t_address` (
    ->   `address_id` bigint(20) NOT NULL,
    ->   `address_name` varchar(100) NOT NULL,
    ->   `status` varchar(50) DEFAULT NULL,
    ->   PRIMARY KEY (`address_id`)
    -> ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Query OK, 0 rows affected (0.57 sec)

mysql> INSERT INTO t_address (address_id, address_name, status) VALUES 
    -> (1,'beijing','OK'),(2,'chengdu','OK');
Query OK, 2 rows affected (0.18 sec)

mysql> select * from t_address;
+------------+--------------+--------+
| address_id | address_name | status |
+------------+--------------+--------+
|          1 | beijing      | OK     |
|          2 | chengdu      | OK     |
+------------+--------------+--------+
2 rows in set (0.21 sec)
  • Archive Table

Archive tables currently support MySQL only; the underlying implementation depends on MySQL's partitioning feature. At present only periodic deletion of expired data is supported, not archiving.

mysql> SHOW ARCHIVE RULES;
Empty set (0.36 sec)

mysql> CREATE ARCHIVE RULE t_order (
    ARCHIVE_STRATEGY(ARCHIVE_COLUMN=createtime, ARCHIVE_ALGORITHM(TYPE(NAME="SphereEx:TIME_PARTITION", PROPERTIES("interval-unit"="MONTHS", "interval-amount"=1, "retain-amount"=12, "pre-allocate-amount"=2)))),
    SCHEDULER_ALGORITHM(TYPE(NAME="SphereEx:TIMED_TASK", PROPERTIES("schedule-cron-expr"="0 15 10 ? * *", "time-zone"="+08:00")))
);

mysql> SHOW ARCHIVE TABLE RULE t_order;
+---------+----------------+-------------------------+-------------------------------------------------------------------------------+------------------------+-------------------------+--------------------------+---------------------------------------------------+
| table   | archive_column | archive_algorithm_type  | archive_algorithm_props                                                       | storage_algorithm_type | storage_algorithm_props | scheduler_algorithm_type | scheduler_algorithm_props                         |
+---------+----------------+-------------------------+-------------------------------------------------------------------------------+------------------------+-------------------------+--------------------------+---------------------------------------------------+
| t_order | createtime     | SphereEx:TIME_PARTITION | interval-amount=1,pre-allocate-amount=2,interval-unit=MONTHS,retain-amount=12 |                        |                         | SphereEx:TIMED_TASK      | schedule-cron-expr=0 15 10 ? * *,time-zone=+08:00 |
+---------+----------------+-------------------------+-------------------------------------------------------------------------------+------------------------+-------------------------+--------------------------+---------------------------------------------------+

mysql> CREATE TABLE `t_order` (
  `order_id` bigint NOT NULL,
  `user_id` int NOT NULL,
  `status` varchar(50) NOT NULL,
  `merchant_id` int DEFAULT NULL,
  `remark` varchar(50) NOT NULL,
  `createtime` datetime NOT NULL,
  PRIMARY KEY (`order_id`,`createtime`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
/*!50500 PARTITION BY RANGE  COLUMNS(createtime)
(PARTITION p20230122 VALUES LESS THAN ('2023-01-25') ENGINE = InnoDB,
 PARTITION p20230125 VALUES LESS THAN ('2023-01-28') ENGINE = InnoDB,
 PARTITION p20230128 VALUES LESS THAN ('2023-01-31') ENGINE = InnoDB,
 PARTITION p20230131 VALUES LESS THAN ('2023-02-03') ENGINE = InnoDB,
 PARTITION p20230203 VALUES LESS THAN ('2023-02-06') ENGINE = InnoDB,
 PARTITION p20230206 VALUES LESS THAN ('2023-02-09') ENGINE = InnoDB,
 PARTITION p20230209 VALUES LESS THAN ('2023-02-12') ENGINE = InnoDB,
 PARTITION p20230212 VALUES LESS THAN ('2023-02-15') ENGINE = InnoDB,
 PARTITION p20230215 VALUES LESS THAN ('2023-02-18') ENGINE = InnoDB,
 PARTITION p20230218 VALUES LESS THAN ('2023-02-21') ENGINE = InnoDB,
 PARTITION p20230221 VALUES LESS THAN ('2023-02-24') ENGINE = InnoDB,
 PARTITION p20230224 VALUES LESS THAN ('2023-02-27') ENGINE = InnoDB,
 PARTITION p20230227 VALUES LESS THAN ('2023-03-02') ENGINE = InnoDB,
 PARTITION p20230302 VALUES LESS THAN ('2023-03-05') ENGINE = InnoDB,
 PARTITION p20230305 VALUES LESS THAN ('2023-03-08') ENGINE = InnoDB,
 PARTITION p20230308 VALUES LESS THAN ('2023-03-11') ENGINE = InnoDB) */;

Object: Fields #

SphereEx-DBPlusEngine provides standard object capabilities by encapsulating the raw data of the underlying data source. Fields within tables are encapsulated in the same way for external services. From the user's perspective this is almost transparent: the fields can be treated exactly like ordinary database columns.

Field Types #

SphereEx-DBPlusEngine supports all field types of the underlying data source; see the data type reference for details.

Special Scenario: Data Encryption #

In data encryption scenarios, SphereEx-DBPlusEngine applies some special handling to fields. In terms of implementation, it masks the differences of the underlying encrypted fields by introducing logical columns. These can be understood simply as a mapping layer that resolves to plaintext or ciphertext fields as needed; depending on the user's usage requirements, SphereEx-DBPlusEngine performs the corresponding conversion. Logical columns are added and removed through encryption rule operations, and users cannot modify them manually.
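As a sketch, an encryption rule introducing such a logical column might be declared as follows; the exact clause names and algorithm properties are assumptions, so consult the Plugin Guide for your version:

```sql
-- Hypothetical sketch: the logical column `pwd` maps to the
-- ciphertext column `pwd_cipher`; users only ever reference `pwd`.
CREATE ENCRYPT RULE t_user (
    COLUMNS(
        (NAME=pwd, CIPHER=pwd_cipher,
         ENCRYPT_ALGORITHM(TYPE(NAME='AES', PROPERTIES('aes-key-value'='123456abc'))))
    )
);
```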

Object: Constraints #

SphereEx-DBPlusEngine relies on the native database for implementing underlying data source constraints. In other words, the constraints supported by the database are still valid and enforced by SphereEx-DBPlusEngine. However, in data sharding scenarios, where data is distributed across multiple nodes, certain constraints may be limited due to data partitioning across shards. For example, referential integrity constraints (foreign key constraints) may not be supported if the referencing and referenced tables are in different shards. Similarly, unique constraints may be limited to a shard, meaning that the same value can exist in another shard without violating the constraint. To ensure data consistency across shards, SphereEx-DBPlusEngine offers additional mechanisms, such as distributed transactions and consistency checks, which can be used to supplement or replace native database constraints.

Supporting Cases #

| Constraint | Meaning | Support (sharding table) | Support (non-sharding table) | Note |
|-----------|---------|--------------------------|------------------------------|------|
| PK | Primary Key | Partially supported | Supported | Primary key constraints containing the sharding key are supported |
| FK | Foreign Key | Unsupported | Supported | |
| UK | Unique Key | Partially supported | Supported | Unique constraints containing the sharding key are supported |
| NOT NULL | Not Null | Supported | Supported | |
| CHECK | Check | Partially supported | Supported | Check logic that does not require cross-shard validation is supported |

Constraint Situation #

Some constraint-related data dictionaries are not supported at the moment, for example, INFORMATION_SCHEMA.KEY_COLUMN_USAGE under MySQL.

Object: Indexes #

SphereEx-DBPlusEngine itself does not handle much in terms of underlying data source indexes, and the indexes supported by the native database are still valid. However, in data sharding scenarios, where data is distributed across multiple nodes, some indexing capabilities are limited.

Restricted functions #

  • Unique indexes across shards are not supported in sharding scenarios.
  • Secondary (global) indexes are not supported in sharding scenarios.

Object: Views #

SphereEx-DBPlusEngine itself does not do much to the view aspect of the underlying data source, and the views supported by the native database are still valid. However, in data sharding scenarios, where data is distributed across multiple nodes, some views are limited in functionality.

Object: Sequences and Auto-Increment Types #

SphereEx-DBPlusEngine itself does not handle the underlying data source's sequences or auto-increment columns, and the native database support remains valid. Data sharding is different, however: constrained by the distributed environment, these features are difficult to support the way a standalone database does. SphereEx-DBPlusEngine instead supports multiple distributed ID schemes through a mechanism called the “key generator”.

Key Generator Usage #

  • Related syntax

Refer to DistSQL. Besides being created in the above way, a key generator can also be used directly in a sharding rule definition.

  • Generation Example

The following example illustrates the use of a key generator.

sql> CREATE SHARDING TABLE RULE t_order(
    ->   RESOURCES(ds_3307,ds_3308),
    ->   SHARDING_COLUMN=order_id,TYPE(NAME=hash_mod,PROPERTIES("sharding-count"=4)),
    ->   KEY_GENERATE_STRATEGY(COLUMN=order_id,TYPE(NAME=uuid,PROPERTIES("worker-id"=123)))
    -> );
Query OK, 0 rows affected (0.03 sec)

sql> CREATE TABLE t_order (
    ->   order_id varchar(50) NOT NULL,
    ->   user_id int NOT NULL,
    ->   status varchar(50) DEFAULT NULL,
    ->   PRIMARY KEY (order_id)
    -> ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
Query OK, 0 rows affected (0.12 sec)

sql> INSERT INTO t_order(user_id, status) VALUES
    -> (1,  'SUCCESS'),
    -> (2,  'FAIL'),
    -> (3,  'FAIL'),
    -> (4,  'FAIL'),
    -> (5,  'FAIL');
Query OK, 5 rows affected (0.01 sec)

sql> select * from t_order;
+----------------------------------+---------+---------+
| order_id                         | user_id | status  |
+----------------------------------+---------+---------+
| 48e10218c7414bdbaf640aea2c079d32 |      12 | FAIL    |
| adcb6343fb43424baed8f297832d6897 |       9 | FAIL    |
| d01fe0d8f99744949f6f734803a6fbad |       1 | SUCCESS |
| 0a52a963254f4c889924fcef27afa497 |      16 | FAIL    |
| 0bee54f2ee8a4af8bb09250f01b2565b |      15 | FAIL    |
+----------------------------------+---------+---------+

Defining and using a key generator consumes SphereEx-DBPlusEngine resources; evaluation and testing show a performance loss of about 60%–70%, so pay particular attention to high-concurrency, insert-intensive scenarios.

Built-in Distributed ID #

UUID #

SphereEx-DBPlusEngine has a built-in UUID algorithm that generates a 36-character random string.

  • Differences from MySQL's native UUID()
    • The field types differ. A value that native MySQL would store in a BINARY column cannot be inserted through SphereEx-DBPlusEngine; the column must be defined as VARCHAR instead. Users migrating an existing system should note the modification cost (the data type must be changed).
    • Unbalanced data distribution. Under a HASH distribution policy, UUID values are distributed unevenly.

NanoID #

SphereEx-DBPlusEngine has a built-in NanoID algorithm, which is smaller, more secure and faster than UUID.
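As with the other algorithms, NanoID would be selected through a key generate strategy. A sketch follows; the algorithm name nanoid is an assumption, so check the Plugin Guide for the exact spelling:

```sql
CREATE SHARDING TABLE RULE t_order (
    RESOURCES(ds_3307, ds_3308),
    SHARDING_COLUMN=order_id,
    TYPE(NAME=hash_mod, PROPERTIES("sharding-count"=4)),
    -- Assumed algorithm name; NanoID produces short string IDs,
    -- so order_id should be a VARCHAR column.
    KEY_GENERATE_STRATEGY(COLUMN=order_id, TYPE(NAME=nanoid))
);
```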

Snowflake #

SphereEx-DBPlusEngine has a built-in Snowflake algorithm, which is smaller and more orderly than UUID.

  • Principle

    Within a single process, uniqueness is first guaranteed by the time bits; if two IDs share the same time, the sequence bits distinguish them. Because the time bits increase monotonically and servers are roughly time-synchronized, the generated primary keys can be considered generally ordered in a distributed environment, which makes inserts into indexed fields efficient. The primary key of MySQL's InnoDB storage engine is one example.

(Figure: Snowflake principle)

  • Format

The binary representation of the primary key generated using the snowflake algorithm consists of 4 parts, listed from high to low: 1bit sign bit, 41bit timestamp bit, 10bit work process bit, and 12bit sequence number bit.

- Sign bit (1 bit)

The reserved sign bit, always zero.

- Timestamp bits (41 bits)

A 41-bit timestamp can represent 2 to the 41st power milliseconds. A year contains 365 * 24 * 60 * 60 * 1000 milliseconds, so the range is Math.pow(2, 41) / (365 * 24 * 60 * 60 * 1000L) ≈ 69.73 years. The time epoch of SphereEx-DBPlusEngine's snowflake algorithm starts at midnight on November 1, 2016, so it can be used until 2086, which should meet the requirements of the vast majority of systems.
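The arithmetic can be checked directly with a SQL expression (a throwaway sanity check, not part of any SphereEx-DBPlusEngine API):

```sql
-- 2^41 milliseconds expressed in years
SELECT POW(2, 41) / (365 * 24 * 60 * 60 * 1000) AS years;
-- ≈ 69.73
```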

- Worker process bits (10 bits)

This flag is unique within a Java process. In a distributed deployment, each worker process should be assigned a different ID. The value defaults to 0 and can be set via properties.

- Sequence number bits (12 bits)

The sequence distinguishes IDs generated within the same millisecond. If more than 4096 (2 to the 12th power) IDs are requested in one millisecond, the generator waits for the next millisecond before continuing.
  • Clock rollback

    A server clock rolling back can produce duplicate sequences, so the default distributed key generator defines a maximum tolerable clock rollback in milliseconds. If the rollback exceeds this threshold, the generator reports an error; if it is within the tolerable range, the generator waits until the clock catches up with the time of the last generated key before continuing. The maximum tolerable rollback defaults to 0 milliseconds and can be set via properties.

  • Restrictions

    With the Snowflake algorithm, a SphereEx-DBPlusEngine cluster cannot exceed 1024 compute nodes, since the 10 worker-process bits allow at most 2 to the 10th power distinct IDs.
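
As a sketch, the worker ID and the clock-rollback tolerance could be set as key generator properties when the rule is created; the property name "max-tolerate-time-difference-milliseconds" is an assumption, so check the Plugin Guide for the exact name:

```sql
CREATE SHARDING TABLE RULE t_order (
    RESOURCES(ds_3307, ds_3308),
    SHARDING_COLUMN=order_id,
    TYPE(NAME=hash_mod, PROPERTIES("sharding-count"=4)),
    KEY_GENERATE_STRATEGY(
        COLUMN=order_id,
        -- worker-id per process, plus the assumed rollback-tolerance property
        TYPE(NAME=snowflake, PROPERTIES("worker-id"=123,
             "max-tolerate-time-difference-milliseconds"=10))
    )
);
```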

Object: In-Database Calculations #

SphereEx-DBPlusEngine provides limited support for in-database computation objects such as stored procedures, triggers, and functions. Against a plain underlying database, it supports them by passing the statements straight through. In scenarios such as data sharding and read/write splitting, these objects are not supported.

DDL Consistency #

For some data sharding scenarios, SphereEx-DBPlusEngine provides a DDL consistency guarantee: during execution, either all shards complete the DDL operation or all shards cancel it, so no structural inconsistency arises. The implementation generates reverse DDL statements to roll back when execution fails. For some operations SphereEx-DBPlusEngine cannot roll back, and the user must manually intervene and retry to complete them.

In-progress operation impact #

| Type | Clause | Rollback Support | Impact on DDL | Impact on DQL | Impact on DML | Impact on TCL | Impact on DistSQL | Visibility |
|------|--------|------------------|---------------|---------------|---------------|---------------|-------------------|------------|
| DATABASE | CREATE | Supported | Can't execute | Can't execute | Can't execute | Can execute | Can't execute | Invisible |
| DATABASE | DROP | Unsupported | Can't execute | Can't execute | Can't execute | Can execute | Can't execute | Invisible |
| SCHEMA | CREATE | Supported | Can't execute | Can't execute | Can't execute | Can execute | Can't execute | Invisible |
| SCHEMA | DROP | Unsupported | Can't execute | Can't execute | Can't execute | Can execute | Can't execute | Invisible |
| SCHEMA | ALTER | Supported | Can't execute | Can execute | Can't execute | Can execute | Can't execute | Invisible |
| TABLE | CREATE | Supported | Can't execute | Can partly execute | Can't execute | Can execute | Can't execute | Invisible |
| TABLE | DROP | Unsupported, retry required | Can't execute | Can't execute | Can't execute | Can't execute | Can't execute | Invisible |
| TABLE | ALTER (CREATE/ADD INDEX) | Supported | Can't execute | Can execute | Can't execute | Can execute | Can't execute | Invisible |
| TABLE | ALTER (DROP INDEX) | Unsupported, retry required | Can't execute | Can execute | Can't execute | Can execute | Can't execute | Invisible |
| TABLE | ALTER (RENAME INDEX) | Supported | Can't execute | Can execute | Can't execute | Can execute | Can't execute | Invisible |
| TABLE | ALTER (ADD COLUMN) | Supported | Can't execute | Can partly execute | Can't execute | Can execute | Can't execute | Invisible |
| TABLE | ALTER (DROP COLUMN) | Unsupported, retry required | Can't execute | Can partly execute | Can't execute | Can execute | Can't execute | Invisible |
| TABLE | ALTER (MODIFY COLUMN) | Unsupported, retry required | Can't execute | Can execute | Can't execute | Can execute | Can't execute | Invisible |
| TABLE | ALTER (RENAME COLUMN) | Supported | Can't execute | Can partly execute | Can't execute | Can execute | Can't execute | Invisible |
| TABLE | ALTER (CHANGE COLUMN) | Unsupported, retry required | Can't execute | Can partly execute | Can't execute | Can execute | Can't execute | Invisible |

TABLE includes sharding tables, broadcast tables, and single tables.

Operation Examples (Normal) #

-- Enable consistency
sql> SET DIST VARIABLE meta_data_consistency_enabled=true;

-- Ensure consistency is enabled
sql> SHOW DIST VARIABLE WHERE NAME = meta_data_consistency_enabled;

-- Execute DDL
sql> ALTER TABLE ...

-- View DDL jobs
sql> SHOW DDL LIST;

-- After normal execution, commit the job
sql> COMMIT DDL '<id>';

-- In case of abnormal execution, rollback the job
sql> ROLLBACK DDL '<id>';

-- If execution is interrupted due to a fault, you can choose to manually repair and retry the job
sql> START DDL '<id>';

Operation Example (Abnormal) #

sql> create table t_order (order_id INT NOT NULL, user_id INT NOT NULL, status VARCHAR(45) NULL,  `test_column` bigint(20) NOT NULL DEFAULT '0',  `test_column2` bigint(20) NOT NULL DEFAULT '0');

-- Bug Mock-up

sql> alter table `t_order` modify column `test_column` varchar(10), modify column `test_column2` varchar(10);
-- ERROR 30000 (HY000): Unknown exception: Table 'demo_ds_1.t_order_0' doesn't exist, 
-- please use SHOW DDL STATUS 'j5601839ceed1d09d73d9d1f7e7f77de75228' to check status.
-- At this point there will be a storage node ds_0 on which the execution succeeds and ds_1 on which the execution fails. Since rollback is not supported, it will trigger a job retry operation


mysql> show ddl list;
+---------------------------------------+---------+--------------+----------------+--------+---------------------+---------------------+
| id                                    | tables  | ddl_type     | job_item_count | active | create_time         | stop_time           |
+---------------------------------------+---------+--------------+----------------+--------+---------------------+---------------------+
| j5601839ceed1d09d73d9d1f7e7f77de75228 | t_order | ALTER_TABLE  | 1              | false  | 2023-02-19 16:27:

-- Fix bug

-- retry "job" on Proxy
sql> start ddl 'j5601839ceed1d09d73d9d1f7e7f77de75228';

-- Proxy view "job" execution completed and automatically deleted
mysql> show ddl list;
Empty set (0.02 sec)

Exception Description #

When performing DDL operations on distributed objects, metadata may in extreme cases become inconsistent across compute nodes. In that case you can refresh the metadata manually by executing the REFRESH DATABASE METADATA db_name FROM GOVERNANCE CENTER; command on the affected compute node. Refer to DistSQL for details.

Online DDL #

SphereEx-DBPlusEngine supports online DDL execution for MySQL databases. The implementation mechanism varies by MySQL version, as described below. When executing DDL via Online DDL, execution starts directly; the START DDL operation is not needed.

props:
  proxy-ddl-strategy: ONLINE
  # Options: ONLINE, DIRECT, AUTO
  # ONLINE: The proxy intervenes and ensures that the DDL is executed online. If the underlying database is MySQL 8.0, the 'instant' method is used for execution (whether or not the user specifies the 'instant' parameter). If it is not MySQL 8.0, the data is copied online for execution.
  # DIRECT: The proxy intervenes for consistency but directly sends the statement to the database for DDL execution.
  # AUTO: The proxy determines whether to execute online, instantly, or copy data online based on user configuration.
  ghost-ddl-row-threshold: 1000000 # Row count threshold to trigger table replication DDL
  ghost-ddl-auto-cutover: true # Automatic switching mode. If disabled, you need to manually execute COMMIT DDL to complete the switch.
  ghost-ddl-drop-table: true # Automatically delete the original table after DDL is completed

You can also specify this through Hint in the statement. The configuration specified in Hint takes precedence over the configuration in props.

/* SHARDINGSPHERE_HINT: PROXY_DDL_STRATEGY=AUTO, GHOST_DDL_ROW_THRESHOLD=1000000, GHOST_DDL_AUTO_CUTOVER=true, GHOST_DDL_DROP_TABLE=true */
ALTER TABLE t MODIFY COLUMN target_col BIGINT;

Constraint Situation #

This is an experimental feature; it is not recommended for use in production environments.