
Configuration Files #

List of cluster ecosystem tool configuration files #

SphereEx-Boot

Configuration file name | Location | Description
cluster-template.yaml | Current operation directory | Boot installation cluster configuration file
console_install.yaml | Current operation directory | Boot installation Console configuration file

SphereEx-Console

Configuration file name | Location | Description
application.yml | conf directory | Internal runtime file of the software; modifying its contents is not recommended
application-prod.yml | conf directory | Runtime parameter file; can be modified
  • application.yml configuration description
#profiles
spring:
  profiles:
    active: prod # Fixed value, cannot be modified
  servlets:
    multipart:
      enabled: true # cannot be modified 
      max-file-size: 10MB # Maximum single file size 
      max-request-size: 10MB # Total single request file size 
# mybatis-plus configuration, cannot be modified
mybatis-plus:
  mapper-locations: classpath:com/sphereex/console/mapper/*.xml
  configuration:
    log-impl: org.apache.ibatis.logging.slf4j.Slf4jImpl
  global-config:
    db-config:
      logic-delete-field: deleted
      logic-delete-value: 1
      logic-not-delete-value: 0
      id-type: auto
      where-strategy: not_empty

# URL whitelist, cannot be modified
secure:
  ignored:
    urls[0]: /
    urls[1]: //*.js
    urls[2]: //*.css
    urls[3]: //*.png
    urls[4]: //*.ico
    urls[5]: //*.html
    urls[6]: /error
    urls[7]: /static/
    urls[8]: /api/login
    urls[9]: /api/logout
    urls[10]: /api/**/download
    urls[11]: /api/user/reset
    urls[12]: /api/monitor/config/reset

# ZooKeeper default configuration, cannot be modified
sphereex:
  install:
    zookeeper:
      tick-time: 2000
      init-limit: 10
      sync-limit: 9
      communication-port: 2888
      election-port: 3888
      
# Thread pool configuration, cannot be modified
thread:
  pool:
    queue-capacity: 100 # queue length
    core-pool-size: 4 # number of core threads
    max-pool-size: 8 # maximum number of threads
    keep-alive-seconds: 600 # idle thread keep-alive time, in seconds
  • application-prod.yml configuration description
server:
  port: 8088 # Startup port, can be modified

software:
  home: /opt/software # Software installation package directory, can be modified

spring:
  datasource:
    url: jdbc:mysql://127.0.0.1:3306/console?serverTimezone=UTC&useSSL=false&allowPublicKeyRetrieval=true # Database connection address, modifiable
    username: root # Database user, modifiable
    password: 123456 # Database password, modifiable
    driverClassName: com.mysql.jdbc.Driver # Database driver, fixed value, not modifiable

jwt:
  header: token # token attribute name not modifiable
  secret: your_secret # token signature key (string) modifiable
  expiration: 30000000000 # token expiration time modifiable

# Console-to-engine connection pool configuration; modification is not recommended
sphereex:
  proxy:
    datasource:
      hikari:
        maximum-pool-size: 4 # The maximum number of connections kept in the connection pool
        connection-timeout: 3000 # The maximum time to wait for a connection from the connection pool, in milliseconds
        minimum-idle: 2 # The minimum number of idle connections
        idle-timeout: 500000 # The maximum time a connection can be idle in the pool, in milliseconds
        max-lifetime: 540000 # The maximum time a connection can live, in milliseconds
  • cluster-template.yaml
When deploying a cluster via SphereEx-Boot, you need to provide a cluster topology configuration file in YAML format containing the following configuration items (a sample file is sketched after this list).
cluster_name: name of the cluster
install_user: user name for logging in to the deployment machines
install_password: password of the user for logging in to the deployment machines
proxy: ShardingSphere-Proxy configuration
  version: ShardingSphere-Proxy version identifier
  file: path to the ShardingSphere-Proxy installation package file on the master
  conf_dir: directory of the ShardingSphere-Proxy business configuration files on the master
  depend_files: path to the ShardingSphere-Proxy driver jar package file on the master
  install_dir: ShardingSphere-Proxy installation directory on the deployment machine
  port: ShardingSphere-Proxy startup port on the deployment machine
  overwrite: whether to reinstall if the installation directory already exists on the deployment machine. Default: true.
  servers: list of deployment machine information
    host: IP address of the deployment machine
    port: ShardingSphere-Proxy startup port on the deployment machine (optional; if not configured, the configuration in proxy prevails)
    install_dir: ShardingSphere-Proxy installation directory on the deployment machine (optional; if not configured, the configuration in proxy prevails)
    agent_conf_file: path to the agent configuration file on the master (optional; if not configured, the configuration in proxy prevails)
    overwrite: whether to reinstall if the installation directory already exists on the deployment machine, default true (optional; if not configured, the configuration in proxy prevails)
zookeeper: ZooKeeper configuration (can be omitted if ZooKeeper is not needed)
  version: ZooKeeper version identifier
  file: path to the ZooKeeper installation package file on the master
  conf_file: path to the ZooKeeper zoo.cfg configuration file on the master
  install_dir: ZooKeeper installation directory on the deployment machine
  data_dir: dataDir value in the deployment machine's ZooKeeper zoo.cfg configuration file
  port: ZooKeeper startup port on the deployment machine
  overwrite: whether to reinstall if the installation directory already exists on the deployment machine. Default: true.
  servers: list of ZooKeeper deployment machine information
    host: IP address of the deployment machine
    myid: myid value of this node in the ZooKeeper cluster
    port: ZooKeeper startup port on the deployment machine (optional; if not configured, the configuration in zookeeper prevails)
    install_dir: ZooKeeper installation directory on the deployment machine (optional; if not configured, the configuration in zookeeper prevails)
    conf_file: path to the ZooKeeper zoo.cfg configuration file on the master (optional; if not configured, the configuration in zookeeper prevails)
    data_dir: dataDir value in the deployment machine's zoo.cfg configuration file (optional; if not configured, the configuration in zookeeper prevails)
    overwrite: whether to reinstall if the installation directory already exists on the deployment machine, default true (optional; if not configured, the configuration in zookeeper prevails)
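
A minimal illustrative topology file, assembled from the fields above, might look like the following. The hosts, paths, ports, and version numbers are placeholders chosen for illustration, not values from this documentation.

cluster_name: demo_cluster
install_user: deploy                                  # placeholder login user on the deployment machines
install_password: deploy_password
proxy:
  version: 5.1.0                                      # placeholder version identifier
  file: /opt/packages/shardingsphere-proxy.tar.gz     # installation package on the master
  conf_dir: /opt/packages/proxy-conf
  depend_files: /opt/packages/mysql-connector-java.jar
  install_dir: /opt/shardingsphere-proxy
  port: 3307
  overwrite: true
  servers:
    - host: 192.168.1.101
    - host: 192.168.1.102
      port: 3308                                      # overrides the port configured under proxy
zookeeper:
  version: 3.6.3                                      # placeholder version identifier
  file: /opt/packages/apache-zookeeper.tar.gz
  conf_file: /opt/packages/zoo.cfg
  install_dir: /opt/zookeeper
  data_dir: /opt/zookeeper/data
  port: 2181
  servers:
    - host: 192.168.1.101
      myid: 1
    - host: 192.168.1.102
      myid: 2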

Cluster component configuration file list #

Component type | Component name | Configuration file name | Location | Description
Storage Nodes | Storage Nodes | Storage Nodes | - | No management
Governance Center | Zookeeper | zoo.cfg | zookeeper installation directory/conf | Governance center configuration file installed by SphereEx-Console
Monitoring Center | Prometheus | prometheus.yml | Installation directory | Monitoring center configuration file installed by SphereEx-Console
Monitoring Plugin | mysql_exporter | mysql_exporter_conf.cnf | Installation directory | SQL monitoring plugin configuration file installed by SphereEx-Console
Monitoring Plugin | Zookeeper_exporter | zoo.cfg | zookeeper installation directory/conf | Zookeeper monitoring plugin configuration file installed by SphereEx-Console
Log Center | Elasticsearch | elasticsearch.yml | Installation directory/config | Log center configuration file installed by SphereEx-Console
Log Center | Logstash | logstash.conf | Installation directory/config | Log center configuration file installed by SphereEx-Console
Log Plugin | Filebeat | filebeat.yml | Installation directory/config | Compute node log plugin configuration file installed by SphereEx-Console
  • zoo.cfg
tickTime={{zoo_tick_time}}
initLimit={{zoo_init_limit}}
syncLimit={{zoo_sync_limit}}
dataLogDir={{zoo_data_dir}}  # Data log path
dataDir={{zoo_data_dir}}  # Data path
clientPort={{zoo_client_port}}
autopurge.snapRetainCount=500
autopurge.purgeInterval=24
4lw.commands.whitelist=*  # Allow commands
admin.enableServer=false
{{zoo_server}}   # Other servers in the cluster, e.g. server.1=192.168.1.148:2888:3888
  • prometheus.yml
global:
  scrape_interval:     15s
  evaluation_interval: 15s
scrape_configs:
- job_name: prometheus  # Self-monitoring. Subsequent monitoring additions and changes only modify the configuration under this field
  static_configs:
  - targets:
    - 10.211.55.3:9090
  • mysql_exporter
"[client]
host={host}  # Monitoring database host
port={port}  # Monitoring database port
user={user}  # Monitoring database user
password={password}"  # Monitoring database password

  • Zookeeper_exporter zoo.cfg

4lw.commands.whitelist=*  # Added to the target file if not already present; if it is already there, it will not be added again
  • elasticsearch.yml
"cluster.name" # Cluster Name
"node.name"# Node Name
"node.master" # Mast Node
"node.data" # Data Node
"network.host"  # host
"http.port"  # Listening port
"transport.tcp.port"  # cluster communication port
"discovery.seed_hosts"  # cluster communication list
"path.data"  # data directory
"path.logs"  # log directory
  • logstash.conf
input {
  beats {
    host => "{{input_host}}"  # host of filebeat
    port => {{input_port}}  # port of filebeat
  } 
}

filter {
    if [log_type] == "general" {
        if [message] =~ "GENERAL-QUERY"  {
            grok {
                match => {
                    "message" => "^\[(?<log_level>.+)\]\s(?<timestamp>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}.\d{3})\s\[(?<thread>.+)\]\s?(?<logger>.+)\s-\sdb:\s(?<db>.+)\suser:\s(?<user>.+)\shost:\s(?<host>.+)\squery_time:\s(?<query_time>\d+)\ssql_type:\s(?<sql_type>.+)\n(?<sql>(.|\r|\n)*)"
                }
            }
            mutate { convert => {"query_time" => "integer" } }
        } else  {
            grok {
                match => {
                    "message" => "^\[(?<log_level>.+)\]\s(?<timestamp>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}.\d{3})\s\[(?<thread>.+)\]\s(?<logger>.+)\s-\s(?<msg>.*)"
                }
            }
        }
        date { match => ["timestamp","yyyy-MM-dd HH:mm:ss.SSS"] timezone => "Asia/Shanghai" target => "@timestamp"}
        mutate {
            add_field => { "[@metadata][log_type]" => "general" }
            remove_field => ["@version", "tags","log_type"]
            strip => ["log_lelev"]
            convert => {"query_time" => "integer"}
        }
    } else if [log_type] == "slow" {
         grok {
            match => { "message" => "^timestamp:\s(?<timestamp>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}.\d{3})\sdb:\s(?<db>.+)\suser:\s(?<user>.+)\shost:\s(?<host>.+)\squery_time:\s(?<query_time>\d+)\ssql_type:\s(?<sql_type>.+)\n(?<sql>(.|\r|\n)*)" }
         }
         date { match => ["timestamp","yyyy-MM-dd HH:mm:ss.SSS"] timezone => "Asia/Shanghai" target => "@timestamp"}
         mutate {
            add_field => { "[@metadata][log_type]" => "slow" }
            remove_field => ["@version", "tags","log_type"]
            convert => {"query_time" => "integer"}
         }
    } else {
       drop { }
    }
}

output {
    elasticsearch {
        hosts => {{output}}  # elasticsearch host
        index => "cluster-%{[@metadata][log_type]}@%{cluster_id}-%{+YYYY.MM.dd}"
  }
}
  • filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - {{general_log_path}}  # Run log path
  multiline.pattern: '^\['
  multiline.negate: true
  multiline.match: after
  fields_under_root: true
  fields:
    cluster_id: {{cluster_id}}  # Cluster id 
    cluster_name: {{cluster_name}}  # Cluster name
    node_id: {{node_id}}  # Node id 
    node_name: {{node_name}}  # Node name
    log_type: general  # general or slow
- type: log
  enabled: true
  paths:
    - {{slow_log_path}}  # Slow log path
  multiline.pattern: '^timestamp:'
  multiline.negate: true
  multiline.match: after
  fields_under_root: true
  fields:
    cluster_id: {{cluster_id}}  # Cluster id
    cluster_name: {{cluster_name}}  # Cluster name
    node_id: {{node_id}}  # Node id
    node_name: {{node_name}}  # Node name
    log_type: slow  # general or slow

output.logstash:
  hosts: ["{{output_host}}"]  # host port of logstash

processors:
- drop_fields:
   fields: ["log","host","input","agent","ecs"]

Cluster configuration file list #

The cluster configuration file is the configuration file for the compute node.

YAML configuration content of up to 3 MB is supported.

Configuration file name | Location | Description
server.yaml | conf directory | Initialization parameter file
logback.xml | conf directory | Log parameter configuration file
agent.yaml | conf directory | Agent parameter file of the compute node
start.sh | bin directory | Configuration file for start-up parameters at runtime

server.yaml configuration description

  • Permission

Used to configure the initial user to log in to the compute node, and to store node data authorizations.

Description of configuration items

authority:
  users:
    - user: # Username and authorized host for logging in to the compute node, in the format <username>@<hostname>; a hostname of % or an empty string means any host is authorized
      password: # User password
  privilege:
    type: # Permission provider type, default value is ALL_PERMITTED

Configuration examples

authority:
  users:
    - user: root@localhost
      password: root
    - user: my_user
      password: pwd
  privilege:
    type: ALL_PERMITTED

The above configuration indicates:

  • user root, which can connect to the Proxy from localhost only, with password root

  • the user my_user, who can connect to the Proxy from any host, with the password pwd.

  • The privilege type is ALL_PERMITTED, which means that all privileges are granted to the user, without authentication.

DATABASE_PERMITTED

authority:
  users:
    - user: root@localhost
      password: root
    - user: my_user
      password: pwd
  privilege:
    type: DATABASE_PERMITTED
    props:
      user-database-mappings: root@localhost=sharding_db, root@localhost=test_db, my_user@=sharding_db

The above configuration indicates:

  • privilege type is DATABASE_PERMITTED, indicating that database-level privileges are granted to the user and need to be configured.

  • user root can connect from the localhost host only and can access sharding_db and test_db;

  • The user my_user can connect from any host and can access the sharding_db.

  • Login authentication

Password authentication

DBPlusEngine-Proxy uses password authentication by default and is configured in the following format:

authority:
  users:
    - user: root@%
      password: root
    - user: sharding
      password: sharding

Two users are specified for DBPlusEngine in this configuration:

  • root: the host @% means this user can access DBPlusEngine from any host, and password specifies the password root.

  • sharding: this user has no host configured, so the default @% also applies, and password specifies the password sharding.

When an administrator needs to restrict the login host for a specific user, this can be specified by username@host, e.g:

- user: user1@192.168.1.111
  password: user1_password

Indicates that the user1 user can only access DBPlusEngine via the address 192.168.1.111, with the authentication password user1_password.

LDAP authentication

Description:

  • Before enabling LDAP authentication, users should first deploy an LDAP server, such as OpenLDAP

  • When using a MySQL client, the cleartext plugin needs to be enabled, e.g.: mysql -h 127.0.0.1 -P 3307 -u root -p --enable-cleartext-plugin

Configure LDAP in DBPlusEngine in the following way.

Example 1

Each user needs to be authenticated by LDAP and use the same DN template.

authority:
  users:
    - user: root@%
    - user: sharding
  authenticators:
    auth_ldap:
      type: LDAP
      props:
        ldap_server_url: ldap://localhost:389
        ldap_dn_template: cn={0},ou=users,dc=example,dc=org
  defaultAuthenticator: auth_ldap

This configuration specifies an authenticator auth_ldap, which is of type LDAP, and the necessary configuration is given in the props:

  • ldap_server_url: the address of the LDAP server to access

  • ldap_dn_template: user DN template

When using the above configuration, the user DNs corresponding to user root and sharding are:

  • root: cn=root,ou=users,dc=example,dc=org

  • sharding: cn=sharding,ou=users,dc=example,dc=org

Example 2

Each user needs to be authenticated by LDAP, but using a different DN template.

authority:
  users:
    - user: root@%
      props:
        ldap_dn: cn=root,ou=admin,dc=example,dc=org
    - user: sharding
  authenticators:
    auth_ldap:
      type: LDAP
      props:
        ldap_server_url: ldap://localhost:389
        ldap_dn_template: cn={0},ou=users,dc=example,dc=org
  defaultAuthenticator: auth_ldap

The difference with example 1 is that user root is not in the same ou as other users, so a separate explicit user DN is assigned to root. When using the above configuration, the user DNs corresponding to user root and sharding are:

  • root: cn=root,ou=admin,dc=example,dc=org

  • sharding: cn=sharding,ou=users,dc=example,dc=org

Hybrid authentication

Hybrid authentication means that some users use password authentication and some users use LDAP authentication, which is a flexible mix to meet the needs of specific security scenarios.

The configuration format for hybrid authentication is as follows:

authority:
  users:
    - user: root@%
      auth: auth_ldap
    - user: sharding
      password: sharding
    - user: user1
      password: password_user1
  authenticators:
    auth_ldap:
      type: LDAP
      props:
        ldap_server_url: ldap://localhost:389
        ldap_dn_template: cn={0},ou=users,dc=example,dc=org

In the above configuration, defaultAuthenticator is not specified, so password authentication is used by default. At the same time, explicitly configuring auth: auth_ldap assigns an authenticator to the user root, requiring that user to log in via LDAP authentication. When using the above configuration, the authentication methods for users root, sharding and user1 are:

  • root: LDAP

  • sharding: password

  • user1: password

Note: In a mixed authentication scenario, the administrator can also enable LDAP authentication by default and set a small number of users to password authentication using auth: password.
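
As a minimal sketch of that variant, reusing the authenticator configured in the examples above (the second user name and its password are placeholders), LDAP can be set as the default while a single user is pinned to password authentication:

authority:
  users:
    - user: root@%              # no auth specified, so the default authenticator (LDAP) applies
    - user: ops_user            # placeholder user forced to password authentication
      password: ops_password
      auth: password
  authenticators:
    auth_ldap:
      type: LDAP
      props:
        ldap_server_url: ldap://localhost:389
        ldap_dn_template: cn={0},ou=users,dc=example,dc=org
  defaultAuthenticator: auth_ldap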

logback.xml configuration notes

Refer to the log configuration documentation.

agent.yaml configuration notes

Refer to the Agent management documentation.