
Containerized Deployment and Management #

Helm Deployment Method for SphereEx-DBPlusEngine-Proxy Cluster #

Background #

This document uses Helm to guide the installation of a ShardingSphere-Proxy instance in a Kubernetes cluster.

Requirements #

Kubernetes 1.20+

kubectl

Helm 3.8.1+

A StorageClass that can dynamically provision PVs (Persistent Volumes) for persistent data (optional)

Procedure #

Minimum Installation #

Add the ShardingSphere-Proxy chart repository to the local Helm repos:

helm repo add sphereex https://xxx

Install ShardingSphere-Proxy charts:

helm install sphereex-dbplusengine sphereex/sphereex-dbplusengine
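
To confirm the release started, you can list the release and watch its pods come up (a quick verification sketch; the release name and default namespace follow the command above):

helm list
kubectl get pods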

Configuration Installation #

Runtime Configuration Modification #

Install the charts named sphereex-dbplusengine with configuration items covering data persistence for the governance-node ZooKeeper, automatic download of the MySQL driver, and the namespace of the governance center.

helm install sphereex-dbplusengine sphereex/sphereex-dbplusengine \
  --set governance.zookeeper.persistence.enabled=true \
  --set governance.zookeeper.persistence.storageClass="<your storageClass name>" \
  --set compute.mysqlConnector.version="<MySQL driver version>" \
  --set compute.serverConfig.mode.repository.props.namespace="governance_ds"
Modify the values.yaml file. For detailed configuration items, refer to the Example and Sample sections below. #
tar -zxvf sphereex-dbplusengine.tar.gz

vim values.yaml

helm install sphereex-dbplusengine ./sphereex-dbplusengine   # assumes the chart directory extracted above

Uninstall #

helm uninstall sphereex-dbplusengine

By default this deletes all release records; add --keep-history to keep them.
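
For example, to uninstall while keeping the release history:

helm uninstall sphereex-dbplusengine --keep-history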

Example #

Governance-Node parameters #

| Name | Description | Value |
| --- | --- | --- |
| governance.enabled | Switch to enable or disable the governance helm chart | true |

Governance-Node ZooKeeper parameters #

| Name | Description | Value |
| --- | --- | --- |
| governance.zookeeper.enabled | Switch to enable or disable the ZooKeeper helm chart | true |
| governance.zookeeper.replicaCount | Number of ZooKeeper nodes | 1 |
| governance.zookeeper.persistence.enabled | Enable persistence on ZooKeeper using PVC(s) | false |
| governance.zookeeper.persistence.storageClass | Persistent Volume storage class | "" |
| governance.zookeeper.persistence.accessModes | Persistent Volume access modes | ["ReadWriteOnce"] |
| governance.zookeeper.persistence.size | Persistent Volume size | 8Gi |
| governance.zookeeper.resources.limits | The resources limits for the ZooKeeper containers | {} |
| governance.zookeeper.resources.requests.memory | The requested memory for the ZooKeeper containers | 256Mi |
| governance.zookeeper.resources.requests.cpu | The requested cpu for the ZooKeeper containers | 250m |

Compute-Node ShardingSphere-Proxy parameters #

| Configuration Item | Description | Value |
| --- | --- | --- |
| compute.image.repository | Image name of ShardingSphere-Proxy | apache/shardingsphere-proxy |
| compute.image.pullPolicy | The policy for pulling the ShardingSphere-Proxy image | IfNotPresent |
| compute.image.tag | ShardingSphere-Proxy image tag | 5.1.2 |
| compute.imagePullSecrets.username | Username for pulling from a private repository | "" |
| compute.imagePullSecrets.password | Password for pulling from a private repository | "" |
| compute.resources.limits | The resources limits for the ShardingSphere-Proxy containers | {} |
| compute.resources.requests.memory | The requested memory for the ShardingSphere-Proxy containers | 2Gi |
| compute.resources.requests.cpu | The requested cpu for the ShardingSphere-Proxy containers | 200m |
| compute.replicas | Number of cluster replicas | 3 |
| compute.service.type | ShardingSphere-Proxy network mode | ClusterIP |
| compute.service.port | ShardingSphere-Proxy exposed port | 3307 |
| compute.mysqlConnector.version | MySQL connector version | 5.1.49 |
| compute.startPort | ShardingSphere-Proxy start port | 3307 |
| compute.agent.enabled | Start configuration for the SphereEx-DBPlusEngine-Proxy agent | false |
| compute.serverConfig | Server configuration file for ShardingSphere-Proxy | "" |

Compute-Node ShardingSphere-Proxy ServerConfiguration authority parameters #

| Configuration Item | Description | Value |
| --- | --- | --- |
| serverConfig.authority.privilege.type | Authority provider for the storage node; the default value is ALL_PERMITTED | ALL_PERMITTED |
| serverConfig.authority.users[0].password | Password for the compute node | root |
| serverConfig.authority.users[0].user | Username and authorized host for the compute node. Format: <username>@<hostname>; a hostname of % or an empty string means the authorized host is not restricted | root@% |

Compute-Node ShardingSphere-Proxy ServerConfiguration mode Configuration parameters #

| Configuration Item | Description | Value |
| --- | --- | --- |
| serverConfig.mode.type | Type of mode configuration. Currently only Cluster mode is supported | Cluster |
| serverConfig.mode.repository.props.namespace | Namespace of registry center | governance_ds |
| serverConfig.mode.repository.props.server-lists | Server lists of registry center | {{ printf "%s-zookeeper.%s:2181" .Release.Name .Release.Namespace }} |
| serverConfig.mode.repository.props.maxRetries | Max retries of client connection | 3 |
| serverConfig.mode.repository.props.operationTimeoutMilliseconds | Milliseconds of operation timeout | 5000 |
| serverConfig.mode.repository.props.retryIntervalMilliseconds | Milliseconds of retry interval | 500 |
| serverConfig.mode.repository.props.timeToLiveSeconds | Seconds for ephemeral data to live | 60 |
| serverConfig.mode.repository.type | Type of persist repository. Currently only ZooKeeper is supported | ZooKeeper |

Compute-Node ShardingSphere-Proxy ServerConfiguration props Configuration parameters #

| Configuration Item | Description | Value |
| --- | --- | --- |
| compute.serverConfig.props.proxy-frontend-database-protocol-type | Database protocol type of the proxy frontend (supports PostgreSQL, openGauss, MariaDB, MySQL). If you need to connect with a non-MySQL protocol, change this configuration item | MySQL |
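
For example, to expose a PostgreSQL-protocol frontend instead of the MySQL default (a sketch using the chart value listed above):

helm install sphereex-dbplusengine sphereex/sphereex-dbplusengine \
  --set compute.serverConfig.props.proxy-frontend-database-protocol-type=PostgreSQL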

Note:

The environment variable CGROUP_MEM_OPTS is supported; it sets memory-related JVM parameters in the container environment. The default values in the startup script are:

-XX:InitialRAMPercentage=80.0 -XX:MaxRAMPercentage=80.0 -XX:MinRAMPercentage=80.0
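
As an illustration (the exact override mechanism depends on your deployment tooling), the variable can be set on the proxy container like any other Kubernetes env entry; the 70.0 values here are hypothetical:

env:
  - name: CGROUP_MEM_OPTS   # read by the startup script to set JVM memory flags
    value: "-XX:InitialRAMPercentage=70.0 -XX:MaxRAMPercentage=70.0 -XX:MinRAMPercentage=70.0"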

Sample #

values.yaml

#
#   Copyright © 2022,Beijing Sifei Software Technology Co., LTD.
#   All Rights Reserved.
#
#   Licensed under the Apache License, Version 2.0 (the "License");
#   you may not use this file except in compliance with the License.
#   You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
#   Unless required by applicable law or agreed to in writing, software
#   distributed under the License is distributed on an "AS IS" BASIS,
#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#   See the License for the specific language governing permissions and
#   limitations under the License.
#

## @section Governance-Node parameters
## @param governance.enabled Switch to enable or disable the governance helm chart
##
governance:
  enabled: true
  ## @section Governance-Node ZooKeeper parameters
  zookeeper:
    ## @param governance.zookeeper.enabled Switch to enable or disable the ZooKeeper helm chart
    ##
    enabled: true
    ## @param governance.zookeeper.replicaCount Number of ZooKeeper nodes
    ##
    replicaCount: 3
    ## ZooKeeper Persistence parameters
    ## ref: https://kubernetes.io/docs/user-guide/persistent-volumes/
    ## @param governance.zookeeper.persistence.enabled Enable persistence on ZooKeeper using PVC(s)
    ## @param governance.zookeeper.persistence.storageClass Persistent Volume storage class
    ## @param governance.zookeeper.persistence.accessModes Persistent Volume access modes
    ## @param governance.zookeeper.persistence.size Persistent Volume size
    ##
    persistence:
      enabled: true
      storageClass: ""
      accessModes:
        - ReadWriteOnce
      size: 8Gi
    ## ZooKeeper's resource requests and limits
    ## ref: https://kubernetes.io/docs/user-guide/compute-resources/
    ## @param governance.zookeeper.resources.limits The resources limits for the ZooKeeper containers
    ## @param governance.zookeeper.resources.requests.memory The requested memory for the ZooKeeper containers
    ## @param governance.zookeeper.resources.requests.cpu The requested cpu for the ZooKeeper containers
    ##
    resources:
      limits: {}
      requests:
        memory: 4Gi
        cpu: 2

## @section Compute-Node parameters
## 
compute:
  ## @section Compute-Node ShardingSphere-Proxy parameters
  ## ref: https://kubernetes.io/docs/concepts/containers/images/
  ## @param compute.image.repository Image name of ShardingSphere-Proxy.
  ## @param compute.image.pullPolicy The policy for pulling ShardingSphere-Proxy image
  ## @param compute.image.tag ShardingSphere-Proxy image tag
  ##
  image:
    repository: "uhub.service.ucloud.cn/sphere-ex/sphereex-dbplusengine-proxy"
    pullPolicy: IfNotPresent
    ## Overrides the image tag whose default is the chart appVersion.
    ##
    tag: "1.3.1"
  imagePullSecrets:
    username: ""
    password: ""
  ## ShardingSphere-Proxy resource requests and limits
  ## ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
  ## @param compute.resources.limits The resources limits for the ShardingSphere-Proxy containers
  ## @param compute.resources.requests.memory The requested memory for the ShardingSphere-Proxy containers
  ## @param compute.resources.requests.cpu The requested cpu for the ShardingSphere-Proxy containers
  ##
  resources:
    requests:
      cpu: 2
      memory: 2Gi
    limits:
      cpu: 4
      memory: 4Gi
  ## ShardingSphere-Proxy Deployment Configuration
  ## ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/
  ## @param compute.replicas Number of cluster replicas
  ##
  replicas: 3
  ## @param compute.service.type ShardingSphere-Proxy network mode
  ## @param compute.service.port ShardingSphere-Proxy expose port
  ##
  service:
    type: LoadBalancer
    port: 3307
  ## MySQL connector Configuration
  ## ref: https://shardingsphere.apache.org/document/current/en/quick-start/shardingsphere-proxy-quick-start/
  ## @param compute.mysqlConnector.version MySQL connector version
  ##
  mysqlConnector:
    version: ""
  ## @param compute.startPort ShardingSphere-Proxy start port
  ## ShardingSphere-Proxy start port
  ## ref: https://shardingsphere.apache.org/document/current/en/user-manual/shardingsphere-proxy/startup/docker/
  ##
  startPort: 3307
  agent:
    enabled: true
  terminationGracePeriodSeconds: 30
  ## @section Compute-Node ShardingSphere-Proxy ServerConfiguration parameters
  ## NOTE: If you use the sub-charts to deploy Zookeeper, the server-lists field must be "{{ printf \"%s-zookeeper.%s:2181\" .Release.Name .Release.Namespace }}",
  ## otherwise please fill in the correct zookeeper address
  ## The server.yaml is auto-generated based on this parameter.
  ## If it is empty, the server.yaml is also empty.
  ## ref: https://shardingsphere.apache.org/document/current/en/user-manual/shardingsphere-jdbc/yaml-config/mode/
  ## ref: https://shardingsphere.apache.org/document/current/en/user-manual/shardingsphere-jdbc/builtin-algorithm/metadata-repository/
  ##
  serverConfig:
    ## @section Compute-Node ShardingSphere-Proxy ServerConfiguration authority parameters
    ## NOTE: It is used to set up initial user to login compute node, and authority data of storage node.
    ## ref: https://shardingsphere.apache.org/document/current/en/user-manual/shardingsphere-proxy/yaml-config/authentication/
    ## @param compute.serverConfig.authority.privilege.type authority provider for storage node, the default value is ALL_PERMITTED
    ## @param compute.serverConfig.authority.users[0].password Password for compute node.
    ## @param compute.serverConfig.authority.users[0].user Username,authorized host for compute node. Format: <username>@<hostname> hostname is % or empty string means do not care about authorized host
    ##
    authority:
      privilege:
        type: ALL_PERMITTED
      users:
        - password: root
          user: root@%
    ## @section Compute-Node ShardingSphere-Proxy ServerConfiguration mode Configuration parameters
    ## @param compute.serverConfig.mode.type Type of mode configuration. Now only support Cluster mode
    ## @param compute.serverConfig.mode.repository.props.namespace Namespace of registry center
    ## @param compute.serverConfig.mode.repository.props.server-lists Server lists of registry center
    ## @param compute.serverConfig.mode.repository.props.maxRetries Max retries of client connection
    ## @param compute.serverConfig.mode.repository.props.operationTimeoutMilliseconds Milliseconds of operation timeout
    ## @param compute.serverConfig.mode.repository.props.retryIntervalMilliseconds Milliseconds of retry interval
    ## @param compute.serverConfig.mode.repository.props.timeToLiveSeconds Seconds of ephemeral data live
    ## @param compute.serverConfig.mode.repository.type Type of persist repository. Now only support ZooKeeper
    ## @param compute.serverConfig.mode.overwrite Whether overwrite persistent configuration with local configuration
    ##
    mode:
      repository:
        props:
          maxRetries: 3
          namespace: ""
          operationTimeoutMilliseconds: 5000
          retryIntervalMilliseconds: 500
          server-lists: "{{ printf \"%s-zookeeper.%s:2181\" .Release.Name .Release.Namespace }}"
          timeToLiveSeconds: 600
        type: ZooKeeper
      type: Cluster
    props:
      proxy-frontend-database-protocol-type: MySQL

Auto Scaling on Cloud (HPA) #

Definition #

In Kubernetes, the HorizontalPodAutoscaler (HPA) feature can automatically update workload resources to meet requirements and scale workloads accordingly.

SphereEx-Operator utilizes this feature along with relevant indicators from SphereEx-DBPlusEngine to enable automatic scaling of capacity in the Kubernetes cluster during operation.

Once the auto scaling feature is enabled, the SphereEx-Operator will apply an HPA object in the Kubernetes cluster when deploying a SphereEx-DBPlusEngine cluster. This ensures that the cluster can dynamically adjust its resources based on demand, optimizing performance and minimizing downtime.

Impact on the System #

After enabling auto scaling, manually setting the replica count of SphereEx-DBPlusEngine in Kubernetes will no longer have any effect. The number of replicas in the cluster will be controlled by the minimum and maximum values set in the HPA controller, allowing for elastic scaling between these two values.

The minimum number of replicas for SphereEx-DBPlusEngine will also be determined by the minimum value set in the HPA configuration. Once auto scaling is enabled, the cluster will start with the minimum number of replicas specified by the HPA.

It’s important to note that HPA enables horizontal scaling for SphereEx-DBPlusEngine. This means that when the workload increases, more pods are deployed to handle the load.

This is in contrast to vertical scaling, where additional resources (such as memory or CPU) are allocated to the existing pods in the workload.

Limitations #

  • Currently, due to the limited metrics exposed by SphereEx-DBPlusEngine, load can only be measured through runtime CPU. In the future, additional indicators will be added to enrich the runtime load calculation of SphereEx-DBPlusEngine.
  • To use the HPA function of SphereEx-DBPlusEngine in Kubernetes, you must install metrics-server in your cluster and ensure that kubectl top works normally.
  • When creating the SphereEx-DBPlusEngine cluster, the SphereEx-Operator establishes load balancing in front of the cluster, linking your application and SphereEx-DBPlusEngine through the load balancer.
  • Since SphereEx-DBPlusEngine establishes long-lived connections with your application, scaling out may not significantly reduce the load on existing connections; the effect of scaling out applies only to newly established connections.
  • Scaling in can cause corresponding issues. During scale-in, your application may experience disruptions because a replica being reduced is removed from the load balancer, destroying the long-lived connections between your application and that SphereEx-DBPlusEngine instance.

How it works #

Scaling with HPA involves horizontal scaling of SphereEx-DBPlusEngine. Horizontal scaling refers to deploying more pods in response to increased load.

This is distinct from vertical scaling, which involves allocating more resources (such as memory or CPU) to a running pod.

If the load decreases and the number of pods is above the configured minimum, the HorizontalPodAutoscaler indicates that the workload resource (Deployment, StatefulSet, or similar) should be scaled down.

In a Kubernetes cluster, a controller periodically queries the metrics defined in the HPA associated with the relevant resources. Once a metric's threshold is met, the corresponding resources scale out or in according to the calculation formula.

In the SphereEx-Operator workflow, the HPA object acts on the Deployment object of SphereEx-DBPlusEngine, continuously querying the CPU utilization of each SphereEx-DBPlusEngine replica.

The CPU usage is read from the container's /sys/fs/cgroup/cpu/cpuacct.usage, and the value of the automaticScaling.target field in shardingsphere.sphere-ex.com/v1alpha1.proxy is used as the percentage threshold in the ongoing calculation.

When the calculated value reaches the threshold, the HPA controller computes the number of replicas according to the following formula:

expectedReplicas = ceil[currentReplicas * (currentMetricValue / expectedMetricValue)]

It is important to note that the CPU utilization metric is measured against the CPU value in the resources.requests field of each replica.
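
For instance (an illustrative calculation, not taken from the source): with 3 replicas, a target of 70%, and a measured average CPU utilization of 90% of requests, the controller computes:

expectedReplicas = ceil[3 * (90 / 70)] = ceil[3.86] = 4

so one replica is added.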

Before checking the tolerance and determining the final value, the control plane also considers whether any metrics are missing and how many pods are ready.

When scaling on CPU metrics, any pod that is not yet ready (for example, still initializing, or possibly unhealthy), or whose most recent metric sample was collected before it became ready, is also set aside.

Schematic diagram (figure omitted).

Parameters #

| Name | Description | Default Value |
| --- | --- | --- |
| automaticScaling.enable | Whether the SphereEx-DBPlusEngine-Proxy cluster enables auto scaling | false |
| automaticScaling.scaleUpWindows | SphereEx-DBPlusEngine-Proxy scale-out stabilization window | 30 |
| automaticScaling.scaleDownWindows | SphereEx-DBPlusEngine-Proxy scale-in stabilization window | 30 |
| automaticScaling.target | SphereEx-DBPlusEngine-Proxy auto scaling threshold, as a percentage. Note: at this stage, only CPU is supported for scaling | 70 |
| automaticScaling.maxInstance | SphereEx-DBPlusEngine-Proxy maximum number of scale-out replicas | 4 |
| automaticScaling.minInstance | SphereEx-DBPlusEngine-Proxy minimum number of startup replicas; scale-in will not go below this number | 1 |

Notes

When the automaticScaling function of SphereEx-DBPlusEngine is turned on, the HPA takes over the SphereEx-DBPlusEngine replica count, and scale-in may occur, causing brief connection interruptions for the application.

Disabling the automaticScaling function of SphereEx-DBPlusEngine results in the corresponding HPA being deleted.

Procedure #

  • After modifying values.yaml according to the following configuration, execute helm install to create a new SphereEx-DBPlusEngine cluster.
  • Or use helm upgrade to update the existing SphereEx-DBPlusEngine cluster configuration.

Sample #

To turn on the auto scaling function of SphereEx-DBPlusEngine in SphereEx-Operator, enable the following configuration in the values.yaml of the SphereEx-DBPlusEngine-cluster charts.

automaticScaling:
  enable: true
  scaleUpWindows: 30
  scaleDownWindows: 30
  target: 20
  maxInstance: 4
  minInstance: 2
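
To apply this change to an existing cluster, a helm upgrade along the lines of the following should work (a sketch; the release and chart names follow the earlier install examples):

helm upgrade sphereex-dbplusengine sphereex/sphereex-dbplusengine -f values.yaml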

Using Operator #

What is DBPlusEngine-Operator #

Kubernetes’ operator mode allows you to expand the capabilities of the cluster by associating controllers for one or more custom resources, without modifying Kubernetes’ own code. Operators act as the Kubernetes API client and serve as the custom resource controllers.

The operator mode is designed to meet the key objectives of DevOps teams who are responsible for managing one or a group of services.

DBPlusEngine-Operator enables users to quickly deploy a set of DBPlusEngine-Proxy clusters in a Kubernetes environment. It is responsible for deploying and maintaining relevant resources around the cluster, as well as monitoring the cluster’s status.

DBPlusEngine-Mate is a governance center component developed by SphereEx based on Kubernetes cloud native.

Terms #

CRD (CustomResourceDefinition): a user-defined resource definition. DBPlusEngine-Operator deploys a complete set of DBPlusEngine-Proxy clusters in a Kubernetes cluster by means of CRs (CustomResources) defined by the CRD.

Advantages #

  • Simple Configuration:

Deploying a complete set of DBPlusEngine-Proxy clusters in the cluster is as easy as writing a simple YAML file.

  • Easy to Customize:

By modifying the CR YAML file, features such as horizontal scaling can be easily added or customized.

  • Simple Operation and Maintenance:

Using DBPlusEngine-Operator does not interfere with the status of DBPlusEngine-Proxy in the cluster. The operator automatically detects the status of the cluster and corrects any issues, making operation and maintenance simple and hassle-free.

Architecture #

Architecture

Install DBPlusEngine-Operator #

Configure the Operator Parameters (see below); the configuration file is located at dbplusengine-operator/values.yaml.

Run

kubectl create ns dbplusengine-operator
helm install dbplusengine-operator dbplusengine-operator -n dbplusengine-operator

Install DBPlusEngine-Proxy cluster #

Configure the Cluster Parameters (see below); the configuration file is located at dbplusengine-proxy/values.yaml.

Move the sphere-ex.license file to dbplusengine-proxy/license, keeping the file name sphere-ex.license.

kubectl create ns dbplusengine
helm install dbplusengine-proxy dbplusengine-proxy -n dbplusengine
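
A quick way to confirm both releases are running (namespaces as created above):

kubectl get pods -n dbplusengine-operator
kubectl get pods -n dbplusengine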

Operator Parameters #

DBPlusEngine-Proxy operator parameters

| Name | Description | Value |
| --- | --- | --- |
| replicaCount | Operator replica count | 2 |
| image.repository | Operator image name | sphereex/dbplusengine-operator |
| image.pullPolicy | Image pull policy | IfNotPresent |
| image.tag | Image tag | 0.0.1 |
| imagePullSecrets | Image pull secret for a private repository | [] |
| resources | Resources required by the operator | {} |
| webhook.port | Operator webhook boot port | 9443 |
| health.healthProbePort | Operator health check port | 8081 |

Cluster Parameters #

DBPlusEngine-Proxy cluster parameters

| Name | Description | Value |
| --- | --- | --- |
| replicaCount | Number of replicas the DBPlusEngine-Operator-Cluster starts with. Note: after you enable automaticScaling, this parameter no longer takes effect | "1" |
| automaticScaling.enable | Whether the DBPlusEngine-Operator-Cluster has auto scaling enabled | false |
| automaticScaling.scaleUpWindows | DBPlusEngine-Operator-Cluster scale-out stabilization window | 30 |
| automaticScaling.scaleDownWindows | DBPlusEngine-Operator-Cluster scale-in stabilization window | 30 |
| automaticScaling.target | DBPlusEngine-Operator-Cluster auto scaling threshold, as a percentage. Note: at this stage, only CPU is supported as a scaling metric | 70 |
| automaticScaling.maxInstance | DBPlusEngine-Operator-Cluster maximum number of scale-out replicas | 4 |
| automaticScaling.minInstance | DBPlusEngine-Operator-Cluster minimum number of boot replicas; scale-in will not go below this number | 1 |
| image.registry | DBPlusEngine-Operator-Cluster image host | docker.io |
| image.repository | DBPlusEngine-Operator-Cluster image repository name | sphereex/dbplusengine-proxy |
| image.tag | DBPlusEngine-Operator-Cluster image tag | 5.1.2 |
| resources | DBPlusEngine-Operator-Cluster startup resource requirements; after automaticScaling is enabled, the request resources multiplied by the target percentage are used to trigger the scaling action | {} |
| service.type | DBPlusEngine-Operator-Cluster external exposure mode | ClusterIP |
| service.port | DBPlusEngine-Operator-Cluster externally exposed port | 3307 |
| startPort | DBPlusEngine-Operator-Cluster boot port | 3307 |
| imagePullSecrets | DBPlusEngine-Operator-Cluster private image repository secret | [] |
| mySQLDriver.version | DBPlusEngine-Operator-Cluster MySQL driver version; the driver is not downloaded if empty | "" |
| GN.mode | DBPlusEngine-Operator-Cluster governance center mode, supporting sidecar/zookeeper | zookeeper |
| GN.SidecarRegistry | DBPlusEngine-Operator-Cluster sidecar mode image host | <image repository host> |
| GN.SidecarRepository | DBPlusEngine-Operator-Cluster sidecar mode image repository name | sphereex/dbplusengine-sidecar |
| GN.SidecarTag | DBPlusEngine-Operator-Cluster sidecar mode image tag | 0.2.0 |
| GN.sidecarServerAddr | DBPlusEngine-Operator-Cluster sidecar mode server address | server address |
| withAgent | Whether the DBPlusEngine-Operator-Cluster activates the agent | false |

Compute Node DBPlusEngine-Operator-Cluster Server Authority Configuration Items

| Name | Description | Value |
| --- | --- | --- |
| serverConfig.authority.privilege.type | The provider type of data authorization for storage nodes; the default value is ALL_PERMITTED | ALL_PERMITTED |
| serverConfig.authority.users[0].password | The password used to log in to the compute node | root |
| serverConfig.authority.users[0].user | The username and authorized host used to log in to the compute node. Format: <username>@<hostname>; a hostname of % or an empty string means no restriction on the authorized host | root@% |

Compute Node DBPlusEngine-Operator-Cluster Server Mode Configuration Items

| Name | Description | Value |
| --- | --- | --- |
| serverConfig.mode.type | The running mode type. At this stage, only Cluster mode is supported | Cluster |
| serverConfig.mode.repository.props.namespace | Registry center namespace | governance_ds |
| serverConfig.mode.repository.props.server-lists | Registry center connection address | {{ printf "%s-zookeeper.%s:2181" .Release.Name .Release.Namespace }} |
| serverConfig.mode.repository.props.maxRetries | The maximum number of client connection retries | 3 |
| serverConfig.mode.repository.props.operationTimeoutMilliseconds | Client operation timeout in milliseconds | 5000 |
| serverConfig.mode.repository.props.retryIntervalMilliseconds | Interval between retries in milliseconds | 500 |
| serverConfig.mode.repository.props.timeToLiveSeconds | Time in seconds before temporary data is invalidated | 60 |
| serverConfig.mode.repository.type | Persist repository type. Only ZooKeeper is supported at this stage | ZooKeeper |

Description: The environment variable CGROUP_MEM_OPTS is supported; it sets memory-related parameters in the container environment. The default values in the script are:

-XX:InitialRAMPercentage=80.0 -XX:MaxRAMPercentage=80.0 -XX:MinRAMPercentage=80.0

Governance Node ZooKeeper Configuration Items

| Configuration Item | Description | Value |
| --- | --- | --- |
| zookeeper.enabled | Switch to enable or disable the ZooKeeper chart | true |
| zookeeper.replicaCount | Number of ZooKeeper nodes | 1 |
| zookeeper.persistence.enabled | Whether ZooKeeper uses a PersistentVolumeClaim to request a PersistentVolume | false |
| zookeeper.persistence.storageClass | StorageClass for the PersistentVolume | "" |
| zookeeper.persistence.accessModes | Access modes of the PersistentVolume | ["ReadWriteOnce"] |
| zookeeper.persistence.size | PersistentVolume size | 8Gi |

Sample #

dbplusengine-operator/values.yaml

## @section DBPlusEngine-Operator-Cluster operator parameters
## @param replicaCount operator replica count
##
replicaCount: 2
image:
  ## @param image.repository operator image name
  ##
  repository: "sphere-ex/dbplusengine-operator"
  ## @param image.pullPolicy image pull policy
  ##
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  ## @param image.tag image tag
  ##
  tag: "0.1.0"
## @param imagePullSecrets image pull secret for private repositories
## e.g:
## imagePullSecrets:
##   - name: mysecret
##
imagePullSecrets: []
## @param resources resources required by the operator
## e.g:
## resources:
##   limits:
##     cpu: 2
##   requests:
##     cpu: 2
##
resources: {}
## @param webhook.port operator webhook boot port
##
webhook:
  port: 9443
## @param health.healthProbePort operator health check port
##
health:
  healthProbePort: 8081

dbplusengine-proxy/values.yaml

#
# Copyright © 2022,Beijing Sifei Software Technology Co., LTD.
# All Rights Reserved.
# Unauthorized copying of this file, via any medium is strictly prohibited.
# Proprietary and confidential
#

## @section DBPlusEngine-Proxy cluster parameters
## @param replicaCount Number of replicas the DBPlusEngine-Proxy cluster starts with. Note: after automaticScaling is enabled, this parameter no longer takes effect
##
replicaCount: "2"

## GN: governance center configuration (mode supports sidecar/zookeeper)
##
GN:
  mode: zookeeper
  SidecarRegistry: uhub.service.ucloud.cn
  SidecarRepository: sphere-ex/dbplusengine-sidecar
  SidecarTag: "0.2.0"
  sidecarServerAddr: "so-dbplusengine-operator.ss"
## @param automaticScaling.enable Whether the DBPlusEngine-Proxy cluster enables auto scaling
## @param automaticScaling.scaleUpWindows DBPlusEngine-Proxy scale-out stabilization window
## @param automaticScaling.scaleDownWindows DBPlusEngine-Proxy scale-in stabilization window
## @param automaticScaling.target DBPlusEngine-Proxy auto scaling threshold, as a percentage. Note: at this stage, only CPU is supported as the scaling metric
## @param automaticScaling.maxInstance DBPlusEngine-Proxy maximum number of scale-out replicas
## @param automaticScaling.minInstance DBPlusEngine-Proxy minimum number of startup replicas; scale-in will not go below this number
##
automaticScaling:
  enable: false
  scaleUpWindows: 30
  scaleDownWindows: 30
  target: 20
  maxInstance: 4
  minInstance: 1
## @param image.registry DBPlusEngine-Proxy image host
## @param image.repository DBPlusEngine-Proxy image repository name
## @param image.tag DBPlusEngine-Proxy image tag
##
image:
  registry: uhub.service.ucloud.cn
  repository: sphere-ex/dbplusengine-proxy
  tag: "1.2.0"
withAgent: false
## @param resources DBPlusEngine-Proxy startup resource requirements; after automaticScaling is enabled, the request resources multiplied by the target percentage give the actual utilization that triggers scaling
## e.g:
## resources:
##   limits:
##     cpu: 2
##   requests:
##     cpu: 2
##
resources:
  limits:
    cpu: '2'
  requests:
    cpu: '2'
## @param service.type DBPlusEngine-Proxy external exposure mode
## @param service.port DBPlusEngine-Proxy externally exposed port
##
service:
  type: ClusterIP
  port: 3307
## @param startPort DBPlusEngine-Proxy boot port
##
startPort: 3307
## @param imagePullSecrets DBPlusEngine-Proxy private image repository secret
## e.g:
## imagePullSecrets:
##   - name: mysecret
##
imagePullSecrets:
  username: ""
  password: ""
## @param mySQLDriver.version DBPlusEngine-Proxy MySQL driver version; if empty, the driver will not be downloaded
##
mySQLDriver:
  version: "5.1.43"
## @section  DBPlusEngine-Proxy ServerConfiguration parameters
## NOTE: If you use the sub-charts to deploy Zookeeper, the server-lists field must be "{{ printf \"%s-zookeeper.%s:2181\" .Release.Name .Release.Namespace }}",
## otherwise please fill in the correct zookeeper address
## The server.yaml is auto-generated based on this parameter.
## If it is empty, the server.yaml is also empty.
## ref: https://shardingsphere.apache.org/document/current/en/user-manual/shardingsphere-jdbc/yaml-config/mode/
## ref: https://shardingsphere.apache.org/document/current/en/user-manual/shardingsphere-jdbc/builtin-algorithm/metadata-repository/
##
serverConfig:
  ## @section Compute-Node DBPlusEngine-Proxy ServerConfiguration authority parameters
  ## NOTE: It is used to set up initial user to login compute node, and authority data of storage node.
  ## @param serverConfig.authority.privilege.type authority provider for storage node, the default value is ALL_PERMITTED
  ## @param serverConfig.authority.users[0].password Password for compute node.
  ## @param serverConfig.authority.users[0].user Username,authorized host for compute node. Format: <username>@<hostname> hostname is % or empty string means do not care about authorized host
  ##
  authority:
    privilege:
      type: ALL_PERMITTED
    users:
      - password: root
        user: root@%
  ## @section Compute-Node DBPlusEngine-Proxy ServerConfiguration mode Configuration parameters
  ## @param serverConfig.mode.type Type of mode configuration. Now only support Cluster mode
  ## @param serverConfig.mode.repository.props.namespace Namespace of registry center
  ## @param serverConfig.mode.repository.props.server-lists Server lists of registry center
  ## @param serverConfig.mode.repository.props.maxRetries Max retries of client connection
  ## @param serverConfig.mode.repository.props.operationTimeoutMilliseconds Milliseconds of operation timeout
  ## @param serverConfig.mode.repository.props.retryIntervalMilliseconds Milliseconds of retry interval
  ## @param serverConfig.mode.repository.props.timeToLiveSeconds Seconds of ephemeral data live
  ## @param serverConfig.mode.repository.type Type of persist repository. Now only support ZooKeeper

  ##
#  mode:
#    repository:
#      props:
#        namespace: matenamespace
#        server-lists: "127.0.0.1:21506"
#      type: SphereEx:MATE
#    type: Cluster
  mode:
    repository:
      props:
        maxRetries: 3
        namespace: governance_ds
        operationTimeoutMilliseconds: 5000
        retryIntervalMilliseconds: 500
        server-lists: "{{ printf \"%s-zookeeper.%s:2181\" .Release.Name .Release.Namespace }}"
        timeToLiveSeconds: 600
      type: ZooKeeper
    type: Cluster
## @section ZooKeeper chart parameters

## ZooKeeper chart configuration
## https://github.com/bitnami/charts/blob/master/bitnami/zookeeper/values.yaml
##
zookeeper:
  ## @param zookeeper.enabled Switch to enable or disable the ZooKeeper helm chart
  ##
  enabled: true
  ## @param zookeeper.replicaCount Number of ZooKeeper nodes
  ##
  replicaCount: 1
  ## ZooKeeper Persistence parameters
  ## ref: https://kubernetes.io/docs/user-guide/persistent-volumes/
  ## @param zookeeper.persistence.enabled Enable persistence on ZooKeeper using PVC(s)
  ## @param zookeeper.persistence.storageClass Persistent Volume storage class
  ## @param zookeeper.persistence.accessModes Persistent Volume access modes
  ## @param zookeeper.persistence.size Persistent Volume size
  ##
  persistence:
    enabled: false
    storageClass: ""
    accessModes:
      - ReadWriteOnce
    size: 8Gi

Clean #

helm uninstall dbplusengine-proxy -n dbplusengine

helm uninstall dbplusengine-operator -n dbplusengine-operator

kubectl delete crd clusters.dbplusengine.sphere-ex.com \
proxyconfigs.dbplusengine.sphere-ex.com \
plocks.dbplusengine.sphere-ex.com \
pmetadata.dbplusengine.sphere-ex.com \
pnodes.dbplusengine.sphere-ex.com \
ppipelines.dbplusengine.sphere-ex.com \
psys.dbplusengine.sphere-ex.com \
pworkids.dbplusengine.sphere-ex.com
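
To verify the cleanup (an optional check; the command should return nothing once all CRDs are gone):

kubectl get crd | grep dbplusengine.sphere-ex.com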