JVM Performance Tuning in Practice: UseParallelGC

Preface
In production, have you run into problems like these?
1) Free space in the old generation shrinks steadily after the system has been running for a while?
2) During flash sales or similar traffic spikes, old-generation usage suddenly surges and triggers a Full GC, and the prolonged FGC time drives the CPU to 100%?
3) The eden space is not actually sized according to your configuration, and survivor-space objects are promoted to the old generation before reaching the default tenuring threshold of 15?
4) How do you avoid Full GC, and how do you deal with one once it happens?
…
In what follows I will start from real-world scenarios and tune three garbage collectors in turn: ParallelGC, Concurrent Mark Sweep (CMS), and G1, solving the various JVM problems we run into in production. JVM tuning will continue to be updated as a series.

The GC tuning below assumes JDK 1.8.

JDK 1.8 uses ParallelGC by default: Parallel Scavenge for the young generation and Parallel Old for the old generation.
Tuning this parallel (throughput) collector generally comes down to:
1) Disabling the JVM's adaptive sizing policy.
2) Tuning the survivor spaces.
With these two adjustments, object allocation and promotion follow the thresholds you set.

Disabling the JVM's Adaptive Sizing Policy

First, let's look at how the JVM sizes its memory spaces by default:

-server -Xms4G -Xmx4G -Xss512K 
-XX:+PrintGCDetails 
-XX:+PrintGCDateStamps 
-XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/deploy_service_heap.log -verbose:gc -Xloggc:/deploy_service_gc.log

We use jvisualvm, which ships with JDK 1.8, to monitor the running process, as shown below:

TIPS: for the parallel collectors, the JVM enables adaptive memory sizing by default via the UseAdaptiveSizePolicy flag. Use -XX:-UseAdaptiveSizePolicy to turn it off.

Tuning the Survivor Spaces

Articles on the web typically say that -XX:SurvivorRatio defaults to 8, meaning eden:s0:s1 = 8:1:1. Should you change this ratio, and does changing it necessarily help? No. For example, changing it to 6 (6:2:2) or 4 (4:3:3) enlarges the survivor spaces while shrinking eden, and a smaller eden fills up sooner, increasing the frequency of young GCs. Each young GC moves the surviving objects into a survivor space, but in practice very few objects survive: my application ran at 200-500 QPS for 24 hours, yet the total volume of objects in the survivor space never exceeded 70 MB. Enlarging the survivor spaces would simply waste memory.
What is actually worth adjusting for the survivor spaces are the TargetSurvivorRatio and MaxTenuringThreshold parameters.
TargetSurvivorRatio: when survivor-space occupancy exceeds this target percentage, the JVM recomputes the tenuring threshold. If you see this happening in production, it is a sign that TargetSurvivorRatio needs adjusting. As shown below:

In the figure, the threshold is recomputed after every young GC (the default maximum is 15), which means objects in the survivor space are not all surviving 15 or more young GCs before entering the old generation.
TIPS: with the parallel collectors, -XX:-UseAdaptiveSizePolicy guarantees the young generation is sized exactly as you specify, while -XX:TargetSurvivorRatio=80 -XX:MaxTenuringThreshold=15 make the JVM recompute the threshold only once survivor occupancy reaches the target percentage. Most objects in the survivor space then live to age 15 before entering the old generation. The benefit: new objects can no longer pour straight into the old generation, fill it to 100%, and trigger a Full GC.
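Putting the pieces together, a flag set along the lines discussed above might look like this (a sketch only: the heap sizes are the ones used in this article, and you should size them for your own service):

```
-server -Xms4G -Xmx4G -Xss512K
-XX:-UseAdaptiveSizePolicy
-XX:TargetSurvivorRatio=80
-XX:MaxTenuringThreshold=15
-XX:+PrintGCDetails -XX:+PrintGCDateStamps
-XX:+PrintTenuringDistribution
-Xloggc:/deploy_service_gc.log
```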
With these adjustments in place, let's look at how the JVM behaves:

We can see that eden is now actually allocated 1.6 GB, the survivor space 204.5 MB, and objects in the survivor space enter the old generation only after their 16th young GC. This is exactly the effect we wanted!

As an aside, ParallelGC tuning goes far beyond this. There are also helper flags such as -XX:+PrintTenuringDistribution, which prints the detailed age distribution of survivor-space objects at each young GC.

The above are issues I discovered through real production monitoring, along with some points that are easy to overlook. Readers are welcome to share their own experience and discuss in the comments.

Deploying a Docker Trojan Client on a Server

My own server sits in a Beijing datacenter, which makes the occasional spur-of-the-moment pull from GitHub rather inconvenient. Trojan is a great and very usable tool; on the desktop I have always used the qv2ray client, but on the server side the Trojan client only provides a SOCKS proxy, and in some situations an HTTP proxy is more convenient. This post walks through installing a Docker-based Trojan client on a server that provides both SOCKS and HTTP proxies.

Create a data volume

docker volume create trojan_data

Deploy Trojan

docker run -itd \
 --name trojan \
 --hostname trojan \
 --restart=always \
 -e REMOTE_ADDR="your-server-address" \
 -e PASSWORD="your-password" \
 -v trojan_data:/trojan \
 -v /etc/localtime:/etc/localtime \
 -v /var/run/docker.sock:/var/run/docker.sock \
 --net=docker-network \
 --ip=172.19.0.8 \
 andyzhshg/trojan-privoxy:latest

Note the --net and --ip options: replace them with a network you created yourself, or simply delete those two lines. My server reboots from time to time, and since Docker guarantees neither container start order nor IP assignment order, I pin the network and IP manually.

Skipping certificate verification

{
    "run_type": "client",
    "local_addr": "127.0.0.1",
    "local_port": 1080,
    "remote_addr": "example.com",
    "remote_port": 443,
    "password": [
        "password1"
    ],
    "log_level": 1,
    "ssl": {
        "verify": false,
        "verify_hostname": false,
        "cert": "",
        "cipher": "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA:AES128-SHA:AES256-SHA:DES-CBC3-SHA",
        "cipher_tls13": "TLS_AES_128_GCM_SHA256:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_256_GCM_SHA384",
        "sni": "",
        "alpn": [
            "h2",
            "http/1.1"
        ],
        "reuse_session": true,
        "session_ticket": false,
        "curves": ""
    },
    "tcp": {
        "no_delay": true,
        "keep_alive": true,
        "reuse_port": false,
        "fast_open": false,
        "fast_open_qlen": 20
    }
}

The Docker Trojan client container verifies the server certificate by default. If, like me, you don't need verification, edit the configuration yourself (the verify and verify_hostname options shown above).

Configuration file

vim /var/lib/docker/volumes/trojan_data/_data/examples/client.json-example

Usage

Install proxychains4

yum -y install proxychains4

Edit the proxychains4 configuration

vim /etc/proxychains.conf

Add the proxy

socks5 172.19.0.8 1086

Replace the IP and port with your own container's.

Closing notes

This Docker Trojan client container was packaged by a developer on the internet; his GitHub address is attached at the end of this post. If the project interests you, consider giving it a star to show support.

References

https://github.com/andyzhshg/trojan-privoxy

Implementing LFU in Java

LFU (Least Frequently Used) is an eviction algorithm that discards data based on historical access frequency. Its core assumption: if a piece of data has been accessed only rarely in the recent past, it is unlikely to be accessed in the future.

A hand-rolled LFU implementation

package com.github.xuchengen.cache;

import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.Map;

public class LFUCache<K, V> {

    // key -> value
    private final Map<K, V> values = new HashMap<>();
    // key -> access count
    private final Map<K, Integer> counts = new HashMap<>();
    // access count -> keys with that count, in insertion order (oldest first)
    private final Map<Integer, LinkedHashSet<K>> buckets = new HashMap<>();
    private final int capacity;
    // smallest access count currently present, i.e. the eviction bucket
    private int minCount;

    public LFUCache(int capacity) {
        this.capacity = capacity;
    }

    public V get(K key) {
        if (!values.containsKey(key)) {
            return null;
        }
        touch(key);
        return values.get(key);
    }

    public void put(K key, V value) {
        if (capacity <= 0) {
            return;
        }
        if (values.containsKey(key)) {
            values.put(key, value);
            touch(key);
            return;
        }
        if (values.size() >= capacity) {
            // evict the least frequently used key; ties broken by age within the bucket
            K evict = buckets.get(minCount).iterator().next();
            buckets.get(minCount).remove(evict);
            values.remove(evict);
            counts.remove(evict);
        }
        values.put(key, value);
        counts.put(key, 1);
        buckets.computeIfAbsent(1, k -> new LinkedHashSet<>()).add(key);
        minCount = 1;
    }

    // move key from its current frequency bucket to the next one up
    private void touch(K key) {
        int count = counts.get(key);
        counts.put(key, count + 1);
        buckets.get(count).remove(key);
        if (count == minCount && buckets.get(count).isEmpty()) {
            minCount++;
        }
        buckets.computeIfAbsent(count + 1, k -> new LinkedHashSet<>()).add(key);
    }

    public static void main(String[] args) {
        LFUCache<Integer, Integer> cache = new LFUCache<>(2);
        cache.put(1, 1);
        cache.put(2, 2);
        System.out.println(cache.get(1)); // 1; key 1 now has access count 2
        cache.put(3, 3);                  // evicts key 2, the least frequently used
        System.out.println(cache.get(2)); // null
        System.out.println(cache.get(3)); // 3
    }
}

 

Implementing LRU in Java

LRU stands for Least Recently Used; it is a common page-replacement algorithm that evicts the page unused for the longest time. The algorithm gives each page an access field recording the time t since the page was last accessed; when a page must be evicted, the one with the largest t, i.e. the least recently used page, is chosen.

LRU based on LinkedHashMap

package com.github.xuchengen.cache;

import java.util.LinkedHashMap;
import java.util.Map;

public class SimpleLRUCache<K, V> extends LinkedHashMap<K, V> {

    private int capacity;

    public SimpleLRUCache(int capacity) {
        // accessOrder = true makes LinkedHashMap keep entries in access order
        super(16, 0.75F, true);
        this.capacity = capacity;
    }


    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return super.size() > capacity;
    }

    public static void main(String[] args) {
        SimpleLRUCache<Integer, Integer> cache = new SimpleLRUCache<>(2);
        cache.put(1, 1);
        cache.put(2, 2);
        System.out.println(cache.get(2));
        cache.put(3, 3);
        System.out.println(cache.get(1));
        System.out.println(cache.get(2));
        System.out.println(cache.get(3));
    }
}

A hand-rolled LRU implementation

package com.github.xuchengen.cache;

import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class LRUCache<K, V> {

    // sentinel head and tail nodes; head.next is always the most recently used entry
    private Entry<K, V> head;
    private Entry<K, V> tail;
    private Map<K, Entry<K, V>> cache;
    private int capacity;
    private int size;

    public LRUCache(int capacity) {
        this.capacity = capacity;
        this.size = 0;
        this.cache = new HashMap<>();
        this.head = new Entry<>();
        this.tail = new Entry<>();

        head.next = tail;
        tail.pre = head;
    }

    public V get(K key) {
        Entry<K, V> entry = cache.get(key);
        if (Objects.isNull(entry)) {
            return null;
        }

        moveToHead(entry);
        return entry.value;
    }

    public void put(K key, V value) {
        Entry<K, V> entry = cache.get(key);
        if (Objects.nonNull(entry)) {
            entry.value = value;
            cache.put(key, entry);
            moveToHead(entry);
            return;
        }

        if (capacity <= size) {
            Entry<K, V> lastEntry = tail.pre;
            cache.remove(lastEntry.key);
            remove(lastEntry);
            size--;
        }

        Entry<K, V> newEntry = new Entry<>(key, value);
        cache.put(key, newEntry);
        add(newEntry);
        size++;
    }

    private void moveToHead(Entry<K, V> entry) {
        remove(entry);
        add(entry);
    }

    private void add(Entry<K, V> entry) {
        head.next.pre = entry;
        entry.next = head.next;

        head.next = entry;
        entry.pre = head;
    }

    private void remove(Entry<K, V> entry) {
        entry.pre.next = entry.next;
        entry.next.pre = entry.pre;
    }

    private static class Entry<K, V> {
        private Entry<K, V> pre;
        private Entry<K, V> next;
        private K key;
        private V value;

        public Entry() {
        }

        public Entry(K key, V value) {
            this.key = key;
            this.value = value;
        }
    }

    public static void main(String[] args) {
        LRUCache<Integer, Integer> cache = new LRUCache<>(2);
        cache.put(1, 1);
        cache.put(2, 2);
        System.out.println(cache.get(2));
        cache.put(3, 3);
        System.out.println(cache.get(1));
        System.out.println(cache.get(2));
        System.out.println(cache.get(3));
    }
}

 

A Script to Convert Office 2019 Enhanced Edition to VOL and Activate It

After my MacBook Pro died I rejoined the Windows camp. Windows feels sturdy and has a rich software ecosystem, and Windows 10 is quite developer-friendly too; failing that, there is always a virtual machine. Here I'm sharing a script that converts a retail install of Office 2019 Enhanced Edition to VOL and activates it.

Office 2019 retail-to-VOL conversion and activation script

:: Fix garbled console output
CHCP 65001
title Convert Office 2019 retail to VOL
:: Locate the installation directory
if exist "%ProgramFiles%\Microsoft Office\Office16\ospp.vbs" cd /d "%ProgramFiles%\Microsoft Office\Office16"
if exist "%ProgramFiles(x86)%\Microsoft Office\Office16\ospp.vbs" cd /d "%ProgramFiles(x86)%\Microsoft Office\Office16"

cls

echo Resetting Office 2019 retail activation...

cscript ospp.vbs /rearm

echo Installing KMS licenses...
for /f %%x in ('dir /b ..\root\Licenses16\ProPlus2019VL_kms*.xrm-ms') do cscript ospp.vbs /inslic:"..\root\Licenses16\%%x" >nul

echo Installing MAK licenses...
for /f %%x in ('dir /b ..\root\Licenses16\ProPlus2019VL_mak*.xrm-ms') do cscript ospp.vbs /inslic:"..\root\Licenses16\%%x" >nul

echo Installing the KMS key...
cscript ospp.vbs /inpkey:NMMKJ-6RK4F-KMJVX-8D9MJ-6MWKP

echo Setting the KMS server...
cscript ospp.vbs /sethst:kms.03k.org

echo Contacting the KMS server...
cscript ospp.vbs /act

echo Conversion complete, press any key to exit!

pause >nul

exit

The script uses a third-party activation server, kms.03k.org. If that server cannot activate your Windows or Office, you can also use the activation server I run at xuchengen.cn.

Troubleshooting 100% CPU Spikes in a Production Java Project

Steps for troubleshooting a production Java process whose CPU usage has spiked to 100%:

# Use top to find the process ID
top

# Use top again to find the busiest thread IDs inside the process
top -H -p {pid}

# Convert the thread ID to hexadecimal for the later search
printf '%x' {tid}

# Dump all threads with jstack
jstack {pid} > {project-name_pid}.txt

# Finally, search the dump for the hex thread ID (nid=0x...) and analyze from there
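The thread-ID conversion step is easy to get wrong, so here is a quick self-contained check (12345 is just an example thread ID, not one taken from a real dump):

```shell
# convert an example decimal thread ID to the hex form used in jstack output
printf '%x\n' 12345
# in the jstack dump, the matching thread line contains nid=0x3039
```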

 

Installing JDK 6 on Recent macOS Versions

Some of our company's business systems still run on JDK 6, hence what follows.

Apple's official JDK 6 download: https://support.apple.com/kb/dl1572?locale=zh_CN

Recent versions of macOS no longer allow JDK 6 to be installed by running the dmg directly. Run the script below with Script Editor (AppleScript), then double-click the Java.pkg it places on your desktop to install.

set theDMG to choose file with prompt "Please select javaforosx.dmg:" of type {"dmg"}
do shell script "hdiutil mount " & quoted form of POSIX path of theDMG
do shell script "pkgutil --expand /Volumes/Java\\ for\\ macOS\\ 2017-001/JavaForOSX.pkg ~/tmp"
do shell script "hdiutil unmount /Volumes/Java\\ for\\ macOS\\ 2017-001/"
do shell script "sed -i '' 's/return false/return true/g' ~/tmp/Distribution"
do shell script "pkgutil --flatten ~/tmp ~/Desktop/Java.pkg"
do shell script "rm -rf ~/tmp"
display dialog "Modified Java.pkg saved on desktop" buttons {"Ok"}

 

Deploying Redis with Docker

Quickly deploy a Redis instance with Docker.

Create a macvlan network

A macvlan network puts containers on the same subnet as the host, so any host on that subnet can reach them directly.

docker network create -d macvlan --subnet=192.168.210.0/24 --gateway=192.168.210.254 -o parent=eth0 docker_macvlan

Create the Redis instance

docker run -itd \
 --name redis \
 --hostname redis \
 -v $PWD/redis/data:/data \
 -v $PWD/redis/conf:/etc/redis \
 -v /etc/localtime:/etc/localtime \
 --net docker_macvlan \
 --ip 192.168.210.40 \
 --privileged=true \
 redis:latest /bin/bash -c "echo 65535 > /proc/sys/net/core/somaxconn; redis-server /etc/redis/redis.conf"

Download a Redis configuration file

wget https://raw.githubusercontent.com/antirez/redis/4.0/redis.conf -O /var/lib/docker/volumes/redis/conf/redis.conf

Adjust the configuration to your needs, then restart the Redis instance (e.g. `docker restart redis`).

Deploying MySQL with Docker

Quickly deploy a MySQL instance with Docker.

Create a macvlan network

A macvlan network puts containers on the same subnet as the host, so any host on that subnet can reach them directly.

docker network create -d macvlan --subnet=192.168.210.0/24 --gateway=192.168.210.254 -o parent=eth0 docker_macvlan

Create the MySQL instance

docker run -itd \
 --name mysql \
 --hostname mysql \
 -v $PWD/mysql/conf:/etc/mysql/conf.d \
 -v $PWD/mysql/data:/var/lib/mysql \
 -e TZ=Asia/Shanghai \
 -e MYSQL_ROOT_PASSWORD=123456 \
 --net docker_macvlan \
 --ip 192.168.210.40 \
 mysql:5.7 \
 --character-set-server=utf8mb4 \
 --collation-server=utf8mb4_unicode_ci

 

Read/Write Splitting with ShardingSphere-JDBC

A sample project demonstrating read/write splitting with ShardingSphere-JDBC, designed to get you familiar with the framework quickly.

GitHub address

Project stack

  • Spring Boot
  • Mybatis
  • tkMapper
  • PageHelper
  • HikariCP
  • MySQL
  • knife4j
  • ShardingSphere-JDBC

Setting up the infrastructure

One MySQL master node and one slave node, both running in Docker containers on a CentOS 7 server.

Install Docker

yum -y install docker

Enable and start the Docker service

systemctl enable docker.service 
systemctl start docker.service

Create a Docker macvlan network

docker network create -d macvlan --subnet=<LAN subnet> --gateway=<gateway address> -o parent=<physical NIC name> <a nice name for the network>

For example:

docker network create -d macvlan --subnet=192.168.0.0/24 --gateway=192.168.0.1 -o parent=eth0 macvlan_net

Deploy the MySQL master node

$PWD expands to your shell's current directory, so cd into Docker's volumes directory before deploying the master node.

cd /var/lib/docker/volumes
docker run -itd \
 --name mysql-01 \
 --hostname mysql-01 \
 -v $PWD/mysql_01/conf:/etc/mysql/conf.d \
 -v $PWD/mysql_01/data:/var/lib/mysql \
 -e TZ=Asia/Shanghai \
 -e MYSQL_ROOT_PASSWORD=123456 \
 --net macvlan_net \
 --ip 192.168.0.40 \
 mysql:5.7 \
 --character-set-server=utf8mb4 \
 --collation-server=utf8mb4_unicode_ci

Configure the MySQL master node

Go into the $PWD/mysql_01/conf directory and create or edit my.cnf.

vim /var/lib/docker/volumes/mysql_01/conf/my.cnf
[mysqld]
server-id=1
log-bin=master-bin
binlog-format=row
binlog-ignore-db=information_schema
binlog-ignore-db=mysql
binlog-ignore-db=performance_schema
binlog-ignore-db=sys
expire_logs_days=7

Deploy the MySQL slave node

$PWD expands to your shell's current directory, so cd into Docker's volumes directory before deploying the slave node.

cd /var/lib/docker/volumes
docker run -itd \
 --name mysql-02 \
 --hostname mysql-02 \
 -v $PWD/mysql_02/conf:/etc/mysql/conf.d \
 -v $PWD/mysql_02/data:/var/lib/mysql \
 -e TZ=Asia/Shanghai \
 -e MYSQL_ROOT_PASSWORD=123456 \
 --net macvlan_net \
 --ip 192.168.0.41 \
 mysql:5.7 \
 --character-set-server=utf8mb4 \
 --collation-server=utf8mb4_unicode_ci

Configure the MySQL slave node

Go into the $PWD/mysql_02/conf directory and create or edit my.cnf.

vim /var/lib/docker/volumes/mysql_02/conf/my.cnf
[mysqld]
server-id=2
read-only=1

Configure master-slave replication

Connect to the master node with the mysql command-line client

mysql -uroot -p123456 -h'192.168.0.40'

Check the master node's status

show master status;

Master status output

mysql> show master status;
+-------------------+----------+--------------+-------------------------------------------------+-------------------+
| File              | Position | Binlog_Do_DB | Binlog_Ignore_DB                                | Executed_Gtid_Set |
+-------------------+----------+--------------+-------------------------------------------------+-------------------+
| master-bin.000001 |   154    |              | information_schema,mysql,performance_schema,sys |                   |
+-------------------+----------+--------------+-------------------------------------------------+-------------------+
1 row in set (0.00 sec)

Connect to the slave node with the mysql command-line client

mysql -uroot -p123456 -h'192.168.0.41'

Replication setup command

  • master_log_file: the File column from the master status output.
  • master_log_pos: the Position column from the master status output.
change master to
master_host='192.168.0.40',
master_user='root',
master_password='123456',
master_log_file='master-bin.000001',
master_log_pos=154;

Start replication

start slave;

Check the slave node's status

show slave status \G

Slave status output

mysql> show slave status \G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.0.40
                  Master_User: root
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: master-bin.000001
          Read_Master_Log_Pos: 116304
               Relay_Log_File: mysql-02-relay-bin.000002
                Relay_Log_Pos: 113214
        Relay_Master_Log_File: master-bin.000001
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB: 
          Replicate_Ignore_DB: 
           Replicate_Do_Table: 
       Replicate_Ignore_Table: 
      Replicate_Wild_Do_Table: 
  Replicate_Wild_Ignore_Table: 
                   Last_Errno: 0
                   Last_Error: 
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 116304
              Relay_Log_Space: 113424
              Until_Condition: None
               Until_Log_File: 
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File: 
           Master_SSL_CA_Path: 
              Master_SSL_Cert: 
            Master_SSL_Cipher: 
               Master_SSL_Key: 
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error: 
               Last_SQL_Errno: 0
               Last_SQL_Error: 
  Replicate_Ignore_Server_Ids: 
             Master_Server_Id: 1
                  Master_UUID: 36902060-410b-11ec-b5e5-0242c0a8d228
             Master_Info_File: /var/lib/mysql/master.info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
           Master_Retry_Count: 86400
                  Master_Bind: 
      Last_IO_Error_Timestamp: 
     Last_SQL_Error_Timestamp: 
               Master_SSL_Crl: 
           Master_SSL_Crlpath: 
           Retrieved_Gtid_Set: 
            Executed_Gtid_Set: 
                Auto_Position: 0
         Replicate_Rewrite_DB: 
                 Channel_Name: 
           Master_TLS_Version: 
1 row in set (0.00 sec)

On the primary-key write-back error in the ShardingSphere-JDBC framework

The project's data access layer uses the tkMapper framework, with the @GeneratedValue(generator = "JDBC") annotation on the auto-increment primary key field. Calling tkMapper's built-in insertSelective method then throws a NullPointerException.

Exception stack

java.lang.NullPointerException: ResultSet should call next or has no more data.
    at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:897) ~[guava-29.0-jre.jar:na]
    at org.apache.shardingsphere.driver.jdbc.core.resultset.GeneratedKeysResultSet.checkStateForGetData(GeneratedKeysResultSet.java:243) ~[shardingsphere-jdbc-core-5.0.0.jar:5.0.0]
    at org.apache.shardingsphere.driver.jdbc.core.resultset.GeneratedKeysResultSet.getLong(GeneratedKeysResultSet.java:142) ~[shardingsphere-jdbc-core-5.0.0.jar:5.0.0]
    at org.apache.ibatis.type.LongTypeHandler.getNullableResult(LongTypeHandler.java:44) ~[mybatis-3.5.7.jar:3.5.7]
    at org.apache.ibatis.type.LongTypeHandler.getNullableResult(LongTypeHandler.java:26) ~[mybatis-3.5.7.jar:3.5.7]
    at org.apache.ibatis.type.BaseTypeHandler.getResult(BaseTypeHandler.java:94) ~[mybatis-3.5.7.jar:3.5.7]
    at org.apache.ibatis.executor.keygen.Jdbc3KeyGenerator$KeyAssigner.assign(Jdbc3KeyGenerator.java:270) ~[mybatis-3.5.7.jar:3.5.7]
    at org.apache.ibatis.executor.keygen.Jdbc3KeyGenerator.lambda$assignKeysToParam$0(Jdbc3KeyGenerator.java:124) ~[mybatis-3.5.7.jar:3.5.7]
    at java.util.ArrayList.forEach(ArrayList.java:1259) ~[na:1.8.0_301]
    at org.apache.ibatis.executor.keygen.Jdbc3KeyGenerator.assignKeysToParam(Jdbc3KeyGenerator.java:124) ~[mybatis-3.5.7.jar:3.5.7]
    at org.apache.ibatis.executor.keygen.Jdbc3KeyGenerator.assignKeys(Jdbc3KeyGenerator.java:104) ~[mybatis-3.5.7.jar:3.5.7]
    at org.apache.ibatis.executor.keygen.Jdbc3KeyGenerator.processBatch(Jdbc3KeyGenerator.java:85) ~[mybatis-3.5.7.jar:3.5.7]
    at org.apache.ibatis.executor.keygen.Jdbc3KeyGenerator.processAfter(Jdbc3KeyGenerator.java:71) ~[mybatis-3.5.7.jar:3.5.7]
    at org.apache.ibatis.executor.statement.PreparedStatementHandler.update(PreparedStatementHandler.java:51) ~[mybatis-3.5.7.jar:3.5.7]
    at org.apache.ibatis.executor.statement.RoutingStatementHandler.update(RoutingStatementHandler.java:74) ~[mybatis-3.5.7.jar:3.5.7]
    at org.apache.ibatis.executor.SimpleExecutor.doUpdate(SimpleExecutor.java:50) ~[mybatis-3.5.7.jar:3.5.7]
    at org.apache.ibatis.executor.BaseExecutor.update(BaseExecutor.java:117) ~[mybatis-3.5.7.jar:3.5.7]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_301]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_301]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_301]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_301]
    at org.apache.ibatis.plugin.Plugin.invoke(Plugin.java:64) ~[mybatis-3.5.7.jar:3.5.7]
    at com.sun.proxy.$Proxy278.update(Unknown Source) ~[na:na]
    at org.apache.ibatis.session.defaults.DefaultSqlSession.update(DefaultSqlSession.java:194) ~[mybatis-3.5.7.jar:3.5.7]
    at org.apache.ibatis.session.defaults.DefaultSqlSession.insert(DefaultSqlSession.java:181) ~[mybatis-3.5.7.jar:3.5.7]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_301]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_301]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_301]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_301]
    at org.mybatis.spring.SqlSessionTemplate$SqlSessionInterceptor.invoke(SqlSessionTemplate.java:427) ~[mybatis-spring-2.0.6.jar:2.0.6]
    at com.sun.proxy.$Proxy230.insert(Unknown Source) ~[na:na]
    at org.mybatis.spring.SqlSessionTemplate.insert(SqlSessionTemplate.java:272) ~[mybatis-spring-2.0.6.jar:2.0.6]
    at org.apache.ibatis.binding.MapperMethod.execute(MapperMethod.java:62) ~[mybatis-3.5.7.jar:3.5.7]
    at org.apache.ibatis.binding.MapperProxy$PlainMethodInvoker.invoke(MapperProxy.java:145) ~[mybatis-3.5.7.jar:3.5.7]
    at org.apache.ibatis.binding.MapperProxy.invoke(MapperProxy.java:86) ~[mybatis-3.5.7.jar:3.5.7]
    at com.sun.proxy.$Proxy236.insertSelective(Unknown Source) ~[na:na]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_301]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_301]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_301]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_301]
    at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:344) ~[spring-aop-5.2.5.RELEASE.jar:5.2.5.RELEASE]
    at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:198) ~[spring-aop-5.2.5.RELEASE.jar:5.2.5.RELEASE]
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) ~[spring-aop-5.2.5.RELEASE.jar:5.2.5.RELEASE]
    at org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:139) ~[spring-tx-5.2.5.RELEASE.jar:5.2.5.RELEASE]
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) ~[spring-aop-5.2.5.RELEASE.jar:5.2.5.RELEASE]
    at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212) ~[spring-aop-5.2.5.RELEASE.jar:5.2.5.RELEASE]
    at com.sun.proxy.$Proxy238.insertSelective(Unknown Source) ~[na:na]
    at com.github.xuchengen.rws.biz.UserService.createUser(UserService.java:19) ~[classes/:na]
    at com.github.xuchengen.rws.biz.UserService$$FastClassBySpringCGLIB$$49f8d808.invoke(<generated>) ~[classes/:na]
    at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218) ~[spring-core-5.2.5.RELEASE.jar:5.2.5.RELEASE]
    at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:687) ~[spring-aop-5.2.5.RELEASE.jar:5.2.5.RELEASE]
    at com.github.xuchengen.rws.biz.UserService$$EnhancerBySpringCGLIB$$488bcf4f.createUser(<generated>) ~[classes/:na]
    at com.github.xuchengen.rws.web.UserController.createUser(UserController.java:43) ~[classes/:na]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_301]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_301]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_301]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_301]
    at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:190) ~[spring-web-5.2.5.RELEASE.jar:5.2.5.RELEASE]
    at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:138) ~[spring-web-5.2.5.RELEASE.jar:5.2.5.RELEASE]
    at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:105) ~[spring-webmvc-5.2.5.RELEASE.jar:5.2.5.RELEASE]
    at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:879) ~[spring-webmvc-5.2.5.RELEASE.jar:5.2.5.RELEASE]
    at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:793) ~[spring-webmvc-5.2.5.RELEASE.jar:5.2.5.RELEASE]
    at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) ~[spring-webmvc-5.2.5.RELEASE.jar:5.2.5.RELEASE]
    at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1040) ~[spring-webmvc-5.2.5.RELEASE.jar:5.2.5.RELEASE]
    at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:943) ~[spring-webmvc-5.2.5.RELEASE.jar:5.2.5.RELEASE]
    at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006) ~[spring-webmvc-5.2.5.RELEASE.jar:5.2.5.RELEASE]
    at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:909) ~[spring-webmvc-5.2.5.RELEASE.jar:5.2.5.RELEASE]
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:660) ~[tomcat-embed-core-9.0.33.jar:9.0.33]
    at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883) ~[spring-webmvc-5.2.5.RELEASE.jar:5.2.5.RELEASE]
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:741) ~[tomcat-embed-core-9.0.33.jar:9.0.33]
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) ~[tomcat-embed-core-9.0.33.jar:9.0.33]
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.33.jar:9.0.33]
    at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53) ~[tomcat-embed-websocket-9.0.33.jar:9.0.33]
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-9.0.33.jar:9.0.33]
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.33.jar:9.0.33]
    at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) ~[spring-web-5.2.5.RELEASE.jar:5.2.5.RELEASE]
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) ~[spring-web-5.2.5.RELEASE.jar:5.2.5.RELEASE]
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-9.0.33.jar:9.0.33]
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.33.jar:9.0.33]
    at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) ~[spring-web-5.2.5.RELEASE.jar:5.2.5.RELEASE]
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) ~[spring-web-5.2.5.RELEASE.jar:5.2.5.RELEASE]
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-9.0.33.jar:9.0.33]
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.33.jar:9.0.33]
    at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) ~[spring-web-5.2.5.RELEASE.jar:5.2.5.RELEASE]
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) ~[spring-web-5.2.5.RELEASE.jar:5.2.5.RELEASE]
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-9.0.33.jar:9.0.33]
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.33.jar:9.0.33]
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:202) ~[tomcat-embed-core-9.0.33.jar:9.0.33]
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96) [tomcat-embed-core-9.0.33.jar:9.0.33]
    at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:541) [tomcat-embed-core-9.0.33.jar:9.0.33]
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139) [tomcat-embed-core-9.0.33.jar:9.0.33]
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92) [tomcat-embed-core-9.0.33.jar:9.0.33]
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) [tomcat-embed-core-9.0.33.jar:9.0.33]
    at org.apache.catalina.valves.RemoteIpValve.invoke(RemoteIpValve.java:747) [tomcat-embed-core-9.0.33.jar:9.0.33]
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343) [tomcat-embed-core-9.0.33.jar:9.0.33]
    at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:373) [tomcat-embed-core-9.0.33.jar:9.0.33]
    at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65) [tomcat-embed-core-9.0.33.jar:9.0.33]
    at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:868) [tomcat-embed-core-9.0.33.jar:9.0.33]
    at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1594) [tomcat-embed-core-9.0.33.jar:9.0.33]
    at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) [tomcat-embed-core-9.0.33.jar:9.0.33]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_301]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_301]
    at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) [tomcat-embed-core-9.0.33.jar:9.0.33]
    at java.lang.Thread.run(Thread.java:748) [na:1.8.0_301]

Analysis

The SQL generated when calling tkMapper's built-in insertSelective method:

INSERT INTO `t_user` ( `id`,`name`,`phone` ) VALUES( ?,?,? )

The parameters bound to the PreparedStatement:

Parameters: null, 凡尔赛(String), 17811111113(String)

tkMapper's official description of insertSelective:

Saves an entity; null properties are not saved, the database default is used instead, and null is used where there is no default.

This is exactly the problem: the auto-increment key's null is passed along as a bound parameter. When ShardingSphere processes the statement at the lower level, it stashes that null as the inserted primary-key value, then treats it as a write-back candidate during the key write-back logic, which ultimately causes the exception.
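One pragmatic workaround, offered here as an assumption rather than a fix confirmed by either project, is to keep the auto-increment column out of the generated INSERT entirely, for example with a hand-written MyBatis statement instead of insertSelective (the statement id `insertUser` and the column names are illustrative):

```xml
<!-- Hypothetical mapper statement: the id column is omitted, so no null
     parameter ever reaches ShardingSphere's generated-key bookkeeping. -->
<insert id="insertUser" useGeneratedKeys="true" keyProperty="id">
    INSERT INTO t_user (`name`, `phone`) VALUES (#{name}, #{phone})
</insert>
```

Alternatively, letting ShardingSphere generate the key itself (dropping the @GeneratedValue annotation and configuring a key generate algorithm) avoids binding the null in the first place.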

The methods to examine

public InsertStatementContext(final Map<String, ShardingSphereMetaData> metaDataMap, final List<Object> parameters, final InsertStatement sqlStatement, final String defaultSchemaName) {
    super(sqlStatement);
    AtomicInteger parametersOffset = new AtomicInteger(0);
    insertValueContexts = getInsertValueContexts(parameters, parametersOffset);
    insertSelectContext = getInsertSelectContext(metaDataMap, parameters, parametersOffset, defaultSchemaName).orElse(null);
    onDuplicateKeyUpdateValueContext = getOnDuplicateKeyUpdateValueContext(parameters, parametersOffset).orElse(null);
    tablesContext = new TablesContext(getAllSimpleTableSegments());
    ShardingSphereSchema schema = getSchema(metaDataMap, defaultSchemaName);
    List<String> insertColumnNames = getInsertColumnNames();
    columnNames = useDefaultColumns() ? schema.getAllColumnNames(sqlStatement.getTable().getTableName().getIdentifier().getValue()) : insertColumnNames;
    generatedKeyContext = new GeneratedKeyContextEngine(sqlStatement, schema).createGenerateKeyContext(insertColumnNames, getAllValueExpressions(sqlStatement), parameters).orElse(null);
    this.schemaName = defaultSchemaName;
}

public Optional<GeneratedKeyContext> createGenerateKeyContext(final List<String> insertColumnNames, final List<List<ExpressionSegment>> valueExpressions, final List<Object> parameters) {
    String tableName = insertStatement.getTable().getTableName().getIdentifier().getValue();
    return findGenerateKeyColumn(tableName).map(optional -> containsGenerateKey(insertColumnNames, optional)
            ? findGeneratedKey(insertColumnNames, valueExpressions, parameters, optional) : new GeneratedKeyContext(optional, true));
}

private GeneratedKeyContext findGeneratedKey(final List<String> insertColumnNames, final List<List<ExpressionSegment>> valueExpressions,
                                             final List<Object> parameters, final String generateKeyColumnName) {
    GeneratedKeyContext result = new GeneratedKeyContext(generateKeyColumnName, false);
    for (ExpressionSegment each : findGenerateKeyExpressions(insertColumnNames, valueExpressions, generateKeyColumnName)) {
        if (each instanceof ParameterMarkerExpressionSegment) {
            // a bound parameter (here: the null id) is stashed as a "generated" key value
            result.getGeneratedValues().add((Comparable<?>) parameters.get(((ParameterMarkerExpressionSegment) each).getParameterMarkerIndex()));
        } else if (each instanceof LiteralExpressionSegment) {
            result.getGeneratedValues().add((Comparable<?>) ((LiteralExpressionSegment) each).getLiterals());
        }
    }
    return result;
}