  • Thread Pool Monitoring
    2020-11-26 19:40:34

    Thread Pools - Thread Pool Monitoring

    When something goes wrong while using a thread pool, you need to locate and handle it, so a simple thread pool monitor is worth having; at the very least it lets you inspect the pool's state once you suspect the pool is the cause of an exception.

    I have run into this kind of problem in a real environment: notifications got blocked, business processing was delayed, and some notifications were never sent at all. These are my study notes on the topic.


    Contents

    Thread Pools - Thread Pool Monitoring

    1. Thread pool monitoring parameters

    2. Code

    Related notes


    1. Thread pool monitoring parameters

    1. activeCount   the number of threads currently executing tasks

    2. poolSize   the current number of threads in the pool

    3. queueSize   the number of tasks still waiting to be executed

    4. largestPoolSize   the largest number of threads the pool has ever had; it tells you whether the pool was ever full, i.e. whether it reached maximumPoolSize

    5. corePoolSize   the pool's core thread count

    6. completedTaskCount   the number of tasks the pool has completed; always less than or equal to taskCount

    7. maximumPoolSize   the pool's maximum thread count

    8. taskCount   the total number of tasks, both executed and not yet executed


    2. Code

     

    package com.yuantiaokj.controller.internal;
    
    import com.yuantiaokj.commonmodule.base.SysRes;
    import io.swagger.annotations.Api;
    import io.swagger.annotations.ApiOperation;
    import lombok.extern.slf4j.Slf4j;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;
    import org.springframework.web.bind.annotation.PostMapping;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.ResponseBody;
    import org.springframework.web.bind.annotation.RestController;
    
    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.Executor;
    import java.util.concurrent.ThreadPoolExecutor;
    
    /**
     * ************************************************************
     * Copyright © 2020 cnzz Inc. All rights reserved.
     * ************************************************************
     *
     * @program: financial_eco-environment_cloud
     * @description: thread pool monitoring
     * @author: cnzz
     * @create: 2020-11-25 08:55
     **/
    @RestController
    @Slf4j
    @Api(tags = "Thread pool monitoring")
    @RequestMapping("/TaskExecutePoolMonitorController")
    public class TaskExecutePoolMonitorController {
    
        @Autowired
        private Executor taskExecutePoolCnzz;
    
        @ResponseBody
        @PostMapping("/taskExecutePoolMonitor")
        @ApiOperation("Thread pool monitoring")
        public SysRes taskExecutePoolMonitor() {
            log.info("TaskExecutePoolMonitor|thread pool monitoring");
            ThreadPoolTaskExecutor threadTask = (ThreadPoolTaskExecutor) taskExecutePoolCnzz;
            ThreadPoolExecutor executor = threadTask.getThreadPoolExecutor();
    
            Map<String, Object> resultMap = getStringObjectMap(executor);
    
            return SysRes.success(resultMap);
        }
    
    
    
    
        private Map<String, Object> getStringObjectMap(ThreadPoolExecutor executor) {
            Map<String,Object> resultMap=new HashMap<>();
    
            //number of threads currently executing tasks
            int activeCount = executor.getActiveCount();
            resultMap.put("activeCount",activeCount);
            log.info("{}|TaskExecutePoolMonitor|threads currently executing tasks", activeCount);
    
            //number of completed tasks; always less than or equal to taskCount
            long completedTaskCount = executor.getCompletedTaskCount();
            resultMap.put("completedTaskCount",completedTaskCount);
            log.info("{}|TaskExecutePoolMonitor|completed task count (<= taskCount)", completedTaskCount);
    
            //core thread count
            int corePoolSize = executor.getCorePoolSize();
            resultMap.put("corePoolSize",corePoolSize);
            log.info("{}|TaskExecutePoolMonitor|core pool size", corePoolSize);
    
            //largest number of threads the pool has ever had; tells you whether it ever reached maximumPoolSize
            int largestPoolSize = executor.getLargestPoolSize();
            resultMap.put("largestPoolSize",largestPoolSize);
            log.info("{}|TaskExecutePoolMonitor|largest pool size ever reached", largestPoolSize);
    
            //maximum thread count
            int maximumPoolSize = executor.getMaximumPoolSize();
            resultMap.put("maximumPoolSize",maximumPoolSize);
            log.info("{}|TaskExecutePoolMonitor|maximum pool size", maximumPoolSize);
    
            //current thread count
            int poolSize = executor.getPoolSize();
            resultMap.put("poolSize",poolSize);
            log.info("{}|TaskExecutePoolMonitor|current pool size", poolSize);
    
            //total number of tasks, executed and not yet executed
            long taskCount = executor.getTaskCount();
            resultMap.put("taskCount",taskCount);
            log.info("{}|TaskExecutePoolMonitor|total task count (executed and pending)", taskCount);
    
            //number of tasks still waiting to be executed
            int queueSize = executor.getQueue().size();
            resultMap.put("queueSize",queueSize);
            log.info("{}|TaskExecutePoolMonitor|tasks waiting in the queue", queueSize);
            return resultMap;
        }
    }

    Related notes

    Custom thread pools     https://blog.csdn.net/xinpz/article/details/110139747

    Thread pool parameters and configuration    https://blog.csdn.net/xinpz/article/details/110132365

  • Thread Pool Monitoring Approaches

    2021-04-20 19:34:06
    How do you monitor a thread pool? This article walks through a few approaches.


    After reading "Java线程池实现原理及其在美团业务中的实践" (Meituan's article on thread pool implementation principles and practice), I asked myself: if I had to build the monitoring for these thread pools, how would I do it?

    Before monitoring a thread pool, we first need to be clear about what we are monitoring it for.

    Monitoring is about prevention: stopping production incidents before they happen, or at least stepping in before a problem fully develops.

    Symptoms of a thread pool in trouble:

    • Asynchronous processing consumes tasks too slowly, so tasks pile up and responses slow down, or the queue is bounded and submissions get rejected;

    • When the pool is used for parallel requests, a spike in traffic creates a backlog and responses slow down;

    • Inaccurate capacity estimates lead to unreasonable thread pool settings;

    Metrics worth monitoring for a thread pool (a small sketch follows the list):

    1. Queue saturation;

    2. Whether the task submission rate per unit of time far outpaces the consumption rate;
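    As a rough illustration of these two metrics, here is a minimal sketch (the class and method names are my own, not from this article) that samples a ThreadPoolExecutor periodically and derives queue saturation plus a submit-vs-consume comparison. getTaskCount() and getCompletedTaskCount() are approximations, so treat the results as indicative.

    import java.util.concurrent.ThreadPoolExecutor;
    
    public class PoolMetricsSampler {
        private long lastTaskCount;       // tasks ever scheduled, at the previous sample
        private long lastCompletedCount;  // tasks completed, at the previous sample
    
        /** Queue usage in [0,1]; capacity is the queue's configured bound. */
        public double queueSaturation(ThreadPoolExecutor pool, int capacity) {
            return capacity == 0 ? 0.0 : (double) pool.getQueue().size() / capacity;
        }
    
        /** True if, since the last sample, tasks were submitted much faster than they were consumed. */
        public boolean submissionOutpacesConsumption(ThreadPoolExecutor pool, double factor) {
            long nowTask = pool.getTaskCount();
            long nowDone = pool.getCompletedTaskCount();
            long submitted = nowTask - lastTaskCount;
            long consumed = nowDone - lastCompletedCount;
            lastTaskCount = nowTask;
            lastCompletedCount = nowDone;
            return submitted > consumed * factor;
        }
    }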

    Monitoring approaches:

    Approach 1: extend ThreadPoolExecutor and override some of its methods

    /**
     * A thread pool that can be monitored
     * @author yxkong
     * @version 1.0
     * @date 2021/3/22 13:29
     */
    public class ThreadPoolExecutorMonitor  extends ThreadPoolExecutor {
    
        public ThreadPoolExecutorMonitor(int corePoolSize, int maximumPoolSize, long keepAliveTime, TimeUnit unit, BlockingQueue<Runnable> workQueue) {
            super(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue);
        }
    
        public ThreadPoolExecutorMonitor(int corePoolSize, int maximumPoolSize, long keepAliveTime, TimeUnit unit, BlockingQueue<Runnable> workQueue, ThreadFactory threadFactory) {
            super(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue, threadFactory);
        }
    
        public ThreadPoolExecutorMonitor(int corePoolSize, int maximumPoolSize, long keepAliveTime, TimeUnit unit, BlockingQueue<Runnable> workQueue, RejectedExecutionHandler handler) {
            super(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue, handler);
        }
    
        public ThreadPoolExecutorMonitor(int corePoolSize, int maximumPoolSize, long keepAliveTime, TimeUnit unit, BlockingQueue<Runnable> workQueue, ThreadFactory threadFactory, RejectedExecutionHandler handler) {
            super(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue, threadFactory, handler);
        }
    
        @Override
        public void shutdown() {
            //number of completed tasks
            this.getCompletedTaskCount();
            //number of threads currently running tasks
            this.getActiveCount();
            //total number of tasks
            this.getTaskCount();
            //number of tasks remaining in the queue
            this.getQueue().size();
            super.shutdown();
        }
    
        @Override
        public List<Runnable> shutdownNow() {
            return super.shutdownNow();
        }
    
        @Override
        protected void beforeExecute(Thread t, Runnable r) {
            super.beforeExecute(t, r);
        }
    
        @Override
        protected void afterExecute(Runnable r, Throwable t) {
            super.afterExecute(r, t);
            if (t == null && r instanceof Future<?>) {
                try {
                    //get the task's execution result
                    Object result = ((Future<?>) r).get();
                } catch (CancellationException ce) {
                    t = ce;
                } catch (ExecutionException ee) {
                    t = ee.getCause();
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt(); // ignore/reset
                }
            }
            if (t != null) {
                //handle the exception
                System.out.println(t);
            }
            //record the task's execution time here
        }
    }
    

    Approach 2: customize the ThreadFactory, BlockingQueue, and RejectedExecutionHandler

    • ThreadFactory: exists to name threads, which makes pools easier to manage uniformly;
    • BlockingQueue: exists so that the queue length can be adjusted dynamically (resizing an array-backed queue means worrying about locking and performance; a linked queue does not);
    • RejectedExecutionHandler: decides what happens when the queue is full (it could grow the queue dynamically, but take care not to blow up the JVM or fail to create the queue)
    public class NamedThreadFactory implements ThreadFactory, Serializable {
        private static final AtomicInteger poolNumber = new AtomicInteger(1);
        private final ThreadGroup group;
        private final AtomicInteger threadNumber = new AtomicInteger(1);
        private final String namePrefix;
    
        public NamedThreadFactory(String name) {
            SecurityManager s = System.getSecurityManager();
            group = (s != null) ? s.getThreadGroup() :
                    Thread.currentThread().getThreadGroup();
            namePrefix = name +poolNumber.getAndIncrement() +"-thread-";
        }
    
        @Override
        public Thread newThread(Runnable r) {
            Thread t = new Thread(group, r,namePrefix + threadNumber.getAndIncrement(), 0);
            if (t.isDaemon()){
                t.setDaemon(false);
            }
            if (t.getPriority() != Thread.NORM_PRIORITY){
                t.setPriority(Thread.NORM_PRIORITY);
            }
            return t;
        }
    }
    //A custom LinkedBlockingQueue that exposes the queue capacity so it can be modified
    public class CustomLinkedBlockingQueue <E> extends AbstractQueue<E>
            implements BlockingQueue<E>, java.io.Serializable {
        //... same as LinkedBlockingQueue, except capacity is volatile and has a setter ...
    }
    public class MyRejectPolicy implements RejectedExecutionHandler {
        @Override
        public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
            //custom handling, for example raise a monitoring alert that the queue is full
        }
    }
    

    Custom thread pools

      /**
         * Custom business thread pool
         * @return
         */
        @Bean("bizThreadPool")
        public ThreadPoolExecutor bizThreadPool(){
            return new ThreadPoolExecutor(5,
                    10,
                    200,
                    TimeUnit.SECONDS,
                    new LinkedBlockingQueue<>(10),
                    new NamedThreadFactory("bizThreadPool"));
        }
    
        /**
         * Custom log thread pool
         * @return
         */
        @Bean("logThreadPool")
        public ThreadPoolExecutor logThreadPool(){
            return new ThreadPoolExecutor(5,
                    10,
                    200,
                    TimeUnit.SECONDS,
                    new CustomLinkedBlockingQueue<>(10),
                    new NamedThreadFactory("logThreadPool"));
        }
    

    Monitoring the thread pools and adjusting them dynamically

    @RestController
    @RequestMapping("/threadpool")
    @Slf4j
    public class ThreadPoolController {
    
        /**
         * Collect all the thread pools. It is better to build and register the pools
         * yourself rather than rely on Spring's defaults; this takes a shortcut and uses
         * Spring's ability to inject all ThreadPoolExecutor beans into a map. In a plain
         * Java project, register each pool yourself after creating it.
         */

        @Autowired
        public Map<String, ThreadPoolExecutor> map;

        /**
         * List all thread pools
         * @return
         */
        @GetMapping("/list")
        public ResultBean<Map<String,ThreadPoolExecutor>> list(){
            return ResultBeanUtil.success("Fetched all thread pools.",map);
        }
        @GetMapping("/get")
        public ResultBean<ThreadPoolExecutor> getThreadPool(String threadPool){
            ThreadPoolExecutor executor = map.get(threadPool);
            if(executor == null){
                return ResultBeanUtil.noData("Thread pool not found");
            }
            return ResultBeanUtil.success("Fetched thread pool.",executor);
        }
        @PostMapping("/modify")
        public ResultBean<ThreadPoolExecutor> modifyThreadPool(String threadPool,Integer coreSize,Integer maximumPoolSize,Integer capacity){
            ThreadPoolExecutor executor = map.get(threadPool);
            if(executor == null){
                return ResultBeanUtil.noData("Thread pool not found");
            }
            executor.setCorePoolSize(coreSize);
            executor.setMaximumPoolSize(maximumPoolSize);
            //start all core threads; getTask does not shrink workers when the core size changes, new workers are adjusted as tasks arrive
            executor.prestartAllCoreThreads();
            //if the pool was made smaller, allow core threads to time out, since core threads are not reclaimed by default
            executor.allowCoreThreadTimeOut(true);
            BlockingQueue<Runnable> queue = executor.getQueue();
            if(queue instanceof CustomLinkedBlockingQueue){
                CustomLinkedBlockingQueue customQueue = (CustomLinkedBlockingQueue) queue;
                customQueue.setCapacity(capacity);
            }
            return ResultBeanUtil.success("Thread pool updated.",executor);
        }
        @PostMapping("test")
        public ResultBean<Void> test(String threadPool,Integer size){
            if (size == null || size ==0){
                return ResultBeanUtil.paramEmpty("size must not be empty");
            }
            ThreadPoolExecutor executor = map.get(threadPool);
            if(executor == null){
                return ResultBeanUtil.noData("Thread pool not found");
            }
            for (int i = 0; i < size; i++) {
                int finalI = i;
                executor.submit(new Runnable() {
                    @Override
                    public void run() {
                        log.info("task {} executed",Integer.valueOf(finalI));
                    }
                });
            }
            return ResultBeanUtil.success();
        }
    }
    

    Approach 3: monitor via a Java agent and expose the data over an HTTP service

    A few points to note here:

    1. ThreadPoolExecutor is loaded by the Bootstrap ClassLoader, so the class that holds references to the pools must also be loaded by the Bootstrap ClassLoader, otherwise you will hit class-definition-not-found errors;

    2. If you are instrumenting a custom Executor that extends ThreadPoolExecutor, the class loading issue does not apply;

    Solutions to the first problem:

    1. Use -Xbootclasspath/a: …/a.jar so that the holder class is loaded by the Bootstrap ClassLoader;

    2. Use byte-buddy to instrument the class and force it to be loaded by the Bootstrap ClassLoader

     /**
         * Instrumentation of ThreadPoolExecutor
         * @param instrumentation
         */
        private static void threadPoolExecutor(Instrumentation instrumentation){
            new AgentBuilder.Default()
                    .disableClassFormatChanges()
                    //by default, classes loaded by the bootstrap class loader are not instrumented; stop ignoring this type so it becomes eligible
                    .ignore(ElementMatchers.noneOf(ThreadPoolExecutor.class))
                    //
                    .with(AgentBuilder.InitializationStrategy.NoOp.INSTANCE)
                    //
                    .with(AgentBuilder.RedefinitionStrategy.REDEFINITION)
                    .with(AgentBuilder.TypeStrategy.Default.REDEFINE)
                    .with(AgentBuilder.InjectionStrategy.UsingUnsafe.INSTANCE)
                    .type(ElementMatchers.is(ThreadPoolExecutor.class))
                    //.or(ElementMatchers.hasSuperType(ElementMatchers.named("java.util.concurrent.Executor")))
                    //.or(ElementMatchers.hasSuperType(ElementMatchers.named("java.util.concurrent.ExecutorService")))
                    .transform((builder, typeDescription, classLoader, javaModule) ->
                            builder.visit(Advice.to(ThreadPoolExecutorFinalizeAdvice.class).on(ElementMatchers.named("finalize")))
                                    .visit(Advice.to(ThreadPoolExecutorExecuteAdvice.class).on(ElementMatchers.named("execute")))
                    )
                    .installOn(instrumentation);
        }
    

    Expose one unified interface so that individual projects do not have to implement anything themselves.

    public class MonitorTest {
    
        @Test
        public void test(){
            System.out.println(ThreadPoolMonitorData.class.getClassLoader());
            System.out.println(ThreadPoolMonitorData.alls());
            System.out.println(ThreadPoolMonitor.class.getClassLoader());
            ThreadPoolExecutor pool= threadpool();
            pool.submit(()->{
                System.out.println("pool task 1 running on: "+Thread.currentThread().getName());
            });
            pool.submit(()->{
                System.out.println("pool task 2 running on: "+Thread.currentThread().getName());
            });
            pool.submit(()->{
                System.out.println("pool task 3 running on: "+Thread.currentThread().getName());
            });
    
            ExecutorService executorService =  threadpool1();
            executorService.submit(()->{
                System.out.println("executorService task 1 running on: "+Thread.currentThread().getName());
            });
            ThreadPoolMonitorData.alls().forEach((key,val) ->{
                System.out.println("ThreadPoolMonitorData key="+key+" val:"+val);
            });
    
            ThreadPoolMonitor monitor = new ThreadPoolMonitor();
            monitor.alls().forEach((key,val)->{
                System.out.println("ThreadPoolMonitor key="+key+" val:"+val);
            });
    
            try {
                Thread.sleep(3000);
            }catch (Exception e){
                e.printStackTrace();
            }
    
        }
    
    
        private ThreadPoolExecutor threadpool(){
            ThreadPoolExecutor pool =  new ThreadPoolExecutor(5,
                    10,
                    200,
                    TimeUnit.SECONDS,
                    new LinkedBlockingQueue<>(10));
            return pool;
        }
        private  ExecutorService threadpool1(){
            return Executors.newCachedThreadPool();
        }
    }
    public class ThreadPoolExecutorExecuteAdvice {
        /**
         * Listens on the execute() entry point of every thread pool.
         * Byte Buddy advice cannot be applied to constructors here,
         * and @Advice.OnMethodEnter must be placed on a static method.
         * @param obj
         * @param abc
         */
        @Advice.OnMethodEnter
        public static void executeBefore(@Advice.This Object obj,@Advice.Argument(0) Object abc){
           try{
               ThreadPoolExecutor executor = (ThreadPoolExecutor) obj;
               ThreadPoolMonitorData.add(executor.hashCode()+"",(ThreadPoolExecutor) obj);
           }catch (Exception e){
               e.printStackTrace();
           }
        }
    }
    
    null   (the Bootstrap ClassLoader prints as null)
    {}
    sun.misc.Launcher$AppClassLoader@18b4aac2
    pool task 1 running on: pool-3-thread-1
    pool task 2 running on: pool-3-thread-2
    pool task 3 running on: pool-3-thread-3
    executorService task 1 running on: pool-4-thread-1
    ThreadPoolMonitorData key=1564698139 val:java.util.concurrent.ThreadPoolExecutor@5d43661b[Running, pool size = 3, active threads = 0, queued tasks = 0, completed tasks = 3]
    ThreadPoolMonitorData key=171421438 val:java.util.concurrent.ThreadPoolExecutor@a37aefe[Running, pool size = 1, active threads = 0, queued tasks = 0, completed tasks = 1]
    
    

    The data gathered this way needs to be collected centrally somewhere.

    The recommended approach: agree on a common standard and collect via an agent, then pick whichever data you actually need for monitoring and dynamic tuning.
    For the full implementation, see:
    线程池监控-bytebuddy-agent模式


  • A Spring Boot-Based Thread Pool Monitoring Solution

    2022-03-13 21:43:05
    A preparatory piece for promoting asynchronous programming with thread pools: put monitoring in place first, so pools can be used without worry and with due respect for production.

    Foreword

    This is a preparatory piece for the broader push toward asynchronous programming with thread pools: get monitoring right first, so that everyone can use pools without worrying about what happens in production.

    Why thread pools need monitoring

    Java thread pools are the most commonly used concurrency tool and everyone is familiar with them, but are you sure you are using them correctly? The well-known Alibaba Java coding guidelines tell us not to use Executors to create pools quickly, but does dropping Executors and building pools another way guarantee there will be no problems? Fundamentally, a running thread pool is a black box: we cannot see its internal state, so when something goes wrong we cannot diagnose it or raise an alert in time. The only way to make that black box transparent is monitoring, and only then can we really use thread pools well. So thread pools must be monitored.


    How to monitor a thread pool

    Monitoring boils down to three things: data collection, data storage, and dashboards. Let's take them in turn.

    Data collection

    What should we collect? Exactly the black-box data: the state of the whole task-processing flow. Along that flow, ThreadPoolExecutor exposes seven methods, and the data read from these seven methods is enough to make the pool's execution transparent (a small snapshot sketch follows the list).

    1. getCorePoolSize(): the core pool size;
    2. getMaximumPoolSize(): the maximum pool size;
    3. getQueue(): the pool's blocking queue; through its methods you can read the queue length, the number of queued elements, and so on;
    4. getPoolSize(): the number of worker threads in the pool (core and non-core);
    5. getActiveCount(): the number of active threads, i.e. threads currently executing tasks;
    6. getLargestPoolSize(): the largest number of worker threads the pool has ever reached;
    7. getTaskCount(): the total number of tasks ever scheduled, both completed and in progress;
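    As a minimal sketch of the collection step (the helper name is mine, not the article's), the seven readings above can be snapshotted into a map and then logged or shipped to whatever monitoring backend you use:

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.concurrent.ThreadPoolExecutor;
    
    public final class ThreadPoolSnapshot {
        public static Map<String, Number> of(ThreadPoolExecutor pool) {
            Map<String, Number> m = new LinkedHashMap<>();
            m.put("corePoolSize", pool.getCorePoolSize());
            m.put("maximumPoolSize", pool.getMaximumPoolSize());
            m.put("queueSize", pool.getQueue().size());                        // tasks waiting
            m.put("queueRemainingCapacity", pool.getQueue().remainingCapacity());
            m.put("poolSize", pool.getPoolSize());                             // all worker threads
            m.put("activeCount", pool.getActiveCount());                       // busy worker threads
            m.put("largestPoolSize", pool.getLargestPoolSize());
            m.put("taskCount", pool.getTaskCount());                           // scheduled, approximate
            m.put("completedTaskCount", pool.getCompletedTaskCount());         // finished, approximate
            return m;
        }
    }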

    Beyond these getters, ThreadPoolExecutor also provides three hook methods:

    1. beforeExecute(): called by the worker thread just before it executes a task;
    2. afterExecute(): called by the worker thread just after it executes a task;
    3. terminated(): called just before the pool transitions into the TERMINATED state;

    beforeExecute and afterExecute can be thought of as AOP-style hooks around task execution, so we can measure how long each task runs (a small sketch follows); terminated can be thought of as the hook for pool shutdown. Together they give us data across the whole life cycle of the pool.
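    As a small sketch of that idea (a subclass of my own, assuming you control pool creation), the two hooks can time each task with a ThreadLocal start timestamp:

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;
    
    public class TimedThreadPoolExecutor extends ThreadPoolExecutor {
        private final ThreadLocal<Long> startNanos = new ThreadLocal<>();
    
        public TimedThreadPoolExecutor(int core, int max, long keepAlive, TimeUnit unit,
                                       BlockingQueue<Runnable> queue) {
            super(core, max, keepAlive, unit, queue);
        }
    
        @Override
        protected void beforeExecute(Thread t, Runnable r) {
            super.beforeExecute(t, r);
            startNanos.set(System.nanoTime());       // the worker thread records its own start time
        }
    
        @Override
        protected void afterExecute(Runnable r, Throwable t) {
            long elapsedMs = (System.nanoTime() - startNanos.get()) / 1_000_000;
            startNanos.remove();
            super.afterExecute(r, t);
            System.out.println("task took " + elapsedMs + " ms");   // report to your metrics system instead
        }
    
        @Override
        protected void terminated() {
            super.terminated();
            System.out.println("pool terminated");    // last chance to flush accumulated metrics
        }
    }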

    Data storage and dashboards

    For storage, a time-series database is a good fit, and many mature monitoring products can handle the dashboard side; Meituan Cat and Prometheus are both worth recommending. I will not go into detail here since you should choose whatever your company already runs. The storage format differs a little between options, and you could even build something yourself; it is not hard. A Micrometer-based sketch follows.
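    If you do go with Prometheus, one common option (my assumption, not something this article prescribes) is to expose the readings as Micrometer gauges, which Prometheus can then scrape; the metric names below are illustrative:

    import io.micrometer.core.instrument.Gauge;
    import io.micrometer.core.instrument.MeterRegistry;
    import java.util.concurrent.ThreadPoolExecutor;
    
    public final class ThreadPoolMetricsBinder {
        public static void bind(MeterRegistry registry, String poolName, ThreadPoolExecutor pool) {
            // each gauge re-samples the executor lazily whenever the registry is scraped
            Gauge.builder("threadpool.active.count", pool, ThreadPoolExecutor::getActiveCount)
                    .tag("pool", poolName).register(registry);
            Gauge.builder("threadpool.pool.size", pool, ThreadPoolExecutor::getPoolSize)
                    .tag("pool", poolName).register(registry);
            Gauge.builder("threadpool.queue.size", pool, p -> p.getQueue().size())
                    .tag("pool", poolName).register(registry);
            Gauge.builder("threadpool.completed.count", pool, ThreadPoolExecutor::getCompletedTaskCount)
                    .tag("pool", poolName).register(registry);
        }
    }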

    Further extensions and thoughts

    In real projects we run into scenarios like these:

    1. Different business flows share one thread pool, so if one service blocks it drags down every service sharing the pool and can trigger the rejection policy;
    2. Traffic suddenly spikes and the pool parameters need to be adjusted on the fly, without a restart;

    These two scenarios push us to think harder about thread pools:

    1. How to configure pool parameters sensibly;
    2. How to adjust pool parameters dynamically;
    3. How to isolate thread pools between different services;

    How to configure pool parameters sensibly

    This comes up in interviews all the time, and honestly the question is a trap from the start. CPU-bound and I/O-bound workloads call for different parameter designs, and no handful of formulas will settle it; they are a reference at best. In my view sensible parameters come from repeated tuning, and even adding or removing business logic will shift them, so I do not recommend reciting "CPU-bound means N+1, non-CPU-bound means 2N" from memory. That is exactly why dynamically configurable pools are preferable (a quick sketch of the two rules of thumb follows anyway).
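    For reference, the two rules of thumb quoted above look like this in code; treat the results as a starting point for tuning, not a final answer:

    public final class PoolSizeHints {
        public static int cpuBound() {
            return Runtime.getRuntime().availableProcessors() + 1;   // CPU-bound: N + 1
        }
        public static int ioBound() {
            return Runtime.getRuntime().availableProcessors() * 2;   // non-CPU-bound (IO-heavy): 2N
        }
    }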

    How to adjust pool parameters dynamically

    Dynamic adjustment comes back to the scenarios we are trying to solve. For a traffic spike the core goal is to raise the pool's processing rate, and there are two ways to do that. One is to make the business logic itself faster, i.e. consume faster; that is hard to change while the system is running and is better treated as a post-mortem action. The other is to add consumers, which means adjusting the number of core and non-core threads (a minimal sketch follows).
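    A minimal sketch of "adding consumers" (the helper name is mine; the JDK setters involved are examined below) is simply to raise the pool's limits at runtime:

    import java.util.concurrent.ThreadPoolExecutor;
    
    public final class PoolResizer {
        public static void scaleUp(ThreadPoolExecutor pool, int newCore, int newMax) {
            if (newMax < newCore) {
                throw new IllegalArgumentException("max must be >= core");
            }
            pool.setMaximumPoolSize(newMax);   // raise the ceiling first so core <= max always holds
            pool.setCorePoolSize(newCore);     // workers are added, or idle ones interrupted, as needed
        }
    }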


    With that in mind, let's look at the ThreadPoolExecutor source, starting with the fields it defines. The design shows real craftsmanship: the AtomicInteger field ctl stores the pool state in its high 3 bits and the worker count in its low 29 bits, so one variable covers both state checks and the cap on the number of threads. A HashSet holds the Worker references, and Worker extends AbstractQueuedSynchronizer to implement a non-reentrant exclusive lock that keeps the workers thread-safe.


    //marks the pool state (high 3 bits) and the worker count (low 29 bits)
    private final AtomicInteger ctl = new AtomicInteger(ctlOf(RUNNING, 0));
    //the run state is stored in the high 3 bits
    private static final int COUNT_BITS = Integer.SIZE - 3;
    //the largest worker count that can be represented
    private static final int CAPACITY   = (1 << COUNT_BITS) - 1;
    //pool states
    //RUNNING -1: accepts new tasks and processes queued tasks
    private static final int RUNNING    = -1 << COUNT_BITS;
    //SHUTDOWN 0: does not accept new tasks, but keeps processing queued tasks
    private static final int SHUTDOWN   =  0 << COUNT_BITS;
    //STOP 1: does not accept new tasks, does not process queued tasks, and interrupts in-progress tasks
    private static final int STOP       =  1 << COUNT_BITS;
    //TIDYING 2: all tasks have terminated and the worker count is 0; the thread that reaches this state runs the terminated() hook, and only one thread runs it
    private static final int TIDYING    =  2 << COUNT_BITS;
    //TERMINATED 3: terminal state, terminated() has completed
    private static final int TERMINATED =  3 << COUNT_BITS;
    //the task queue; once the pool has reached the core size, newly submitted tasks go straight into workQueue
    private final BlockingQueue<Runnable> workQueue;
    //the pool's global lock; adding or removing workers and changing the run state both require mainLock
    private final ReentrantLock mainLock = new ReentrantLock();
    //where the workers are actually stored
    private final HashSet<Worker> workers = new HashSet<Worker>();
    private final Condition termination = mainLock.newCondition();
    //the largest worker count ever reached during the pool's lifetime
    private int largestPoolSize;
    //total number of tasks the pool has completed
    private long completedTaskCount;
    //the factory used to create threads
    private volatile ThreadFactory threadFactory;
    //the rejection policy
    private volatile RejectedExecutionHandler handler;
    //keep-alive time
    private volatile long keepAliveTime;
    //whether threads within the core size may be reclaimed: true yes, false no
    private volatile boolean allowCoreThreadTimeOut;
    //core pool size
    private volatile int corePoolSize;
    //maximum pool size
    private volatile int maximumPoolSize;
    

    The fields we care about most are the volatile ones: corePoolSize, maximumPoolSize, and keepAliveTime (threadFactory and handler are worth a glance too, but they are not the key to dynamic adjustment). From a concurrency standpoint these fields are volatile precisely because they are read and written concurrently; we already met their get*** methods while collecting metrics, so we can guess there are matching set*** methods for writes. Search for setCorePoolSize and, sure enough, there it is.

        public void setCorePoolSize(int corePoolSize) {
            if (corePoolSize < 0)
                throw new IllegalArgumentException();
            int delta = corePoolSize - this.corePoolSize;
            this.corePoolSize = corePoolSize;
            //when the new corePoolSize is smaller than the current worker count,
            //interruptIdleWorkers is called to interrupt idle workers
            if (workerCountOf(ctl.get()) > corePoolSize)
                interruptIdleWorkers();
            else if (delta > 0) {
                //when the new value is larger than the current core size,
                //new workers are created according to the number of tasks waiting in the queue
                int k = Math.min(delta, workQueue.size());
                while (k-- > 0 && addWorker(null, true)) {
                    if (workQueue.isEmpty())
                        break;
                }
            }
        }
    

    Next, look at the interruptIdleWorkers source. It takes the ReentrantLock mainLock because the Workers are stored in a shared HashSet, and the lock keeps that traversal thread-safe.

        private void interruptIdleWorkers(boolean onlyOne) {
            //the pool's reentrant lock
            final ReentrantLock mainLock = this.mainLock;
            mainLock.lock();
            try {
                for (Worker w : workers) {
                    Thread t = w.thread;
                    if (!t.isInterrupted() && w.tryLock()) {
                        try {
                        //interrupt this worker thread
                            t.interrupt();
                        } catch (SecurityException ignore) {
                        } finally {
                            w.unlock();
                        }
                    }
                    if (onlyOne)
                        break;
                }
            } finally {
                mainLock.unlock();
            }
        }
    

    Next, let's check that the other related parameters have setters too:

        public void setMaximumPoolSize(int maximumPoolSize) {
            if (maximumPoolSize <= 0 || maximumPoolSize < corePoolSize)
                throw new IllegalArgumentException();
            this.maximumPoolSize = maximumPoolSize;
            if (workerCountOf(ctl.get()) > maximumPoolSize)
                interruptIdleWorkers();
        }
        public void setKeepAliveTime(long time, TimeUnit unit) {
            if (time < 0)
                throw new IllegalArgumentException();
            if (time == 0 && allowsCoreThreadTimeOut())
                throw new IllegalArgumentException("Core threads must have nonzero keep alive times");
            long keepAliveTime = unit.toNanos(time);
            long delta = keepAliveTime - this.keepAliveTime;
            this.keepAliveTime = keepAliveTime;
            if (delta < 0)
                interruptIdleWorkers();
        }
        public void setRejectedExecutionHandler(RejectedExecutionHandler handler) {
            if (handler == null)
                throw new NullPointerException();
            this.handler = handler;
        }
    

    One issue remains: a BlockingQueue's capacity cannot be changed. The Meituan article mentions a modifiable queue, ResizableCapacityLinkedBlockingQueue, so I went to the LinkedBlockingQueue source and found that capacity is a final field. The fix, then, is to make capacity volatile and expose a setter, which is enough to let the queue size be changed dynamically.
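    A short usage sketch of that idea (the queue class itself is listed later in this article): because capacity is a volatile field with a setter, the bound can be enlarged at runtime without recreating the pool.

    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;
    
    public class ResizableQueueDemo {
        public static void main(String[] args) {
            ResizableCapacityLinkedBlockingQueue<Runnable> queue =
                    new ResizableCapacityLinkedBlockingQueue<>(100);
            ThreadPoolExecutor pool = new ThreadPoolExecutor(4, 8, 60, TimeUnit.SECONDS, queue);
    
            // Under load, enlarge the queue bound; producers see the new capacity on their
            // next offer/put because the field is volatile.
            queue.setCapacity(500);
    
            pool.shutdown();
        }
    }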


    How to isolate thread pools between different services


    To isolate pools between services we can borrow Hystrix's bulkhead pattern: create a separate thread pool for each type of service, so that services do not affect one another and one blocked service cannot drag down everything else.

    Implementation

    With that background in place, let's get to the implementation. The overall solution is built on Spring Boot and uses Spring Cloud's configuration refresh to pick up new values, which suits a single application well; if you run Apollo or Nacos you can listen for configuration changes instead. A sketch of the refresh wiring follows, and then the individual pieces.
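    Before the individual pieces, here is a sketch of how the refresh step might be wired up (this wiring is my assumption; it relies on spring-cloud-context rebinding the DynamicThreadPoolProperties bean on an EnvironmentChangeEvent, and on the properties classes and resizable queue shown further below):

    import org.springframework.cloud.context.environment.EnvironmentChangeEvent;
    import org.springframework.context.event.EventListener;
    import org.springframework.stereotype.Component;
    
    import java.util.Map;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    
    @Component
    public class ThreadPoolRefresher {
    
        private final DynamicThreadPoolProperties properties;    // rebound after a config refresh
        private final Map<String, ThreadPoolExecutor> pools;     // pool name -> live executor
    
        public ThreadPoolRefresher(DynamicThreadPoolProperties properties,
                                   Map<String, ThreadPoolExecutor> pools) {
            this.properties = properties;
            this.pools = pools;
        }
    
        @EventListener(EnvironmentChangeEvent.class)
        public void onChange() {
            for (ThreadPoolProperties p : properties.getExecutors()) {
                ThreadPoolExecutor executor = pools.get(p.getThreadPoolName());
                if (executor == null) {
                    continue;
                }
                executor.setMaximumPoolSize(p.getMaxPoolSize());   // max first, then core
                executor.setCorePoolSize(p.getCorePoolSize());
                BlockingQueue<Runnable> queue = executor.getQueue();
                if (queue instanceof ResizableCapacityLinkedBlockingQueue) {
                    ((ResizableCapacityLinkedBlockingQueue<Runnable>) queue).setCapacity(p.getQueueCapacity());
                }
            }
        }
    }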

    1. Maven dependencies:
        <dependencies>
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter</artifactId>
            </dependency>
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-web</artifactId>
            </dependency>
            <dependency>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-context</artifactId>
            </dependency>
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-test</artifactId>
                <scope>test</scope>
            </dependency>
            <dependency>
                <groupId>org.projectlombok</groupId>
                <artifactId>lombok</artifactId>
                <version>1.18.12</version>
            </dependency>
            <dependency>
                <groupId>org.slf4j</groupId>
                <artifactId>slf4j-api</artifactId>
                <version>1.7.5</version>
            </dependency>
            <dependency>
                <groupId>ch.qos.logback</groupId>
                <artifactId>logback-core</artifactId>
                <version>1.2.3</version>
            </dependency>
            <dependency>
                <groupId>ch.qos.logback</groupId>
                <artifactId>logback-classic</artifactId>
                <version>1.2.3</version>
            </dependency>
    
        </dependencies>
    
        <dependencyManagement>
            <dependencies>
                <dependency>
                    <groupId>org.springframework.cloud</groupId>
                    <artifactId>spring-cloud-dependencies</artifactId>
                    <version>Hoxton.SR7</version>
                    <type>pom</type>
                    <scope>import</scope>
                </dependency>
            </dependencies>
        </dependencyManagement>
    
    2. Configuration:
    monitor.threadpool.executors[0].thread-pool-name=first-monitor-thread-pool
    monitor.threadpool.executors[0].core-pool-size=4
    monitor.threadpool.executors[0].max-pool-size=8
    monitor.threadpool.executors[0].queue-capacity=100
    
    monitor.threadpool.executors[1].thread-pool-name=second-monitor-thread-pool
    monitor.threadpool.executors[1].core-pool-size=2
    monitor.threadpool.executors[1].max-pool-size=4
    monitor.threadpool.executors[1].queue-capacity=40
        
    /**
     * Thread pool configuration
     *
     * @author wangtongzhou 
     * @since 2022-03-11 21:41
     */
    @Data
    public class ThreadPoolProperties {
    
        /**
         * Thread pool name
         */
        private String threadPoolName;
    
        /**
         * Core pool size
         */
        private Integer corePoolSize = Runtime.getRuntime().availableProcessors();
    
        /**
         * Maximum pool size
         */
        private Integer maxPoolSize;
    
        /**
         * Maximum queue capacity
         */
        private Integer queueCapacity;
    
        /**
         * Rejection policy
         */
        private String rejectedExecutionType = "AbortPolicy";
    
        /**
         * Idle thread keep-alive time
         */
        private Long keepAliveTime = 1L;
    
        /**
         * Keep-alive time unit
         */
        private TimeUnit unit = TimeUnit.MILLISECONDS;
    
    
    }
    
    
    /**
     * Dynamically refreshed thread pool configuration
     *
     * @author wangtongzhou 
     * @since 2022-03-13 14:09
     */
    @ConfigurationProperties(prefix = "monitor.threadpool")
    @Data
    @Component
    public class DynamicThreadPoolProperties {
    
        private List<ThreadPoolProperties> executors;
    }
    
    3. A blocking queue whose capacity can be changed at runtime:
    /**
     * A blocking queue whose capacity can be reset
     *
     * @author wangtongzhou 
     * @since 2022-03-13 11:54
     */
    public class ResizableCapacityLinkedBlockingQueue<E> extends AbstractQueue<E>
            implements BlockingDeque<E>, java.io.Serializable {
        /*
         * Implemented as a simple doubly-linked list protected by a
         * single lock and using conditions to manage blocking.
         *
         * To implement weakly consistent iterators, it appears we need to
         * keep all Nodes GC-reachable from a predecessor dequeued Node.
         * That would cause two problems:
         * - allow a rogue Iterator to cause unbounded memory retention
         * - cause cross-generational linking of old Nodes to new Nodes if
         *   a Node was tenured while live, which generational GCs have a
         *   hard time dealing with, causing repeated major collections.
         * However, only non-deleted Nodes need to be reachable from
         * dequeued Nodes, and reachability does not necessarily have to
         * be of the kind understood by the GC.  We use the trick of
         * linking a Node that has just been dequeued to itself.  Such a
         * self-link implicitly means to jump to "first" (for next links)
         * or "last" (for prev links).
         */
    
        /*
         * We have "diamond" multiple interface/abstract class inheritance
         * here, and that introduces ambiguities. Often we want the
         * BlockingDeque javadoc combined with the AbstractQueue
         * implementation, so a lot of method specs are duplicated here.
         */
    
        private static final long serialVersionUID = -387911632671998426L;
    
        /**
         * Doubly-linked list node class
         */
        static final class Node<E> {
            /**
             * The item, or null if this node has been removed.
             */
            E item;
    
            /**
             * One of:
             * - the real predecessor Node
             * - this Node, meaning the predecessor is tail
             * - null, meaning there is no predecessor
             */
            Node<E> prev;
    
            /**
             * One of:
             * - the real successor Node
             * - this Node, meaning the successor is head
             * - null, meaning there is no successor
             */
            Node<E> next;
    
            Node(E x) {
                item = x;
            }
        }
    
        /**
         * Pointer to first node.
         * Invariant: (first == null && last == null) ||
         * (first.prev == null && first.item != null)
         */
        transient Node<E> first;
    
        /**
         * Pointer to last node.
         * Invariant: (first == null && last == null) ||
         * (last.next == null && last.item != null)
         */
        transient Node<E> last;
    
        /**
         * Number of items in the deque
         */
        private transient int count;
    
        /**
         * Maximum number of items in the deque
         */
        private volatile int capacity;
    
        public int getCapacity() {
            return capacity;
        }
    
        public void setCapacity(int capacity) {
            this.capacity = capacity;
        }
    
        /**
         * Main lock guarding all access
         */
        final ReentrantLock lock = new ReentrantLock();
    
        /**
         * Condition for waiting takes
         */
        private final Condition notEmpty = lock.newCondition();
    
        /**
         * Condition for waiting puts
         */
        private final Condition notFull = lock.newCondition();
    
        /**
         * Creates a {@code ResizableCapacityLinkedBlockingQueue} with a capacity of
         * {@link Integer#MAX_VALUE}.
         */
        public ResizableCapacityLinkedBlockingQueue() {
            this(Integer.MAX_VALUE);
        }
    
        /**
         * Creates a {@code ResizableCapacityLinkedBlockingQueue} with the given (fixed) capacity.
         *
         * @param capacity the capacity of this deque
         * @throws IllegalArgumentException if {@code capacity} is less than 1
         */
        public ResizableCapacityLinkedBlockingQueue(int capacity) {
            if (capacity <= 0) {
                throw new IllegalArgumentException();
            }
            this.capacity = capacity;
        }
    
        /**
         * Creates a {@code ResizableCapacityLinkedBlockingQueue} with a capacity of
         * {@link Integer#MAX_VALUE}, initially containing the elements of
         * the given collection, added in traversal order of the
         * collection's iterator.
         *
         * @param c the collection of elements to initially contain
         * @throws NullPointerException if the specified collection or any
         *                              of its elements are null
         */
        public ResizableCapacityLinkedBlockingQueue(Collection<? extends E> c) {
            this(Integer.MAX_VALUE);
            final ReentrantLock lock = this.lock;
            lock.lock(); // Never contended, but necessary for visibility
            try {
                for (E e : c) {
                    if (e == null) {
                        throw new NullPointerException();
                    }
                    if (!linkLast(new Node<E>(e))) {
                        throw new IllegalStateException("Deque full");
                    }
                }
            } finally {
                lock.unlock();
            }
        }
    
    
        // Basic linking and unlinking operations, called only while holding lock
    
        /**
         * Links node as first element, or returns false if full.
         */
        private boolean linkFirst(Node<E> node) {
            // assert lock.isHeldByCurrentThread();
            if (count >= capacity) {
                return false;
            }
            Node<E> f = first;
            node.next = f;
            first = node;
            if (last == null) {
                last = node;
            } else {
                f.prev = node;
            }
            ++count;
            notEmpty.signal();
            return true;
        }
    
        /**
         * Links node as last element, or returns false if full.
         */
        private boolean linkLast(Node<E> node) {
            // assert lock.isHeldByCurrentThread();
            if (count >= capacity) {
                return false;
            }
            Node<E> l = last;
            node.prev = l;
            last = node;
            if (first == null) {
                first = node;
            } else {
                l.next = node;
            }
            ++count;
            notEmpty.signal();
            return true;
        }
    
        /**
         * Removes and returns first element, or null if empty.
         */
        private E unlinkFirst() {
            // assert lock.isHeldByCurrentThread();
            Node<E> f = first;
            if (f == null) {
                return null;
            }
            Node<E> n = f.next;
            E item = f.item;
            f.item = null;
            f.next = f; // help GC
            first = n;
            if (n == null) {
                last = null;
            } else {
                n.prev = null;
            }
            --count;
            notFull.signal();
            return item;
        }
    
        /**
         * Removes and returns last element, or null if empty.
         */
        private E unlinkLast() {
            // assert lock.isHeldByCurrentThread();
            Node<E> l = last;
            if (l == null) {
                return null;
            }
            Node<E> p = l.prev;
            E item = l.item;
            l.item = null;
            l.prev = l; // help GC
            last = p;
            if (p == null) {
                first = null;
            } else {
                p.next = null;
            }
            --count;
            notFull.signal();
            return item;
        }
    
        /**
         * Unlinks x.
         */
        void unlink(Node<E> x) {
            // assert lock.isHeldByCurrentThread();
            Node<E> p = x.prev;
            Node<E> n = x.next;
            if (p == null) {
                unlinkFirst();
            } else if (n == null) {
                unlinkLast();
            } else {
                p.next = n;
                n.prev = p;
                x.item = null;
                // Don't mess with x's links.  They may still be in use by
                // an iterator.
                --count;
                notFull.signal();
            }
        }
    
        // BlockingDeque methods
    
        /**
         * @throws IllegalStateException if this deque is full
         * @throws NullPointerException  {@inheritDoc}
         */
        @Override
        public void addFirst(E e) {
            if (!offerFirst(e)) {
                throw new IllegalStateException("Deque full");
            }
        }
    
        /**
         * @throws IllegalStateException if this deque is full
         * @throws NullPointerException  {@inheritDoc}
         */
        @Override
        public void addLast(E e) {
            if (!offerLast(e)) {
                throw new IllegalStateException("Deque full");
            }
        }
    
        /**
         * @throws NullPointerException {@inheritDoc}
         */
        @Override
        public boolean offerFirst(E e) {
            if (e == null) {
                throw new NullPointerException();
            }
            Node<E> node = new Node<E>(e);
            final ReentrantLock lock = this.lock;
            lock.lock();
            try {
                return linkFirst(node);
            } finally {
                lock.unlock();
            }
        }
    
        /**
         * @throws NullPointerException {@inheritDoc}
         */
        @Override
        public boolean offerLast(E e) {
            if (e == null) throw new NullPointerException();
            Node<E> node = new Node<E>(e);
            final ReentrantLock lock = this.lock;
            lock.lock();
            try {
                return linkLast(node);
            } finally {
                lock.unlock();
            }
        }
    
        /**
         * @throws NullPointerException {@inheritDoc}
         * @throws InterruptedException {@inheritDoc}
         */
        @Override
        public void putFirst(E e) throws InterruptedException {
            if (e == null) {
                throw new NullPointerException();
            }
            Node<E> node = new Node<E>(e);
            final ReentrantLock lock = this.lock;
            lock.lock();
            try {
                while (!linkFirst(node)) {
                    notFull.await();
                }
            } finally {
                lock.unlock();
            }
        }
    
        /**
         * @throws NullPointerException {@inheritDoc}
         * @throws InterruptedException {@inheritDoc}
         */
        @Override
        public void putLast(E e) throws InterruptedException {
            if (e == null) {
                throw new NullPointerException();
            }
            Node<E> node = new Node<E>(e);
            final ReentrantLock lock = this.lock;
            lock.lock();
            try {
                while (!linkLast(node)) {
                    notFull.await();
                }
            } finally {
                lock.unlock();
            }
        }
    
        /**
         * @throws NullPointerException {@inheritDoc}
         * @throws InterruptedException {@inheritDoc}
         */
        @Override
        public boolean offerFirst(E e, long timeout, TimeUnit unit)
                throws InterruptedException {
            if (e == null) {
                throw new NullPointerException();
            }
            Node<E> node = new Node<E>(e);
            long nanos = unit.toNanos(timeout);
            final ReentrantLock lock = this.lock;
            lock.lockInterruptibly();
            try {
                while (!linkFirst(node)) {
                    if (nanos <= 0) {
                        return false;
                    }
                    nanos = notFull.awaitNanos(nanos);
                }
                return true;
            } finally {
                lock.unlock();
            }
        }
    
        /**
         * @throws NullPointerException {@inheritDoc}
         * @throws InterruptedException {@inheritDoc}
         */
        @Override
        public boolean offerLast(E e, long timeout, TimeUnit unit)
                throws InterruptedException {
            if (e == null) throw new NullPointerException();
            Node<E> node = new Node<E>(e);
            long nanos = unit.toNanos(timeout);
            final ReentrantLock lock = this.lock;
            lock.lockInterruptibly();
            try {
                while (!linkLast(node)) {
                    if (nanos <= 0) {
                        return false;
                    }
                    nanos = notFull.awaitNanos(nanos);
                }
                return true;
            } finally {
                lock.unlock();
            }
        }
    
        /**
         * @throws NoSuchElementException {@inheritDoc}
         */
        @Override
        public E removeFirst() {
            E x = pollFirst();
            if (x == null) {
                throw new NoSuchElementException();
            }
            return x;
        }
    
        /**
         * @throws NoSuchElementException {@inheritDoc}
         */
        @Override
        public E removeLast() {
            E x = pollLast();
            if (x == null) {
                throw new NoSuchElementException();
            }
            return x;
        }
    
        @Override
        public E pollFirst() {
            final ReentrantLock lock = this.lock;
            lock.lock();
            try {
                return unlinkFirst();
            } finally {
                lock.unlock();
            }
        }
    
        @Override
        public E pollLast() {
            final ReentrantLock lock = this.lock;
            lock.lock();
            try {
                return unlinkLast();
            } finally {
                lock.unlock();
            }
        }
    
        @Override
        public E takeFirst() throws InterruptedException {
            final ReentrantLock lock = this.lock;
            lock.lock();
            try {
                E x;
                while ((x = unlinkFirst()) == null) {
                    notEmpty.await();
                }
                return x;
            } finally {
                lock.unlock();
            }
        }
    
        @Override
        public E takeLast() throws InterruptedException {
            final ReentrantLock lock = this.lock;
            lock.lock();
            try {
                E x;
                while ((x = unlinkLast()) == null) {
                    notEmpty.await();
                }
                return x;
            } finally {
                lock.unlock();
            }
        }
    
        @Override
        public E pollFirst(long timeout, TimeUnit unit)
                throws InterruptedException {
            long nanos = unit.toNanos(timeout);
            final ReentrantLock lock = this.lock;
            lock.lockInterruptibly();
            try {
                E x;
                while ((x = unlinkFirst()) == null) {
                    if (nanos <= 0) {
                        return null;
                    }
                    nanos = notEmpty.awaitNanos(nanos);
                }
                return x;
            } finally {
                lock.unlock();
            }
        }
    
        @Override
        public E pollLast(long timeout, TimeUnit unit)
                throws InterruptedException {
            long nanos = unit.toNanos(timeout);
            final ReentrantLock lock = this.lock;
            lock.lockInterruptibly();
            try {
                E x;
                while ((x = unlinkLast()) == null) {
                    if (nanos <= 0) {
                        return null;
                    }
                    nanos = notEmpty.awaitNanos(nanos);
                }
                return x;
            } finally {
                lock.unlock();
            }
        }
    
        /**
         * @throws NoSuchElementException {@inheritDoc}
         */
        @Override
        public E getFirst() {
            E x = peekFirst();
            if (x == null) {
                throw new NoSuchElementException();
            }
            return x;
        }
    
        /**
         * @throws NoSuchElementException {@inheritDoc}
         */
        @Override
        public E getLast() {
            E x = peekLast();
            if (x == null) {
                throw new NoSuchElementException();
            }
            return x;
        }
    
        @Override
        public E peekFirst() {
            final ReentrantLock lock = this.lock;
            lock.lock();
            try {
                return (first == null) ? null : first.item;
            } finally {
                lock.unlock();
            }
        }
    
        @Override
        public E peekLast() {
            final ReentrantLock lock = this.lock;
            lock.lock();
            try {
                return (last == null) ? null : last.item;
            } finally {
                lock.unlock();
            }
        }
    
        @Override
        public boolean removeFirstOccurrence(Object o) {
            if (o == null) {
                return false;
            }
            final ReentrantLock lock = this.lock;
            lock.lock();
            try {
                for (Node<E> p = first; p != null; p = p.next) {
                    if (o.equals(p.item)) {
                        unlink(p);
                        return true;
                    }
                }
                return false;
            } finally {
                lock.unlock();
            }
        }
    
        @Override
        public boolean removeLastOccurrence(Object o) {
            if (o == null) {
                return false;
            }
            final ReentrantLock lock = this.lock;
            lock.lock();
            try {
                for (Node<E> p = last; p != null; p = p.prev) {
                    if (o.equals(p.item)) {
                        unlink(p);
                        return true;
                    }
                }
                return false;
            } finally {
                lock.unlock();
            }
        }
    
        // BlockingQueue methods
    
        /**
         * Inserts the specified element at the end of this deque unless it would
         * violate capacity restrictions.  When using a capacity-restricted deque,
         * it is generally preferable to use method {@link #offer(Object) offer}.
         *
         * <p>This method is equivalent to {@link #addLast}.
         *
         * @throws IllegalStateException if this deque is full
         * @throws NullPointerException  if the specified element is null
         */
        @Override
        public boolean add(E e) {
            addLast(e);
            return true;
        }
    
        /**
         * @throws NullPointerException if the specified element is null
         */
        @Override
        public boolean offer(E e) {
            return offerLast(e);
        }
    
        /**
         * @throws NullPointerException {@inheritDoc}
         * @throws InterruptedException {@inheritDoc}
         */
        @Override
        public void put(E e) throws InterruptedException {
            putLast(e);
        }
    
        /**
         * @throws NullPointerException {@inheritDoc}
         * @throws InterruptedException {@inheritDoc}
         */
        @Override
        public boolean offer(E e, long timeout, TimeUnit unit)
                throws InterruptedException {
            return offerLast(e, timeout, unit);
        }
    
        /**
         * Retrieves and removes the head of the queue represented by this deque.
         * This method differs from {@link #poll poll} only in that it throws an
         * exception if this deque is empty.
         *
         * <p>This method is equivalent to {@link #removeFirst() removeFirst}.
         *
         * @return the head of the queue represented by this deque
         * @throws NoSuchElementException if this deque is empty
         */
        @Override
        public E remove() {
            return removeFirst();
        }
    
        @Override
        public E poll() {
            return pollFirst();
        }
    
        @Override
        public E take() throws InterruptedException {
            return takeFirst();
        }
    
        @Override
        public E poll(long timeout, TimeUnit unit) throws InterruptedException {
            return pollFirst(timeout, unit);
        }
    
        /**
         * Retrieves, but does not remove, the head of the queue represented by
         * this deque.  This method differs from {@link #peek peek} only in that
         * it throws an exception if this deque is empty.
         *
         * <p>This method is equivalent to {@link #getFirst() getFirst}.
         *
         * @return the head of the queue represented by this deque
         * @throws NoSuchElementException if this deque is empty
         */
        @Override
        public E element() {
            return getFirst();
        }
    
        @Override
        public E peek() {
            return peekFirst();
        }
    
        /**
         * Returns the number of additional elements that this deque can ideally
         * (in the absence of memory or resource constraints) accept without
         * blocking. This is always equal to the initial capacity of this deque
         * less the current {@code size} of this deque.
         *
         * <p>Note that you <em>cannot</em> always tell if an attempt to insert
         * an element will succeed by inspecting {@code remainingCapacity}
         * because it may be the case that another thread is about to
         * insert or remove an element.
         */
        @Override
        public int remainingCapacity() {
            final ReentrantLock lock = this.lock;
            lock.lock();
            try {
                return capacity - count;
            } finally {
                lock.unlock();
            }
        }
    
        /**
         * @throws UnsupportedOperationException {@inheritDoc}
         * @throws ClassCastException            {@inheritDoc}
         * @throws NullPointerException          {@inheritDoc}
         * @throws IllegalArgumentException      {@inheritDoc}
         */
        @Override
        public int drainTo(Collection<? super E> c) {
            return drainTo(c, Integer.MAX_VALUE);
        }
    
        /**
         * @throws UnsupportedOperationException {@inheritDoc}
         * @throws ClassCastException            {@inheritDoc}
         * @throws NullPointerException          {@inheritDoc}
         * @throws IllegalArgumentException      {@inheritDoc}
         */
        @Override
        public int drainTo(Collection<? super E> c, int maxElements) {
            if (c == null) {
                throw new NullPointerException();
            }
            if (c == this) {
                throw new IllegalArgumentException();
            }
            if (maxElements <= 0) {
                return 0;
            }
            final ReentrantLock lock = this.lock;
            lock.lock();
            try {
                int n = Math.min(maxElements, count);
                for (int i = 0; i < n; i++) {
                    c.add(first.item);   // In this order, in case add() throws.
                    unlinkFirst();
                }
                return n;
            } finally {
                lock.unlock();
            }
        }
    
        // Stack methods
    
        /**
         * @throws IllegalStateException if this deque is full
         * @throws NullPointerException  {@inheritDoc}
         */
        @Override
        public void push(E e) {
            addFirst(e);
        }
    
        /**
         * @throws NoSuchElementException {@inheritDoc}
         */
        @Override
        public E pop() {
            return removeFirst();
        }
    
        // Collection methods
    
        /**
         * Removes the first occurrence of the specified element from this deque.
         * If the deque does not contain the element, it is unchanged.
         * More formally, removes the first element {@code e} such that
         * {@code o.equals(e)} (if such an element exists).
         * Returns {@code true} if this deque contained the specified element
         * (or equivalently, if this deque changed as a result of the call).
         *
         * <p>This method is equivalent to
         * {@link #removeFirstOccurrence(Object) removeFirstOccurrence}.
         *
         * @param o element to be removed from this deque, if present
         * @return {@code true} if this deque changed as a result of the call
         */
        @Override
        public boolean remove(Object o) {
            return removeFirstOccurrence(o);
        }
    
        /**
         * Returns the number of elements in this deque.
         *
         * @return the number of elements in this deque
         */
        @Override
        public int size() {
            final ReentrantLock lock = this.lock;
            lock.lock();
            try {
                return count;
            } finally {
                lock.unlock();
            }
        }
    
        /**
         * Returns {@code true} if this deque contains the specified element.
         * More formally, returns {@code true} if and only if this deque contains
         * at least one element {@code e} such that {@code o.equals(e)}.
         *
         * @param o object to be checked for containment in this deque
         * @return {@code true} if this deque contains the specified element
         */
        @Override
        public boolean contains(Object o) {
            if (o == null) {
                return false;
            }
            final ReentrantLock lock = this.lock;
            lock.lock();
            try {
                for (Node<E> p = first; p != null; p = p.next) {
                    if (o.equals(p.item)) {
                        return true;
                    }
                }
                return false;
            } finally {
                lock.unlock();
            }
        }
    
        /*
         * TODO: Add support for more efficient bulk operations.
         *
         * We don't want to acquire the lock for every iteration, but we
         * also want to give other threads a chance to interact with the
         * collection, especially when count is close to capacity.
         */
    
    //     /**
    //      * Adds all of the elements in the specified collection to this
    //      * queue.  Attempts to addAll of a queue to itself result in
    //      * {@code IllegalArgumentException}. Further, the behavior of
    //      * this operation is undefined if the specified collection is
    //      * modified while the operation is in progress.
    //      *
    //      * @param c collection containing elements to be added to this queue
    //      * @return {@code true} if this queue changed as a result of the call
    //      * @throws ClassCastException            {@inheritDoc}
    //      * @throws NullPointerException          {@inheritDoc}
    //      * @throws IllegalArgumentException      {@inheritDoc}
    //      * @throws IllegalStateException if this deque is full
    //      * @see #add(Object)
    //      */
    //     public boolean addAll(Collection<? extends E> c) {
    //         if (c == null)
    //             throw new NullPointerException();
    //         if (c == this)
    //             throw new IllegalArgumentException();
    //         final ReentrantLock lock = this.lock;
    //         lock.lock();
    //         try {
    //             boolean modified = false;
    //             for (E e : c)
    //                 if (linkLast(e))
    //                     modified = true;
    //             return modified;
    //         } finally {
    //             lock.unlock();
    //         }
    //     }
    
        /**
         * Returns an array containing all of the elements in this deque, in
         * proper sequence (from first to last element).
         *
         * <p>The returned array will be "safe" in that no references to it are
         * maintained by this deque.  (In other words, this method must allocate
         * a new array).  The caller is thus free to modify the returned array.
         *
         * <p>This method acts as bridge between array-based and collection-based
         * APIs.
         *
         * @return an array containing all of the elements in this deque
         */
        @Override
        @SuppressWarnings("unchecked")
        public Object[] toArray() {
            final ReentrantLock lock = this.lock;
            lock.lock();
            try {
                Object[] a = new Object[count];
                int k = 0;
                for (Node<E> p = first; p != null; p = p.next) {
                    a[k++] = p.item;
                }
                return a;
            } finally {
                lock.unlock();
            }
        }
    
        /**
         * Returns an array containing all of the elements in this deque, in
         * proper sequence; the runtime type of the returned array is that of
         * the specified array.  If the deque fits in the specified array, it
         * is returned therein.  Otherwise, a new array is allocated with the
         * runtime type of the specified array and the size of this deque.
         *
         * <p>If this deque fits in the specified array with room to spare
         * (i.e., the array has more elements than this deque), the element in
         * the array immediately following the end of the deque is set to
         * {@code null}.
         *
         * <p>Like the {@link #toArray()} method, this method acts as bridge between
         * array-based and collection-based APIs.  Further, this method allows
         * precise control over the runtime type of the output array, and may,
         * under certain circumstances, be used to save allocation costs.
         *
         * <p>Suppose {@code x} is a deque known to contain only strings.
         * The following code can be used to dump the deque into a newly
         * allocated array of {@code String}:
         *
         * <pre> {@code String[] y = x.toArray(new String[0]);}</pre>
         * <p>
         * Note that {@code toArray(new Object[0])} is identical in function to
         * {@code toArray()}.
         *
         * @param a the array into which the elements of the deque are to
         *          be stored, if it is big enough; otherwise, a new array of the
         *          same runtime type is allocated for this purpose
         * @return an array containing all of the elements in this deque
         * @throws ArrayStoreException  if the runtime type of the specified array
         *                              is not a supertype of the runtime type of every element in
         *                              this deque
         * @throws NullPointerException if the specified array is null
         */
        @Override
        @SuppressWarnings("unchecked")
        public <T> T[] toArray(T[] a) {
            final ReentrantLock lock = this.lock;
            lock.lock();
            try {
                if (a.length < count) {
                    a = (T[]) java.lang.reflect.Array.newInstance
                            (a.getClass().getComponentType(), count);
                }
                int k = 0;
                for (Node<E> p = first; p != null; p = p.next) {
                    a[k++] = (T) p.item;
                }
                if (a.length > k) {
                    a[k] = null;
                }
                return a;
            } finally {
                lock.unlock();
            }
        }
    
        @Override
        public String toString() {
            final ReentrantLock lock = this.lock;
            lock.lock();
            try {
                Node<E> p = first;
                if (p == null) {
                    return "[]";
                }
                StringBuilder sb = new StringBuilder();
                sb.append('[');
                for (; ; ) {
                    E e = p.item;
                    sb.append(e == this ? "(this Collection)" : e);
                    p = p.next;
                    if (p == null) {
                        return sb.append(']').toString();
                    }
                    sb.append(',').append(' ');
                }
            } finally {
                lock.unlock();
            }
        }
    
        /**
         * Atomically removes all of the elements from this deque.
         * The deque will be empty after this call returns.
         */
        @Override
        public void clear() {
            final ReentrantLock lock = this.lock;
            lock.lock();
            try {
                for (Node<E> f = first; f != null; ) {
                    f.item = null;
                    Node<E> n = f.next;
                    f.prev = null;
                    f.next = null;
                    f = n;
                }
                first = last = null;
                count = 0;
                notFull.signalAll();
            } finally {
                lock.unlock();
            }
        }
    
        /**
         * Returns an iterator over the elements in this deque in proper sequence.
         * The elements will be returned in order from first (head) to last (tail).
         *
         * <p>The returned iterator is
         * <a href="package-summary.html#Weakly"><i>weakly consistent</i></a>.
         *
         * @return an iterator over the elements in this deque in proper sequence
         */
        @Override
        public Iterator<E> iterator() {
            return new Itr();
        }
    
        /**
         * Returns an iterator over the elements in this deque in reverse
         * sequential order.  The elements will be returned in order from
         * last (tail) to first (head).
         *
         * <p>The returned iterator is
         * <a href="package-summary.html#Weakly"><i>weakly consistent</i></a>.
         *
         * @return an iterator over the elements in this deque in reverse order
         */
        @Override
        public Iterator<E> descendingIterator() {
            return new DescendingItr();
        }
    
        /**
         * Base class for Iterators for ResizableCapacityLinkedBlockingQueue
         */
        private abstract class AbstractItr implements Iterator<E> {
            /**
             * The next node to return in next()
             */
            Node<E> next;
    
            /**
             * nextItem holds on to item fields because once we claim that
             * an element exists in hasNext(), we must return item read
             * under lock (in advance()) even if it was in the process of
             * being removed when hasNext() was called.
             */
            E nextItem;
    
            /**
             * Node returned by most recent call to next. Needed by remove.
             * Reset to null if this element is deleted by a call to remove.
             */
            private Node<E> lastRet;
    
            abstract Node<E> firstNode();
    
            abstract Node<E> nextNode(Node<E> n);
    
            AbstractItr() {
                // set to initial position
                final ReentrantLock lock = ResizableCapacityLinkedBlockingQueue.this.lock;
                lock.lock();
                try {
                    next = firstNode();
                    nextItem = (next == null) ? null : next.item;
                } finally {
                    lock.unlock();
                }
            }
    
            /**
             * Returns the successor node of the given non-null, but
             * possibly previously deleted, node.
             */
            private Node<E> succ(Node<E> n) {
                // Chains of deleted nodes ending in null or self-links
                // are possible if multiple interior nodes are removed.
                for (; ; ) {
                    Node<E> s = nextNode(n);
                    if (s == null) {
                        return null;
                    } else if (s.item != null) {
                        return s;
                    } else if (s == n) {
                        return firstNode();
                    } else {
                        n = s;
                    }
                }
            }
    
            /**
             * Advances next.
             */
            void advance() {
                final ReentrantLock lock = ResizableCapacityLinkedBlockingQueue.this.lock;
                lock.lock();
                try {
                    // assert next != null;
                    next = succ(next);
                    nextItem = (next == null) ? null : next.item;
                } finally {
                    lock.unlock();
                }
            }
    
            @Override
            public boolean hasNext() {
                return next != null;
            }
    
            @Override
            public E next() {
                if (next == null) {
                    throw new NoSuchElementException();
                }
                lastRet = next;
                E x = nextItem;
                advance();
                return x;
            }
    
            @Override
            public void remove() {
                Node<E> n = lastRet;
                if (n == null) {
                    throw new IllegalStateException();
                }
                lastRet = null;
                final ReentrantLock lock = ResizableCapacityLinkedBlockingQueue.this.lock;
                lock.lock();
                try {
                    if (n.item != null) {
                        unlink(n);
                    }
                } finally {
                    lock.unlock();
                }
            }
        }
    
        /**
         * Forward iterator
         */
        private class Itr extends AbstractItr {
            @Override
            Node<E> firstNode() {
                return first;
            }
    
            @Override
            Node<E> nextNode(Node<E> n) {
                return n.next;
            }
        }
    
        /**
         * Descending iterator
         */
        private class DescendingItr extends AbstractItr {
            @Override
            Node<E> firstNode() {
                return last;
            }
    
            @Override
            Node<E> nextNode(Node<E> n) {
                return n.prev;
            }
        }
    
        /**
         * A customized variant of Spliterators.IteratorSpliterator
         */
        static final class LBDSpliterator<E> implements Spliterator<E> {
            static final int MAX_BATCH = 1 << 25;  // max batch array size;
            final ResizableCapacityLinkedBlockingQueue<E> queue;
            Node<E> current;    // current node; null until initialized
            int batch;          // batch size for splits
            boolean exhausted;  // true when no more nodes
            long est;           // size estimate
    
            LBDSpliterator(ResizableCapacityLinkedBlockingQueue<E> queue) {
                this.queue = queue;
                this.est = queue.size();
            }
    
            @Override
            public long estimateSize() {
                return est;
            }
    
            @Override
            public Spliterator<E> trySplit() {
                Node<E> h;
                final ResizableCapacityLinkedBlockingQueue<E> q = this.queue;
                int b = batch;
                int n = (b <= 0) ? 1 : (b >= MAX_BATCH) ? MAX_BATCH : b + 1;
                if (!exhausted &&
                        ((h = current) != null || (h = q.first) != null) &&
                        h.next != null) {
                    Object[] a = new Object[n];
                    final ReentrantLock lock = q.lock;
                    int i = 0;
                    Node<E> p = current;
                    lock.lock();
                    try {
                        if (p != null || (p = q.first) != null) {
                            do {
                                if ((a[i] = p.item) != null) {
                                    ++i;
                                }
                            } while ((p = p.next) != null && i < n);
                        }
                    } finally {
                        lock.unlock();
                    }
                    if ((current = p) == null) {
                        est = 0L;
                        exhausted = true;
                    } else if ((est -= i) < 0L) {
                        est = 0L;
                    }
                    if (i > 0) {
                        batch = i;
                        return Spliterators.spliterator
                                (a, 0, i, Spliterator.ORDERED | Spliterator.NONNULL |
                                        Spliterator.CONCURRENT);
                    }
                }
                return null;
            }
    
            @Override
            public void forEachRemaining(Consumer<? super E> action) {
                if (action == null) {
                    throw new NullPointerException();
                }
                final ResizableCapacityLinkedBlockingQueue<E> q = this.queue;
                final ReentrantLock lock = q.lock;
                if (!exhausted) {
                    exhausted = true;
                    Node<E> p = current;
                    do {
                        E e = null;
                        lock.lock();
                        try {
                            if (p == null) {
                                p = q.first;
                            }
                            while (p != null) {
                                e = p.item;
                                p = p.next;
                                if (e != null) {
                                    break;
                                }
                            }
                        } finally {
                            lock.unlock();
                        }
                        if (e != null) {
                            action.accept(e);
                        }
                    } while (p != null);
                }
            }
    
            @Override
            public boolean tryAdvance(Consumer<? super E> action) {
                if (action == null) {
                    throw new NullPointerException();
                }
                final ResizableCapacityLinkedBlockingQueue<E> q = this.queue;
                final ReentrantLock lock = q.lock;
                if (!exhausted) {
                    E e = null;
                    lock.lock();
                    try {
                        if (current == null) {
                            current = q.first;
                        }
                        while (current != null) {
                            e = current.item;
                            current = current.next;
                            if (e != null) {
                                break;
                            }
                        }
                    } finally {
                        lock.unlock();
                    }
                    if (current == null) {
                        exhausted = true;
                    }
                    if (e != null) {
                        action.accept(e);
                        return true;
                    }
                }
                return false;
            }
    
            @Override
            public int characteristics() {
                return Spliterator.ORDERED | Spliterator.NONNULL |
                        Spliterator.CONCURRENT;
            }
        }
    
        /**
         * Returns a {@link Spliterator} over the elements in this deque.
         *
         * <p>The returned spliterator is
         * <a href="package-summary.html#Weakly"><i>weakly consistent</i></a>.
         *
         * <p>The {@code Spliterator} reports {@link Spliterator#CONCURRENT},
         * {@link Spliterator#ORDERED}, and {@link Spliterator#NONNULL}.
         *
         * @return a {@code Spliterator} over the elements in this deque
         * @implNote The {@code Spliterator} implements {@code trySplit} to permit limited
         * parallelism.
         * @since 1.8
         */
        @Override
        public Spliterator<E> spliterator() {
            return new LBDSpliterator<E>(this);
        }
    
        /**
         * Saves this deque to a stream (that is, serializes it).
         *
         * @param s the stream
         * @throws java.io.IOException if an I/O error occurs
         * @serialData The capacity (int), followed by elements (each an
         * {@code Object}) in the proper order, followed by a null
         */
        private void writeObject(java.io.ObjectOutputStream s)
                throws java.io.IOException {
            final ReentrantLock lock = this.lock;
            lock.lock();
            try {
                // Write out capacity and any hidden stuff
                s.defaultWriteObject();
                // Write out all elements in the proper order.
                for (Node<E> p = first; p != null; p = p.next) {
                    s.writeObject(p.item);
                }
                // Use trailing null as sentinel
                s.writeObject(null);
            } finally {
                lock.unlock();
            }
        }
    
        /**
         * Reconstitutes this deque from a stream (that is, deserializes it).
         *
         * @param s the stream
         * @throws ClassNotFoundException if the class of a serialized object
         *                                could not be found
         * @throws java.io.IOException    if an I/O error occurs
         */
        private void readObject(java.io.ObjectInputStream s)
                throws java.io.IOException, ClassNotFoundException {
            s.defaultReadObject();
            count = 0;
            first = null;
            last = null;
            // Read in all elements and place in queue
            for (; ; ) {
                @SuppressWarnings("unchecked")
                E item = (E) s.readObject();
                if (item == null) {
                    break;
                }
                add(item);
            }
        }
    }
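
    到这里,可变容量的阻塞队列就完成了。下面用一个小例子验证队列容量可以在运行期调整(简单示意,假设该类像前文一样提供了按容量构造的构造方法和 setCapacity 方法,ResizableQueueDemo 是为演示新起的类名):

    public class ResizableQueueDemo {

        public static void main(String[] args) {
            // 初始容量为 2 的可变容量队列
            ResizableCapacityLinkedBlockingQueue<String> queue =
                    new ResizableCapacityLinkedBlockingQueue<>(2);

            System.out.println(queue.offer("a")); // true
            System.out.println(queue.offer("b")); // true
            System.out.println(queue.offer("c")); // false,队列已满

            // 运行期把容量调大到 4,之前放不进去的元素现在可以入队了
            queue.setCapacity(4);
            System.out.println(queue.offer("c")); // true
            System.out.println("size=" + queue.size());
        }
    }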
    
    
    1. 自定义线程池,统计每个任务的执行耗时,以及平均耗时、最大耗时、最小耗时,并输出监控日志信息;
    /**
     * 线程池监控类
     *
     * @author wangtongzhou 
     * @since 2022-02-23 07:27
     */
    public class ThreadPoolMonitor extends ThreadPoolExecutor {
    
        private static final Logger LOGGER = LoggerFactory.getLogger(ThreadPoolMonitor.class);
    
        /**
         * 默认拒绝策略
         */
        private static final RejectedExecutionHandler defaultHandler = new AbortPolicy();
    
        /**
         * 线程池名称,一般以业务名称命名,方便区分
         */
        private String poolName;
    
        /**
         * 最短执行时间
         */
        private Long minCostTime;
    
        /**
         * 最长执行时间
         */
        private Long maxCostTime;
        /**
         * 总的耗时
         */
        private AtomicLong totalCostTime = new AtomicLong();
    
        private ThreadLocal<Long> startTimeThreadLocal = new ThreadLocal<>();
    
        /**
         * 调用父类的构造方法,并初始化线程池名称
         *
         * @param corePoolSize    线程池核心线程数
         * @param maximumPoolSize 线程池最大线程数
         * @param keepAliveTime   线程的最大空闲时间
         * @param unit            空闲时间的单位
         * @param workQueue       保存被提交任务的队列
         * @param poolName        线程池名称
         */
        public ThreadPoolMonitor(int corePoolSize, int maximumPoolSize, long keepAliveTime,
                                 TimeUnit unit, BlockingQueue<Runnable> workQueue, String poolName) {
            this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
                    Executors.defaultThreadFactory(), poolName);
        }
    
    
        /**
         * 调用父类的构造方法,并初始化线程池名称
         *
         * @param corePoolSize    线程池核心线程数
         * @param maximumPoolSize 线程池最大线程数
         * @param keepAliveTime   线程的最大空闲时间
         * @param unit            空闲时间的单位
         * @param workQueue       保存被提交任务的队列
         * @param handler         拒绝策略
         * @param poolName        线程池名称
         */
        public ThreadPoolMonitor(int corePoolSize, int maximumPoolSize, long keepAliveTime,
                                 TimeUnit unit, BlockingQueue<Runnable> workQueue, RejectedExecutionHandler handler, String poolName) {
            this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
                    Executors.defaultThreadFactory(), handler, poolName);
        }
    
    
        /**
         * 调用父类的构造方法,并初始化线程池名称
         *
         * @param corePoolSize    线程池核心线程数
         * @param maximumPoolSize 线程池最大线程数
         * @param keepAliveTime   线程的最大空闲时间
         * @param unit            空闲时间的单位
         * @param workQueue       保存被提交任务的队列
         * @param threadFactory   线程工厂
         * @param poolName        线程池名称
         */
        public ThreadPoolMonitor(int corePoolSize, int maximumPoolSize, long keepAliveTime,
                                 TimeUnit unit, BlockingQueue<Runnable> workQueue,
                                 ThreadFactory threadFactory, String poolName) {
            super(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue, threadFactory, defaultHandler);
            this.poolName = poolName;
        }
    
    
        /**
         * 调用父类的构造方法,并初始化线程池名称
         *
         * @param corePoolSize    线程池核心线程数
         * @param maximumPoolSize 线程池最大线程数
         * @param keepAliveTime   线程的最大空闲时间
         * @param unit            空闲时间的单位
         * @param workQueue       保存被提交任务的队列
         * @param threadFactory   线程工厂
         * @param handler         拒绝策略
         * @param poolName        线程池名称
         */
        public ThreadPoolMonitor(int corePoolSize, int maximumPoolSize, long keepAliveTime,
                                 TimeUnit unit, BlockingQueue<Runnable> workQueue,
                                 ThreadFactory threadFactory, RejectedExecutionHandler handler, String poolName) {
            super(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue, threadFactory, handler);
            this.poolName = poolName;
        }
    
    
        /**
         * 线程池延迟关闭时(等待线程池里的任务都执行完毕),统计线程池情况
         */
        @Override
        public void shutdown() {
            // 统计已执行任务、正在执行任务、未执行任务数量
            LOGGER.info("{} 关闭线程池, 已执行任务: {}, 正在执行任务: {}, 未执行任务数量: {}",
                    this.poolName, this.getCompletedTaskCount(), this.getActiveCount(), this.getQueue().size());
            super.shutdown();
        }
    
        /**
         * 线程池立即关闭时,统计线程池情况
         */
        @Override
        public List<Runnable> shutdownNow() {
            // 统计已执行任务、正在执行任务、未执行任务数量
            LOGGER.info("{} 立即关闭线程池,已执行任务: {}, 正在执行任务: {}, 未执行任务数量: {}",
                    this.poolName, this.getCompletedTaskCount(), this.getActiveCount(), this.getQueue().size());
            return super.shutdownNow();
        }
    
        /**
         * 任务执行之前,记录任务开始时间
         */
        @Override
        protected void beforeExecute(Thread t, Runnable r) {
            startTimeThreadLocal.set(System.currentTimeMillis());
        }
    
        /**
         * 任务执行之后,计算任务结束时间
         */
        @Override
        protected void afterExecute(Runnable r, Throwable t) {
            long costTime = System.currentTimeMillis() - startTimeThreadLocal.get();
            startTimeThreadLocal.remove();
            // maxCostTime/minCostTime 首次为 null,先做空值判断,避免自动拆箱产生 NPE;
            // 多个工作线程会并发走到这里,这里的最大/最小值只是近似统计,如需精确可加锁或改用 LongAccumulator
            maxCostTime = (maxCostTime == null || costTime > maxCostTime) ? costTime : maxCostTime;
            minCostTime = (minCostTime == null || costTime < minCostTime) ? costTime : minCostTime;
            totalCostTime.addAndGet(costTime);
            LOGGER.info("{}-pool-monitor: " +
                            "任务耗时: {} ms, 初始线程数: {}, 核心线程数: {}, 执行的任务数量: {}, " +
                            "已完成任务数量: {}, 任务总数: {}, 队列里缓存的任务数量: {}, 池中存在的最大线程数: {}, " +
                            "最大允许的线程数: {},  线程空闲时间: {}, 线程池是否关闭: {}, 线程池是否终止: {}",
                    this.poolName,
                    costTime, this.getPoolSize(), this.getCorePoolSize(), this.getActiveCount(),
                    this.getCompletedTaskCount(), this.getTaskCount(), this.getQueue().size(), this.getLargestPoolSize(),
                    this.getMaximumPoolSize(), this.getKeepAliveTime(TimeUnit.MILLISECONDS), this.isShutdown(), this.isTerminated());
        }
    
    
        public Long getMinCostTime() {
            return minCostTime;
        }
    
        public Long getMaxCostTime() {
            return maxCostTime;
        }
    
        public long getAverageCostTime(){
            if(getCompletedTaskCount()==0||totalCostTime.get()==0){
                return 0;
            }
            return totalCostTime.get()/getCompletedTaskCount();
        }
    
        /**
         * 生成线程池所用的线程,改写了线程池默认的线程工厂,传入线程池名称,便于问题追踪
         */
        static class MonitorThreadFactory implements ThreadFactory {
            private static final AtomicInteger poolNumber = new AtomicInteger(1);
            private final ThreadGroup group;
            private final AtomicInteger threadNumber = new AtomicInteger(1);
            private final String namePrefix;
    
            /**
             * 初始化线程工厂
             *
             * @param poolName 线程池名称
             */
            MonitorThreadFactory(String poolName) {
                SecurityManager s = System.getSecurityManager();
                group = Objects.nonNull(s) ? s.getThreadGroup() : Thread.currentThread().getThreadGroup();
                namePrefix = poolName + "-pool-" + poolNumber.getAndIncrement() + "-thread-";
            }
    
            @Override
            public Thread newThread(Runnable r) {
                Thread t = new Thread(group, r, namePrefix + threadNumber.getAndIncrement(), 0);
                if (t.isDaemon()) {
                    t.setDaemon(false);
                }
                if (t.getPriority() != Thread.NORM_PRIORITY) {
                    t.setPriority(Thread.NORM_PRIORITY);
                }
                return t;
            }
        }
    }
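
    下面给一个 ThreadPoolMonitor 的使用小例子,提交几个任务之后读取耗时统计(简单示意,ThreadPoolMonitorDemo 为演示新起的类名,队列用的是前文的可变容量队列):

    import java.util.concurrent.TimeUnit;

    public class ThreadPoolMonitorDemo {

        public static void main(String[] args) throws InterruptedException {
            ThreadPoolMonitor pool = new ThreadPoolMonitor(
                    2, 4, 60, TimeUnit.SECONDS,
                    new ResizableCapacityLinkedBlockingQueue<>(100), "demo");

            for (int i = 0; i < 5; i++) {
                final int taskId = i;
                pool.execute(() -> {
                    try {
                        // 模拟业务耗时
                        Thread.sleep(100L * (taskId + 1));
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
            }

            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.SECONDS);
            // afterExecute 中已按任务输出监控日志,这里再打印一次汇总的耗时统计
            System.out.println("min=" + pool.getMinCostTime()
                    + "ms, max=" + pool.getMaxCostTime()
                    + "ms, avg=" + pool.getAverageCostTime() + "ms");
        }
    }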
    
    
    1. 动态修改线程池的管理类,通过 Spring 的事件监听器监听配置刷新事件(EnvironmentChangeEvent),实现动态更新线程池的参数;
    /**
     * 动态刷新线程池
     *
     * @author wangtongzhou
     * @since 2022-03-13 14:13
     */
    @Component
    @Slf4j
    public class DynamicThreadPoolManager {
    
    
        @Autowired
        private DynamicThreadPoolProperties dynamicThreadPoolProperties;
    
        /**
         * 存储线程池对象
         */
        public Map<String, ThreadPoolMonitor> threadPoolExecutorMap = new HashMap<>();
    
    
        public Map<String, ThreadPoolMonitor> getThreadPoolExecutorMap() {
            return threadPoolExecutorMap;
        }
    
    
        /**
         * 初始化线程池
         */
        @PostConstruct
        public void init() {
            createThreadPools(dynamicThreadPoolProperties);
        }
    
        /**
         * 初始化线程池的创建
         *
         * @param dynamicThreadPoolProperties
         */
        private void createThreadPools(DynamicThreadPoolProperties dynamicThreadPoolProperties) {
            dynamicThreadPoolProperties.getExecutors().forEach(config -> {
                if (!threadPoolExecutorMap.containsKey(config.getThreadPoolName())) {
                    ThreadPoolMonitor threadPoolMonitor = new ThreadPoolMonitor(
                            config.getCorePoolSize(),
                            config.getMaxPoolSize(),
                            config.getKeepAliveTime(),
                            config.getUnit(),
                            new ResizableCapacityLinkedBlockingQueue<>(config.getQueueCapacity()),
                            RejectedExecutionHandlerEnum.getRejectedExecutionHandler(config.getRejectedExecutionType()),
                            config.getThreadPoolName()
                    );
                    threadPoolExecutorMap.put(config.getThreadPoolName(),
                            threadPoolMonitor);
                }
    
            });
        }
    
        /**
         * 调整线程池
         *
         * @param dynamicThreadPoolProperties
         */
        private void changeThreadPools(DynamicThreadPoolProperties dynamicThreadPoolProperties) {
            dynamicThreadPoolProperties.getExecutors().forEach(config -> {
                ThreadPoolExecutor threadPoolExecutor = threadPoolExecutorMap.get(config.getThreadPoolName());
                if (Objects.nonNull(threadPoolExecutor)) {
                    threadPoolExecutor.setCorePoolSize(config.getCorePoolSize());
                    threadPoolExecutor.setMaximumPoolSize(config.getMaxPoolSize());
                    threadPoolExecutor.setKeepAliveTime(config.getKeepAliveTime(), config.getUnit());
                    threadPoolExecutor.setRejectedExecutionHandler(RejectedExecutionHandlerEnum.getRejectedExecutionHandler(config.getRejectedExecutionType()));
                    BlockingQueue<Runnable> queue = threadPoolExecutor.getQueue();
                    if (queue instanceof ResizableCapacityLinkedBlockingQueue) {
                        ((ResizableCapacityLinkedBlockingQueue<Runnable>) queue).setCapacity(config.getQueueCapacity());
                    }
                }
            });
        }
    
    
        @EventListener
        public void envListener(EnvironmentChangeEvent event) {
            log.info("配置发生变更" + event);
            changeThreadPools(dynamicThreadPoolProperties);
        }
    
    }
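
    上面代码中用到的 DynamicThreadPoolProperties 是绑定配置文件的属性类,正文没有贴出它的实现,这里按照代码里用到的 getter/setter 推断给出一个最小化示意(配置前缀 dynamic.threadpool、Lombok 的 @Data 均为假设,字段以实际实现为准):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.TimeUnit;

    import org.springframework.boot.context.properties.ConfigurationProperties;
    import org.springframework.stereotype.Component;

    import lombok.Data;

    @Data
    @Component
    @ConfigurationProperties(prefix = "dynamic.threadpool")
    public class DynamicThreadPoolProperties {

        /**
         * 多个线程池的配置,对应配置文件中 dynamic.threadpool.executors 列表
         */
        private List<ThreadPoolProperties> executors = new ArrayList<>();
    }

    // ThreadPoolProperties 单独放在一个文件中
    @Data
    public class ThreadPoolProperties {

        private String threadPoolName;
        private int corePoolSize;
        private int maxPoolSize;
        private long keepAliveTime;
        private TimeUnit unit = TimeUnit.SECONDS;
        private int queueCapacity;
        private String rejectedExecutionType;
    }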
    
    1. DynamicThreadPoolPropertiesController 对外暴露两个接口:第一个通过 ContextRefresher 触发配置刷新,实现及时更新配置信息;第二个提供线程池运行指标的查询接口;
    /**
     * 动态修改线程池参数
     *
     * @author wangtongzhou
     * @since 2022-03-13 17:27
     */
    @RestController
    public class DynamicThreadPoolPropertiesController {
    
        @Autowired
        private ContextRefresher contextRefresher;
    
    
        @Autowired
        private DynamicThreadPoolProperties dynamicThreadPoolProperties;
    
    
        @Autowired
        private DynamicThreadPoolManager dynamicThreadPoolManager;
    
    
        @PostMapping("/threadPool/properties")
        public void update() {
            ThreadPoolProperties threadPoolProperties =
                    dynamicThreadPoolProperties.getExecutors().get(0);
            threadPoolProperties.setCorePoolSize(20);
            threadPoolProperties.setMaxPoolSize(50);
            threadPoolProperties.setQueueCapacity(200);
            threadPoolProperties.setRejectedExecutionType("CallerRunsPolicy");
            contextRefresher.refresh();
        }
    
        @GetMapping("/threadPool/properties")
        public Map<String, Object> queryThreadPoolProperties() {
            Map<String, Object> metricMap = new HashMap<>();
            List<Map<String, Object>> threadPools = new ArrayList<>();
            dynamicThreadPoolManager.getThreadPoolExecutorMap().forEach((k, threadPoolMonitor) -> {
                Map<String, Object> poolInfo = new HashMap<>();
                poolInfo.put("thread.pool.name", k);
                poolInfo.put("thread.pool.core.size", threadPoolMonitor.getCorePoolSize());
                poolInfo.put("thread.pool.largest.size", threadPoolMonitor.getLargestPoolSize());
                poolInfo.put("thread.pool.max.size", threadPoolMonitor.getMaximumPoolSize());
                poolInfo.put("thread.pool.thread.count", threadPoolMonitor.getPoolSize());
                poolInfo.put("thread.pool.max.costTime", threadPoolMonitor.getMaxCostTime());
                poolInfo.put("thread.pool.average.costTime", threadPoolMonitor.getAverageCostTime());
                poolInfo.put("thread.pool.min.costTime", threadPoolMonitor.getMinCostTime());
                poolInfo.put("thread.pool.active.count", threadPoolMonitor.getActiveCount());
                poolInfo.put("thread.pool.completed.taskCount", threadPoolMonitor.getCompletedTaskCount());
                poolInfo.put("thread.pool.queue.name", threadPoolMonitor.getQueue().getClass().getName());
                poolInfo.put("thread.pool.rejected.name", threadPoolMonitor.getRejectedExecutionHandler().getClass().getName());
                poolInfo.put("thread.pool.task.count", threadPoolMonitor.getTaskCount());
                threadPools.add(poolInfo);
            });
            metricMap.put("threadPools", threadPools);
            return metricMap;
        }
    
    }
    

    整体上的流程到这里就完成了,算是一个Demo版,对于该组件更深入的思考我认为还可以做以下三件事情:

    1. 应该以 starter 的形式嵌入到应用,在启动时判断类路径中加载的是 Apollo、Nacos 还是默认实现;
    2. 监控数据对外既可以 Push 上报,也可以输出日志,还可以对接各类存储和监控系统,提供丰富的输出形式,这样更加通用;
    3. 提供统一查询接口、修改接口、增加权限校验、增加预警规则配置;

    参考以下内容:

    美团文章

    结束

    欢迎大家点点关注,点点赞!

  • Tomcat线程池监控及线程池原理分析

    目录

    一、背景
    二、tomcat线程池监控
    三、tomcat线程池原理
    四、总结


    一、背景

    我们都知道,稳定性、高可用对于一个系统来讲是非常重要的。为了保证系统的稳定性,我们一般都会进行各方面的监控,以便系统出现任何异常情况时,开发人员能够及时感知到,比如缓存服务 redis 的监控、数据库服务 mysql 的监控、系统流量监控、系统 jvm 监控等等。除了这些监控,还有一种监控也是很有必要的,那就是线程池的监控。

    说起线程池的监控可能我们一般想到的是我们自定义的线程池或者接入的中间件比如hystrix的线程池监控,但是其实还有一个线程池其实一直伴随着我们的开发生涯,日日用而不知,那就是SpringBoot内嵌Tomcat的线程池。

    今天,这篇文章就来介绍SpringBoot内嵌Tomcat线程池监控及Tomcat的线程池原理分析。

    二、tomcat线程池监控

    既然我们要监控Tomcat的线程池,那么很自然的思路就是我们怎么获取到Tomcat的线程池对象,如果我们能够获取到Tomcat的线程池对象,那么,线程池的各项指标信息我们就能获取了。

    如果我们想看Tomcat使用的线程池,那么正常的做法就是看源码了,跟随SpringApplication.run(AppApplication.class, args)启动方法一路进行源码跟踪。

    在这里我就不一一跟踪源码进行讲解了,感兴趣的同学可以自己动手调试下,这里我分享一个Tomcat的架构图,这对于跟踪 Tomcat的源码非常有帮助。

    Tomcat的源码也都是基于架构一部分一部分进行实现的。

    在这里我就直接给出答案了:

    ①、Tomcat线程池的创建是在AbstractEndpoint这个抽象类中执行的。

    也就是下面这段源码:

    AbstractEndpoint#createExecutor()
    public void createExecutor() {
          internalExecutor = true;
          TaskQueue taskqueue = new TaskQueue();
          TaskThreadFactory tf = new TaskThreadFactory(getName() + "-exec-", daemon, getThreadPriority());
      // 创建 tomcat 内置线程池
      executor = new ThreadPoolExecutor(getMinSpareThreads(), getMaxThreads(), 60, TimeUnit.SECONDS, taskqueue, tf);
          taskqueue.setParent( (ThreadPoolExecutor) executor);
    }

    ②、Spring Boot 内嵌 Tomcat 时使用的应用上下文 ServletWebServerApplicationContext 实现了 WebServerApplicationContext 接口(该接口继承自 ApplicationContext),所以我们注入 WebServerApplicationContext 或者直接注入 ApplicationContext,就能通过 getWebServer() 拿到内嵌的 Tomcat,进而获取到 Tomcat 线程池对象。

    代码如下:

    //获取webServer线程池
    ThreadPoolExecutor executor = (ThreadPoolExecutor) ((TomcatWebServer) webServerApplicationContext.getWebServer())
            .getTomcat()
            .getConnector()
            .getProtocolHandler()
            .getExecutor();

    好了,到这里我们就获取到Tomcat线程池对象了,有了线程池对象我们就可以对其进行监控,定时获取其监控指标,以便在服务异常时能告警通知。

    这里我再简单介绍下获取到的Tomcat线程池对象ThreadPoolExecutor executor的一些指标意义。

    其各项监控指标如下:

    //获取webServer线程池
    ThreadPoolExecutor executor = (ThreadPoolExecutor) ((TomcatWebServer) webServerApplicationContext.getWebServer())
            .getTomcat()
            .getConnector()
            .getProtocolHandler()
            .getExecutor();
    Map<String, String> returnMap = new LinkedHashMap<>();
    returnMap.put("核心线程数", String.valueOf(executor.getCorePoolSize()));
    returnMap.put("最大线程数", String.valueOf(executor.getMaximumPoolSize()));
    returnMap.put("活跃线程数", String.valueOf(executor.getActiveCount()));
    returnMap.put("池中当前线程数", String.valueOf(executor.getPoolSize()));
    returnMap.put("历史最大线程数", String.valueOf(executor.getLargestPoolSize()));
    returnMap.put("线程允许空闲时间/s", String.valueOf(executor.getKeepAliveTime(TimeUnit.SECONDS)));
    returnMap.put("核心线程数是否允许被回收", String.valueOf(executor.allowsCoreThreadTimeOut()));
    returnMap.put("提交任务总数", String.valueOf(executor.getSubmittedCount()));
    returnMap.put("历史执行任务的总数(近似值)", String.valueOf(executor.getTaskCount()));
    returnMap.put("历史完成任务的总数(近似值)", String.valueOf(executor.getCompletedTaskCount()));
    returnMap.put("工作队列任务数量", String.valueOf(executor.getQueue().size()));
    returnMap.put("拒绝策略", executor.getRejectedExecutionHandler().getClass().getSimpleName());

    三、tomcat线程池原理

    在上面介绍了获取到的 Tomcat 线程池对象 ThreadPoolExecutor executor,我们一看这个线程池类,竟然是 ThreadPoolExecutor,难道就是 JUC 并发包中的 ThreadPoolExecutor?聪明的我赶紧看看 Tomcat 的源码,非也非也,原来这个 ThreadPoolExecutor 是 Tomcat 在 org.apache.tomcat.util.threads 包下,参照 java.util.concurrent.ThreadPoolExecutor 并根据自己独特的业务场景定制实现的一个线程池。

    如下图

    其实如果看下这个org.apache.tomcat.util.threads包里面的ThreadPoolExecutor的实现的话,我们会惊奇的发现,这个org.apache.tomcat.util.threads包里面的ThreadPoolExecutor和java.util.concurrent.ThreadPoolExecutor的实现大致都是相同的,在这里,我就详细介绍下两个ThreadPoolExecutor在执行具体的任务时是怎么实现的,有什么区别。

    在这里先列出两个ThreadPoolExecutor的执行逻辑

    org.apache.tomcat.util.threads 包里面的 ThreadPoolExecutor:

    public void execute(Runnable command, long timeout, TimeUnit unit) {
        submittedCount.incrementAndGet();
        try {
            executeInternal(command);
        } catch (RejectedExecutionException rx) {
            if (getQueue() instanceof TaskQueue) {
                // If the Executor is close to maximum pool size, concurrent
                // calls to execute() may result (due to Tomcat's use of
                // TaskQueue) in some tasks being rejected rather than queued.
                // If this happens, add them to the queue.
                final TaskQueue queue = (TaskQueue) getQueue();
                try {
                    if (!queue.force(command, timeout, unit)) {
                        submittedCount.decrementAndGet();
                        throw new RejectedExecutionException(sm.getString("threadPoolExecutor.queueFull"));
                    }
                } catch (InterruptedException x) {
                    submittedCount.decrementAndGet();
                    throw new RejectedExecutionException(x);
                }
            } else {
                submittedCount.decrementAndGet();
                throw rx;
            }
        }
    }
    
    private void executeInternal(Runnable command) {
      if (command == null) {
          throw new NullPointerException();
      }
      /*
       * Proceed in 3 steps:
       *
       * 1. If fewer than corePoolSize threads are running, try to
       * start a new thread with the given command as its first
       * task.  The call to addWorker atomically checks runState and
       * workerCount, and so prevents false alarms that would add
       * threads when it shouldn't, by returning false.
       *
       * 2. If a task can be successfully queued, then we still need
       * to double-check whether we should have added a thread
       * (because existing ones died since last checking) or that
       * the pool shut down since entry into this method. So we
       * recheck state and if necessary roll back the enqueuing if
       * stopped, or start a new thread if there are none.
       *
       * 3. If we cannot queue task, then we try to add a new
       * thread.  If it fails, we know we are shut down or saturated
       * and so reject the task.
       */
      int c = ctl.get();
      if (workerCountOf(c) < corePoolSize) {
          if (addWorker(command, true)) {
              return;
          }
          c = ctl.get();
      }
      if (isRunning(c) && workQueue.offer(command)) {
          int recheck = ctl.get();
          if (! isRunning(recheck) && remove(command)) {
              reject(command);
          } else if (workerCountOf(recheck) == 0) {
              addWorker(null, false);
          }
      }
      else if (!addWorker(command, false)) {
          reject(command);
      }
    }

    java.util.concurrent.ThreadPoolExecutor:

    public void execute(Runnable command) {
      if (command == null)
          throw new NullPointerException();
      /*
       * Proceed in 3 steps:
       *
       * 1. If fewer than corePoolSize threads are running, try to
       * start a new thread with the given command as its first
       * task.  The call to addWorker atomically checks runState and
       * workerCount, and so prevents false alarms that would add
       * threads when it shouldn't, by returning false.
       *
       * 2. If a task can be successfully queued, then we still need
       * to double-check whether we should have added a thread
       * (because existing ones died since last checking) or that
       * the pool shut down since entry into this method. So we
       * recheck state and if necessary roll back the enqueuing if
       * stopped, or start a new thread if there are none.
       *
       * 3. If we cannot queue task, then we try to add a new
       * thread.  If it fails, we know we are shut down or saturated
       * and so reject the task.
       */
      int c = ctl.get();
      if (workerCountOf(c) < corePoolSize) {
          if (addWorker(command, true))
              return;
          c = ctl.get();
      }
      if (isRunning(c) && workQueue.offer(command)) {
          int recheck = ctl.get();
          if (! isRunning(recheck) && remove(command))
              reject(command);
          else if (workerCountOf(recheck) == 0)
              addWorker(null, false);
      }
      else if (!addWorker(command, false))
          reject(command);
    }

    相较于JDK 自带的ThreadPoolExecutor,上面多了 submittedCount.incrementAndGet() 和 catch 异常之后的那部分代码。

    submittedCount,是一个 AtomicInteger ,意义是已提交但尚未完成的任务数,这包括队列中的任务和已交给工作线程但尚未执行完成的任务。catch 中的代码很好理解,作用是让被拒绝的请求再次加入到队列中,尽力处理任务。

    然后再来看 executeInternal 方法,其实你会发现executeInternal 方法的执行逻辑和java.util.concurrent.ThreadPoolExecutor的execute()执行逻辑竟然完全相同,这令我们很迷惑,难道直接就是复用的java.util.concurrent.ThreadPoolExecutor的执行逻辑,假如说直接就是复用的java.util.concurrent.ThreadPoolExecutor的执行逻辑,那么直接super.execute()不就完了,还有必要重写一遍代码吗?

    这个时候我们就要回到之前创建Tomcat的线程池的现场,看看创建线程池的时候和JUC里面到底有哪些不一样,因为看代码他们execute()执行逻辑完全一样,那肯定是具体执行的时候有些实现不一样。否则,Tomcat的开发者是绝对不会笨到重写一遍java.util.concurrent.ThreadPoolExecutor的执行逻辑的。

    重新回到 Tomcat 创建线程池的地方,也即是下面这段代码:

    AbstractEndpoint#createExecutor()
    public void createExecutor() {
          internalExecutor = true;
          TaskQueue taskqueue = new TaskQueue();
          TaskThreadFactory tf = new TaskThreadFactory(getName() + "-exec-", daemon, getThreadPriority());
      // 创建 tomcat 内置线程池
      executor = new ThreadPoolExecutor(getMinSpareThreads(), getMaxThreads(), 60, TimeUnit.SECONDS, taskqueue, tf);
          taskqueue.setParent( (ThreadPoolExecutor) executor);
    }

    我们看ThreadPoolExecutor构造方法的参数,核心线程数、最大线程数这些没有太大意义就不用看了,重点关注taskqueue这个参数。

    taskqueue 是 Tomcat 根据自身独特的业务场景定制的任务队列 TaskQueue,它继承自阻塞队列 LinkedBlockingQueue<Runnable>。下面我们结合之前 Tomcat 的 execute() 逻辑,看任务入队时 TaskQueue 的 offer() 方法是怎么执行的。

    也即是下面这段代码:

    @Override
    public boolean offer(Runnable o) {
      //we can't do any checks
        if (parent==null) {
            return super.offer(o);
        }
        //we are maxed out on threads, simply queue the object
        if (parent.getPoolSize() == parent.getMaximumPoolSize()) {
            return super.offer(o);
        }
        //we have idle threads, just add it to the queue
        if (parent.getSubmittedCount()<=(parent.getPoolSize())) {
            return super.offer(o);
        }
        //if we have less threads than maximum force creation of a new thread
        if (parent.getPoolSize()<parent.getMaximumPoolSize()) {
            return false;
        }
        //if we reached here, we need to add it to the queue
        return super.offer(o);
    }

    首先,如果 parent 为 null,直接入队。实际上这个 parent 就是 Tomcat 的 ThreadPoolExecutor,在刚才 Tomcat 创建线程池的地方通过 taskqueue.setParent(executor) 设置进来。

    然后,parent.getPoolSize() 返回当前线程池中的线程数,如果已经等于最大线程数,说明不能再创建新线程,则直接入队,等待后续执行这个任务。

    接着,parent.getSubmittedCount() 表示已提交但尚未完成的任务数量,如果它小于等于当前线程数,表示有空闲的线程在等待任务,所以这个时候也直接入队,让空闲线程立即去执行任务。

    再然后,parent.getPoolSize() < parent.getMaximumPoolSize() 表示线程池中的线程数还小于允许的最大线程数,这个时候 offer() 返回 false,强制创建新线程:回到 Tomcat 的 ThreadPoolExecutor 的执行逻辑,workQueue.offer() 返回 false 后就会走 addWorker() 开启新线程去执行任务。

    最后,如果以上条件都不满足,就默认直接入队。

    好了,这就是 Tomcat 线程池的全部执行逻辑了。这个时候我们再反过来看 java.util.concurrent.ThreadPoolExecutor 的 execute() 方法,虽然它的执行逻辑和 Tomcat 的 org.apache.tomcat.util.threads 包里 ThreadPoolExecutor 的执行逻辑表面上完全相同,但它在执行 workQueue.offer() 的时候,走的是 LinkedBlockingQueue 或者其他普通阻塞队列的逻辑,只要队列没满就直接入队,这就是 Tomcat 线程池和 JUC 线程池最大的一点不同。
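
    为了直观感受这个差异,可以用下面这个小例子观察 JUC 线程池在队列没满时不会创建非核心线程(简单示意,只演示 JUC 一侧的行为;换成 Tomcat 的 ThreadPoolExecutor 加 TaskQueue,则会先把线程加到最大线程数再排队):

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class JucQueueFirstDemo {

        public static void main(String[] args) throws InterruptedException {
            // 核心 2、最大 4、队列容量 100:队列没满之前不会创建非核心线程
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    2, 4, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>(100));

            for (int i = 0; i < 10; i++) {
                pool.execute(() -> {
                    try {
                        Thread.sleep(1000);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
            }

            Thread.sleep(200);
            // 输出 poolSize=2, queueSize=8:10 个任务只有 2 个核心线程在执行,其余都在排队
            System.out.println("poolSize=" + pool.getPoolSize()
                    + ", queueSize=" + pool.getQueue().size());
            pool.shutdown();
        }
    }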

    四、总结

    ①、Tomcat线程池的创建是在AbstractEndpoint这个抽象类中执行的。

    ②、注入WebServerApplicationContext或者直接注入ApplicationContext就能获取到Tomcat线程池对象。

    ③、当有新任务时,Tomcat的线程池核心线程如果已经创建完了,Tomcat会尽最大努力开启新的非核心线程去执行新任务,而JUC的ThreadPoolExecutor则是入队,等待队列满了再创建新的非核心线程去执行任务。

    以上是个人的亲身经历及总结经验,个人之见,难免考虑不全,如果大家有更好的建议欢迎大家私信留言。

    如果觉得对你有一点点帮助,希望能够动动小手,你的点赞是对我最大的鼓励支持。

    更多分享请移步至个人公众号,谢谢支持😜😜......

    公众号:wenyixicodedog  
