
Apache DolphinScheduler 1.3.9 Source Code Analysis (Part 2)

Introduction

With the growth of big data, task scheduling systems have become a critical part of data processing and management. Apache DolphinScheduler is an excellent open-source distributed workflow scheduling platform that is widely used in big-data scenarios.

In this article we take a deep dive into the source code of Apache DolphinScheduler 1.3.9, focusing on the interaction design between the Master and the Worker.
Interested readers can also look back at the previous article in this series: Apache DolphinScheduler 1.3.9 Source Code Analysis (Part 1).
Worker Configuration File

# worker listener port
worker.listen.port=1234

# worker execute thread number to limit task instances in parallel
worker.exec.threads=100

# worker heartbeat interval, the unit is second
worker.heartbeat.interval=10

# worker max cpuload avg: tasks are dispatched to this worker only while the system
# cpu load average stays below this value. the default -1 means: number of cpu cores * 2
worker.max.cpuload.avg=-1

# worker reserved memory: tasks are dispatched only while system available memory
# stays above this value. the unit is G, default 0.3
worker.reserved.memory=0.3

# default worker groups separated by comma, like 'worker.groups=default,test'
worker.groups=default
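
The last two settings gate dispatch on machine resources: the worker accepts new tasks only while the load average is below worker.max.cpuload.avg and available memory is above worker.reserved.memory. A minimal, self-contained sketch of that check follows; the class and parameter names here are illustrative, not the worker's actual OSUtils helpers:

import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

// Illustrative sketch of the resource gate implied by worker.max.cpuload.avg and
// worker.reserved.memory; maxCpuLoadAvg, reservedMemoryGb and availableMemoryGb
// are assumed names, not the 1.3.9 API.
public class ResourceGate {

    static boolean canAcceptTasks(double maxCpuLoadAvg, double reservedMemoryGb,
                                  double availableMemoryGb) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        // -1 means "derive the cap from the core count": cores * 2
        if (maxCpuLoadAvg < 0) {
            maxCpuLoadAvg = os.getAvailableProcessors() * 2;
        }
        double loadAvg = os.getSystemLoadAverage(); // 1-minute load average
        // dispatchable only if load is under the cap AND free memory is above the floor
        return loadAvg < maxCpuLoadAvg && availableMemoryGb > reservedMemoryGb;
    }

    public static void main(String[] args) {
        // defaults from worker.properties: cpuload cap -1, reserved memory 0.3G
        System.out.println(canAcceptTasks(-1, 0.3, /* available mem GB */ 4.0));
    }
}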

WorkerServer Startup

public void run() {
    // init remoting server
    NettyServerConfig serverConfig = new NettyServerConfig();
    serverConfig.setListenPort(workerConfig.getListenPort());
    this.nettyRemotingServer = new NettyRemotingServer(serverConfig);
    this.nettyRemotingServer.registerProcessor(CommandType.TASK_EXECUTE_REQUEST, new TaskExecuteProcessor());
    this.nettyRemotingServer.registerProcessor(CommandType.TASK_KILL_REQUEST, new TaskKillProcessor());
    this.nettyRemotingServer.registerProcessor(CommandType.DB_TASK_ACK, new DBTaskAckProcessor());
    this.nettyRemotingServer.registerProcessor(CommandType.DB_TASK_RESPONSE, new DBTaskResponseProcessor());
    this.nettyRemotingServer.start();

    // worker registry
    try {
        this.workerRegistry.registry();
        this.workerRegistry.getZookeeperRegistryCenter().setStoppable(this);
        Set<String> workerZkPaths = this.workerRegistry.getWorkerZkPaths();
        this.workerRegistry.getZookeeperRegistryCenter().getRegisterOperator().handleDeadServer(workerZkPaths, ZKNodeType.WORKER, Constants.DELETE_ZK_OP);
    } catch (Exception e) {
        logger.error(e.getMessage(), e);
        throw new RuntimeException(e);
    }

    // retry report task status
    this.retryReportTaskStatusThread.start();

    /**
     * register hooks, which are called before the process exits
     */
    Runtime.getRuntime().addShutdownHook(new Thread(() -> {
        if (Stopper.isRunning()) {
            close("shutdownHook");
        }
    }));
}

The startup registers four command types:


- TASK_EXECUTE_REQUEST: task execution request
- TASK_KILL_REQUEST: task kill request
- DB_TASK_ACK: the Master's acknowledgment that the Worker's task ack has been received and persisted
- DB_TASK_RESPONSE: the Master's acknowledgment that the Worker's task result has been received and persisted


- Registers the WorkerServer with Zookeeper and sends periodic heartbeats (see the sketch after this list)
- Reports task execution status
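
For intuition, here is a minimal sketch of the register-then-heartbeat pattern using Apache Curator; the ZK path, the payload, and the scheduling are illustrative simplifications of what WorkerRegistry actually writes:

import java.nio.charset.StandardCharsets;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.zookeeper.CreateMode;

// Illustrative only: the real WorkerRegistry derives its paths from worker.groups
// and writes a richer heartbeat payload (cpu load, memory, start time, ...).
public class WorkerRegistrySketch {

    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        // ephemeral node: it disappears automatically if the worker process dies,
        // which is how dead workers are detected
        String path = "/dolphinscheduler/nodes/worker/default/127.0.0.1:1234";
        client.create().creatingParentsIfNeeded()
              .withMode(CreateMode.EPHEMERAL)
              .forPath(path, heartbeat());

        // refresh the payload every worker.heartbeat.interval seconds (10 by default)
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            try {
                client.setData().forPath(path, heartbeat());
            } catch (Exception e) {
                e.printStackTrace();
            }
        }, 10, 10, TimeUnit.SECONDS);
    }

    private static byte[] heartbeat() {
        // the real payload is a comma-separated string of host, port, cpu, memory, timestamps
        return String.valueOf(System.currentTimeMillis()).getBytes(StandardCharsets.UTF_8);
    }
}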
RetryReportTaskStatusThread

This is a fallback mechanism that periodically re-reports task status to the Master until the Master acknowledges it with an ACK, so that task status is never lost.
Every 5 minutes it checks whether the ACK cache and the response cache inside ResponceCache are non-empty; if a cache is non-empty, the cached ack commands and response commands are resent to the Master.
public void run() {
    ResponceCache responceCache = ResponceCache.get();

    while (Stopper.isRunning()) {

        // sleep 5 minutes
        ThreadUtils.sleep(RETRY_REPORT_TASK_STATUS_INTERVAL);

        try {
            if (!responceCache.getAckCache().isEmpty()) {
                Map<Integer, Command> ackCache = responceCache.getAckCache();
                for (Map.Entry<Integer, Command> entry : ackCache.entrySet()) {
                    Integer taskInstanceId = entry.getKey();
                    Command ackCommand = entry.getValue();
                    taskCallbackService.sendAck(taskInstanceId, ackCommand);
                }
            }

            if (!responceCache.getResponseCache().isEmpty()) {
                Map<Integer, Command> responseCache = responceCache.getResponseCache();
                for (Map.Entry<Integer, Command> entry : responseCache.entrySet()) {
                    Integer taskInstanceId = entry.getKey();
                    Command responseCommand = entry.getValue();
                    taskCallbackService.sendResult(taskInstanceId, responseCommand);
                }
            }
        } catch (Exception e) {
            logger.warn("retry report task status error", e);
        }
    }
}

Master-Worker Interaction Design

The Master and Worker modules of Apache DolphinScheduler are two independent JVM processes that can be deployed on different servers. All Master-Worker communication is RPC implemented on top of Netty, using seven processors in total.
| Module | Processor | Role |
| ------ | --------- | ---- |
| master | TaskResponseProcessor | Handles TaskExecuteResponseCommand messages and adds them to the TaskResponseService task-response queue |
| master | TaskAckProcessor | Handles TaskExecuteAckCommand messages and adds them to the TaskResponseService task-response queue |
| master | TaskKillResponseProcessor | Handles TaskKillResponseCommand messages and logs their content |
| worker | TaskExecuteProcessor | Handles TaskExecuteRequestCommand messages, sends a TaskExecuteAckCommand back to the master, and submits the task for execution |
| worker | TaskKillProcessor | Handles TaskKillRequestCommand messages, kills the task's process via kill -9 pid, and sends a TaskKillResponseCommand to the master |
| worker | DBTaskAckProcessor | Handles DBTaskAckCommand messages; for successfully handled tasks, removes the corresponding entry from the ResponceCache |
| worker | DBTaskResponseProcessor | Handles DBTaskResponseCommand messages; for successfully handled tasks, removes the corresponding entry from the ResponceCache |
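
As a concrete example of one of these seven processors, here is a simplified sketch of the worker-side DBTaskAckProcessor. The shape follows the NettyRequestProcessor interface from the dolphinscheduler-remote module, but treat the body as an approximation of the 1.3.9 code rather than a verbatim copy:

import java.nio.charset.StandardCharsets;

import org.apache.dolphinscheduler.common.enums.ExecutionStatus;
import org.apache.dolphinscheduler.common.utils.JSONUtils;
import org.apache.dolphinscheduler.remote.command.Command;
import org.apache.dolphinscheduler.remote.command.CommandType;
import org.apache.dolphinscheduler.remote.command.DBTaskAckCommand;
import org.apache.dolphinscheduler.remote.processor.NettyRequestProcessor;
import org.apache.dolphinscheduler.server.worker.cache.ResponceCache;

import io.netty.channel.Channel;

// Simplified sketch of the DB_TASK_ACK handler: once the master has persisted the
// task ack, the worker no longer needs to retry it and drops it from the cache.
public class DBTaskAckProcessorSketch implements NettyRequestProcessor {

    @Override
    public void process(Channel channel, Command command) {
        if (CommandType.DB_TASK_ACK != command.getType()) {
            return; // not a message this processor handles
        }
        DBTaskAckCommand ack = JSONUtils.parseObject(
                new String(command.getBody(), StandardCharsets.UTF_8), DBTaskAckCommand.class);
        if (ack != null && ack.getStatus() == ExecutionStatus.SUCCESS.getCode()) {
            // RetryReportTaskStatusThread reads this cache, so removing the entry
            // stops further resends of this ack
            ResponceCache.get().removeAckCache(ack.getTaskInstanceId());
        }
    }
}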


Task Dispatch Interaction

master#TaskPriorityQueueConsumer

Inside the Master there is a TaskPriorityQueueConsumer that takes up to 3 tasks at a time from the TaskPriorityQueue (master.dispatch.task.num = 3) and dispatches them to Workers; this is where the TaskExecuteRequestCommand is created.
TaskPriorityQueueConsumer#run()

@Override
public void run() {
    List<TaskPriority> failedDispatchTasks = new ArrayList<>();
    while (Stopper.isRunning()) {
        try {
            // number of tasks dispatched per batch: master.dispatch.task.num = 3
            int fetchTaskNum = masterConfig.getMasterDispatchTaskNumber();
            failedDispatchTasks.clear();
            for (int i = 0; i < fetchTaskNum; i++) {
                if (taskPriorityQueue.size() <= 0) {
                    // NOTE: the original post is cut off at this point; the remainder
                    // is a reconstruction of the dispatch loop: wait while the queue
                    // is empty, otherwise take a task, dispatch it to a worker, and
                    // collect failed dispatches so they can be re-queued
                    Thread.sleep(Constants.SLEEP_TIME_MILLIS);
                    continue;
                }
                TaskPriority taskPriority = taskPriorityQueue.take();
                if (!dispatch(taskPriority)) {
                    failedDispatchTasks.add(taskPriority);
                }
            }
        } catch (Exception e) {
            logger.error("dispatcher task error", e);
        }
    }
}
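
To round out the picture, here is a self-contained sketch of what a dispatch step conceptually does. Every type and method in it (Host, pickWorker, buildCommand) is a simplified stand-in: in 1.3.9 this work is spread across ExecutorDispatcher, HostManager and NettyExecutorManager, and the wire message is a TaskExecuteRequestCommand.

import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

// Simplified stand-in for the master's dispatch step; all names are illustrative.
public class DispatchSketch {

    record Host(String ip, int port) {}

    // stand-in for building the TaskExecuteRequestCommand from the task execution context
    static String buildCommand(int taskInstanceId) {
        return "TASK_EXECUTE_REQUEST{taskInstanceId=" + taskInstanceId + "}";
    }

    // stand-in for host selection: DolphinScheduler supports round-robin, random
    // and lower-weight strategies over the worker group's live hosts
    static Host pickWorker(List<Host> workerGroupHosts) {
        return workerGroupHosts.get(ThreadLocalRandom.current().nextInt(workerGroupHosts.size()));
    }

    static boolean dispatch(int taskInstanceId, List<Host> workerGroupHosts) {
        if (workerGroupHosts.isEmpty()) {
            return false; // no live worker: the consumer re-queues the task
        }
        Host worker = pickWorker(workerGroupHosts);
        System.out.printf("sending %s to %s:%d%n", buildCommand(taskInstanceId), worker.ip(), worker.port());
        return true; // the real code returns the result of the Netty send
    }

    public static void main(String[] args) {
        dispatch(42, List.of(new Host("192.168.0.2", 1234), new Host("192.168.0.3", 1234)));
    }
}

The worker that receives the resulting TaskExecuteRequestCommand answers with a TaskExecuteAckCommand, which is exactly the handshake described in the processor table above.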