Using checkpoints in training jobs#

Training a machine learning model usually takes a long time: many passes over the training data are needed before the model converges, and the training process can be interrupted for many reasons, such as machine failures, network issues, or bugs in the code. To avoid restarting training from scratch after an interruption, developers typically save the model state as checkpoint files at regular intervals during training. After an interruption, the model parameters, optimizer state, training step count, and other training state can be restored from a saved checkpoint file and training can resume from there.

This document describes how to use checkpoints in PAI training jobs.

Prerequisites#

First, install the PAI Python SDK to run this example.

!python -m pip install --upgrade alipai

The SDK needs to be configured with the AccessKey used to access Alibaba Cloud services, as well as the workspace and OSS bucket to use. After installing the PAI Python SDK, run the following command in a terminal and follow the prompts to configure the AccessKey, workspace, and other settings.

# Run the following command in a terminal.

python -m pai.toolkit.config

We can then verify the current configuration.
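For example, printing the SDK version and the workspace of the default session confirms that the configuration is in effect (a minimal sketch; it assumes the SDK's `pai.session.get_default_session` helper and a completed `pai.toolkit.config` setup):

```python
import pai
from pai.session import get_default_session

# Print the installed SDK version
print(pai.__version__)

# The default session is built from the configuration written by
# `python -m pai.toolkit.config`; printing the workspace name
# confirms the credentials and workspace were picked up.
sess = get_default_session()
print(sess.workspace_name)
```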

Saving and restoring training jobs with checkpoints#

When a training job is submitted with the SDK's pai.estimator.Estimator, the job by default mounts a path in the user's OSS bucket to the /ml/output/checkpoints directory of the training container. Training code can write checkpoint files to this path to persist them to OSS. After the job is submitted, the OSS path where the checkpoints are stored can be obtained through the estimator.checkpoints_data() method.

To reuse existing checkpoints, pass an OSS bucket path through the checkpoints_path parameter. PAI mounts that path to the job's /ml/output/checkpoints directory, and the training code can resume training by loading the checkpoint files found there.
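Inside the job container, the mounted directory is also exposed through the `PAI_OUTPUT_CHECKPOINTS` environment variable. The save-and-resume pattern itself is framework-agnostic; here is a minimal sketch using a JSON file as a stand-in for a real checkpoint (the `state.json` file name and the local fallback directory are illustrative, not part of the PAI contract):

```python
import json
import os


def checkpoint_file():
    # PAI sets PAI_OUTPUT_CHECKPOINTS to the mounted OSS path inside the job;
    # fall back to a local directory when running outside of PAI.
    ckpt_dir = os.environ.get("PAI_OUTPUT_CHECKPOINTS", "./checkpoints")
    os.makedirs(ckpt_dir, exist_ok=True)
    return os.path.join(ckpt_dir, "state.json")


def load_state():
    # Resume from the saved state if a checkpoint exists, else start fresh.
    path = checkpoint_file()
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {"epoch": 0}


def save_state(state):
    with open(checkpoint_file(), "w") as f:
        json.dump(state, f)


state = load_state()
for epoch in range(state["epoch"], 3):
    # ... one epoch of training would go here ...
    save_state({"epoch": epoch + 1})

print(load_state()["epoch"])  # → 3
```

If the job is interrupted and restarted with the same checkpoints path, `load_state` picks up the last saved epoch and the loop continues from there instead of from zero.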


from pai.estimator import Estimator


# 1. Save checkpoints to the default checkpoints path
est = Estimator(
    image_uri="<TrainingImageUri>",
    command="python train.py",
)

# By default, the training job mounts an OSS bucket path to /ml/output/checkpoints
# Training code can save checkpoints by writing files to /ml/output/checkpoints
est.fit()

# Print the OSS path where the job's checkpoints are stored
print(est.checkpoints_data())

# 2. Resume training from the checkpoints produced by another training job
est_load = Estimator(
    image_uri="<TrainingImageUri>",
    command="python train.py",
    # Use the checkpoints output by the previous training job.
    checkpoints_path=est.checkpoints_data(),
)

# The training code loads the checkpoint from /ml/output/checkpoints
est_load.fit()

Using checkpoints in PyTorch#

In PyTorch, torch.save is typically used to save the model parameters, optimizer state, training progress, and other information as a dictionary; the saved checkpoint file can then be loaded with torch.load. PyTorch provides a tutorial on saving and loading checkpoints during training: Saving and Loading a General Checkpoint in PyTorch.

Based on the PyTorch tutorial, we demonstrate how to use checkpoints in a PAI training job.

The training code works as follows:

  1. Before training starts, try to load a checkpoint from the /ml/output/checkpoints/ path to initialize the model parameters, optimizer state, and training progress.

  2. Continue training from the restored state, periodically saving checkpoints to the /ml/output/checkpoints/ path during training.

!mkdir -p train_src
%%writefile train_src/train.py
# Save and load checkpoints through the directory mounted by PAI
import os
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader


CHECKPOINT_NAME = "checkpoint.pt"


# Define a custom mock dataset that yields random samples
class RandomDataset(Dataset):
    def __init__(self, num_samples=1000):
        self.num_samples = num_samples

    def __len__(self):
        return self.num_samples

    def __getitem__(self, idx):
        x = torch.randn(10)  # Generate a random input tensor
        y = torch.randint(0, 2, (1,)).item()  # Generate a random target label (0 or 1)
        return x, y


# Define the model
class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 2)

    def forward(self, x):
        return self.fc(x)


net = MyModel()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001)
start_epoch = 0


def load_checkpoint():
    """Load the checkpoint from the mounted checkpoints directory, if present."""
    global start_epoch
    checkpoint_dir = os.environ.get("PAI_OUTPUT_CHECKPOINTS")
    if not checkpoint_dir:
        return
    checkpoint_path = os.path.join(checkpoint_dir, CHECKPOINT_NAME)
    if not os.path.exists(checkpoint_path):
        return
    data = torch.load(checkpoint_path)
    net.load_state_dict(data["model_state_dict"])
    optimizer.load_state_dict(data["optimizer_state_dict"])
    start_epoch = data["epoch"]


def save_checkpoint(epoch):
    """Save the model/optimizer state and training progress as a checkpoint."""
    checkpoint_dir = os.environ.get("PAI_OUTPUT_CHECKPOINTS")
    if not checkpoint_dir:
        return
    checkpoint_path = os.path.join(checkpoint_dir, CHECKPOINT_NAME)
    torch.save(
        {
            "epoch": epoch + 1,
            "model_state_dict": net.state_dict(),
            "optimizer_state_dict": optimizer.state_dict(),
        },
        checkpoint_path,
    )


def parse_args():
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument("--epochs", type=int, default=10)
    args = parser.parse_args()
    return args


def train():
    args = parse_args()
    load_checkpoint()
    batch_size = 4
    dataloader = DataLoader(RandomDataset(), batch_size=batch_size, shuffle=True)
    num_epochs = args.epochs
    print(num_epochs)
    for epoch in range(start_epoch, num_epochs):
        net.train()
        for i, (inputs, targets) in enumerate(dataloader):
            # Forward pass
            outputs = net(inputs)
            loss = criterion(outputs, targets)
            
            # Backward pass and optimization
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            
            # Print training progress
            if (i+1) % 10 == 0:
                print(f'Epoch [{epoch+1}/{num_epochs}], Step [{i+1}/{len(dataloader)}], Loss: {loss.item()}')
        
        # Save checkpoint
        save_checkpoint(epoch=epoch)
    # Save the final model to the model output directory mounted by PAI
    torch.save(net.state_dict(), os.path.join(os.environ.get("PAI_OUTPUT_MODEL", "."), "model.pt"))
    


if __name__ == "__main__":
    train()

We submit the above code to PAI for execution. At the end of training, the job saves the final model to the mounted OSS path.

from pai.estimator import Estimator
from pai.image import retrieve


epochs = 10


# By default, the training job mounts an OSS bucket path to /ml/output/checkpoints/
est = Estimator(
    command="python train.py --epochs {}".format(epochs),
    source_dir="./train_src/",
    image_uri=retrieve("PyTorch", "latest").image_uri,
    instance_type="ecs.c6.large",
    base_job_name="torch_checkpoint",
)

est.fit()
View the job detail by accessing the console URI: https://pai.console.aliyun.com/?regionId=cn-hangzhou&workspaceId=58670#/training/jobs/train1u1it512gqg
TrainingJob launch starting
MAX_PARALLELISM=0
C_INCLUDE_PATH=/home/pai/include
KUBERNETES_PORT=tcp://10.192.0.1:443
KUBERNETES_SERVICE_PORT=443
LANGUAGE=en_US.UTF-8
PIP_TRUSTED_HOST=mirrors.cloud.aliyuncs.com
MASTER_ADDR=train1u1it512gqg-master-0
HOSTNAME=train1u1it512gqg-master-0
LD_LIBRARY_PATH=:/lib/x86_64-linux-gnu:/home/pai/lib:/home/pai/jre/lib/amd64/server
MASTER_PORT=23456
HOME=/root
PAI_USER_ARGS=
PYTHONUNBUFFERED=0
PAI_OUTPUT_CHECKPOINTS=/ml/output/checkpoints/
PAI_CONFIG_DIR=/ml/input/config/
WORLD_SIZE=1
REGION_ID=cn-hangzhou
CPLUS_INCLUDE_PATH=/home/pai/include
RANK=0
OPAL_PREFIX=/home/pai/
PAI_TRAINING_JOB_ID=train1u1it512gqg
TERM=xterm-color
KUBERNETES_PORT_443_TCP_ADDR=10.192.0.1
PAI_OUTPUT_MODEL=/ml/output/model/
ELASTIC_TRAINING_ENABLED=false
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/home/pai/bin:/home/pai/hadoop/bin
PIP_INDEX_URL=https://mirrors.cloud.aliyuncs.com/pypi/simple
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
LANG=en_US.UTF-8
aliyun_logs_containerType_tags=containerType=Algorithm
PAI_TRAINING_USE_ECI=true
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.192.0.1:443
ELASTIC_INFERENCE_ENABLED=false
LC_ALL=en_US.UTF-8
JAVA_HOME=/home/pai
KUBERNETES_SERVICE_HOST=10.192.0.1
PWD=/
PAI_HPS={}
TZ=UTC
HADOOP_HOME=/home/pai/hadoop
PAI_OUTPUT_LOGS=/ml/output/logs/
aliyun_logs_trainingJobId_tags=trainingJobId=train1u1it512gqg
PAI_ODPS_CREDENTIAL=/ml/input/credential/odps.json
PAI_WORKING_DIR=/ml/usercode/
Change to Working Directory, /ml/usercode/
User program launching
-----------------------------------------------------------------
10
Epoch [1/10], Step [10/250], Loss: 0.3664854168891907
...
Epoch [10/10], Step [250/250], Loss: 0.6645953059196472

Training job (train1u1it512gqg) succeeded, you can check the logs/metrics/output in  the console:
https://pai.console.aliyun.com/?regionId=cn-hangzhou&workspaceId=58670#/training/jobs/train1u1it512gqg
# The checkpoints path of the training job
print(est.checkpoints_data())

The training job above iterated over the training data for 10 epochs. Using its checkpoints, we can continue training on top of the resulting model, for example by iterating over the training data for another 20 epochs.

from pai.estimator import Estimator
from pai.image import retrieve


# Train for 30 epochs in total over the training data
epochs = 30

resume_est = Estimator(
    command="python train.py --epochs {}".format(epochs),
    source_dir="./train_src/",
    image_uri=retrieve("PyTorch", "latest").image_uri,
    instance_type="ecs.c6.large",
    # Use the checkpoints of the previous training job; the corresponding OSS bucket path is mounted to /ml/output/checkpoints
    checkpoints_path=est.checkpoints_data(),
    base_job_name="torch_resume_checkpoint",
)

resume_est.fit()
View the job detail by accessing the console URI: https://pai.console.aliyun.com/?regionId=cn-hangzhou&workspaceId=58670#/training/jobs/trainu90lc57j1vm
TrainingJob launch starting
...
Change to Working Directory, /ml/usercode/
User program launching
-----------------------------------------------------------------
30
Epoch [11/30], Step [10/250], Loss: 0.678845226764679
Epoch [11/30], Step [20/250], Loss: 0.6292213201522827
Epoch [11/30], Step [30/250], Loss: 0.6856911182403564
Epoch [11/30], Step [40/250], Loss: 0.6147192716598511
Epoch [11/30], Step [50/250], Loss: 0.7846511602401733
Epoch [11/30], Step [60/250], Loss: 0.6719473004341125
Epoch [11/30], Step [70/250], Loss: 0.8227031826972961
Epoch [11/30], Step [80/250], Loss: 0.7861220836639404
Epoch [11/30], Step [90/250], Loss: 0.7436649203300476
Epoch [11/30], Step [100/250], Loss: 0.8053247928619385
Epoch [11/30], Step [110/250], Loss: 0.716484546661377
Epoch [11/30], Step [120/250], Loss: 0.6527263522148132
Epoch [11/30], Step [130/250], Loss: 0.7980918884277344
Epoch [11/30], Step [140/250], Loss: 0.6761615872383118
Epoch [11/30], Step [150/250], Loss: 0.8030520081520081
Epoch [11/30], Step [160/250], Loss: 0.6580255627632141
Epoch [11/30], Step [170/250], Loss: 0.7671869993209839
Epoch [11/30], Step [180/250], Loss: 0.6622000932693481
Epoch [11/30], Step [190/250], Loss: 0.747247576713562
Epoch [11/30], Step [200/250], Loss: 0.705307126045227
Epoch [11/30], Step [210/250], Loss: 0.6516950130462646
Epoch [11/30], Step [220/250], Loss: 0.6065223217010498
Epoch [11/30], Step [230/250], Loss: 0.6885045766830444
Epoch [11/30], Step [240/250], Loss: 0.7392936944961548
Epoch [11/30], Step [250/250], Loss: 0.6803852319717407
Epoch [12/30], Step [10/250], Loss: 0.8813486695289612
Epoch [12/30], Step [20/250], Loss: 0.7780698537826538
Epoch [12/30], Step [30/250], Loss: 0.7158650159835815
Epoch [12/30], Step [40/250], Loss: 0.5826153755187988
Epoch [12/30], Step [50/250], Loss: 0.6013429760932922
Epoch [12/30], Step [60/250], Loss: 0.7084614634513855
Epoch [12/30], Step [70/250], Loss: 0.6825753450393677
Epoch [12/30], Step [80/250], Loss: 0.6074261665344238
Epoch [12/30], Step [90/250], Loss: 0.8619674444198608
Epoch [12/30], Step [100/250], Loss: 0.6013283729553223
Epoch [12/30], Step [110/250], Loss: 0.6808617115020752
Epoch [12/30], Step [120/250], Loss: 0.6765388250350952
Epoch [12/30], Step [130/250], Loss: 0.7072106599807739
Epoch [12/30], Step [140/250], Loss: 0.6905199289321899
Epoch [12/30], Step [150/250], Loss: 0.6942532062530518
Epoch [12/30], Step [160/250], Loss: 0.7181805968284607
Epoch [12/30], Step [170/250], Loss: 0.6357207298278809
Epoch [12/30], Step [180/250], Loss: 0.6719130277633667
Epoch [12/30], Step [190/250], Loss: 0.7218160629272461
Epoch [12/30], Step [200/250], Loss: 0.7158771753311157
Epoch [12/30], Step [210/250], Loss: 0.7585588693618774
Epoch [12/30], Step [220/250], Loss: 0.8121419548988342
Epoch [12/30], Step [230/250], Loss: 0.7744668126106262
Epoch [12/30], Step [240/250], Loss: 0.7164073586463928
Epoch [12/30], Step [250/250], Loss: 0.5488151907920837
Epoch [13/30], Step [10/250], Loss: 0.7662173509597778
Epoch [13/30], Step [20/250], Loss: 0.7802825570106506
Epoch [13/30], Step [30/250], Loss: 0.7456352114677429
Epoch [13/30], Step [40/250], Loss: 0.6143842935562134
Epoch [13/30], Step [50/250], Loss: 0.7393404245376587
Epoch [13/30], Step [60/250], Loss: 0.6536136865615845
Epoch [13/30], Step [70/250], Loss: 0.7647539377212524
Epoch [13/30], Step [80/250], Loss: 0.6415259838104248
Epoch [13/30], Step [90/250], Loss: 0.8065975904464722
Epoch [13/30], Step [100/250], Loss: 0.654565155506134
Epoch [13/30], Step [110/250], Loss: 0.6512014865875244
Epoch [13/30], Step [120/250], Loss: 0.6851429343223572
Epoch [13/30], Step [130/250], Loss: 0.7639355659484863
Epoch [13/30], Step [140/250], Loss: 0.7886079549789429
Epoch [13/30], Step [150/250], Loss: 0.677024245262146
Epoch [13/30], Step [160/250], Loss: 0.6869807243347168
Epoch [13/30], Step [170/250], Loss: 0.7076682448387146
Epoch [13/30], Step [180/250], Loss: 0.6720783710479736
Epoch [13/30], Step [190/250], Loss: 0.6578226685523987
Epoch [13/30], Step [200/250], Loss: 0.6924010515213013
Epoch [13/30], Step [210/250], Loss: 0.8084946870803833
Epoch [13/30], Step [220/250], Loss: 0.7015032768249512
Epoch [13/30], Step [230/250], Loss: 0.6897311210632324
Epoch [13/30], Step [240/250], Loss: 0.7233715653419495
Epoch [13/30], Step [250/250], Loss: 0.82469242811203
Epoch [14/30], Step [10/250], Loss: 0.7118442058563232
Epoch [14/30], Step [20/250], Loss: 0.66881263256073
Epoch [14/30], Step [30/250], Loss: 0.6966590881347656
Epoch [14/30], Step [40/250], Loss: 0.8390185236930847
Epoch [14/30], Step [50/250], Loss: 0.7978378534317017
Epoch [14/30], Step [60/250], Loss: 0.6207278966903687
Epoch [14/30], Step [70/250], Loss: 0.6512827277183533
Epoch [14/30], Step [80/250], Loss: 0.6850301027297974
Epoch [14/30], Step [90/250], Loss: 0.628646194934845
Epoch [14/30], Step [100/250], Loss: 0.6093996167182922
Epoch [14/30], Step [110/250], Loss: 0.7588788866996765
Epoch [14/30], Step [120/250], Loss: 0.6795099377632141
Epoch [14/30], Step [130/250], Loss: 0.6357916593551636
Epoch [14/30], Step [140/250], Loss: 0.7358158826828003
Epoch [14/30], Step [150/250], Loss: 0.6896149516105652
Epoch [14/30], Step [160/250], Loss: 0.6862155199050903
Epoch [14/30], Step [170/250], Loss: 0.659408688545227
Epoch [14/30], Step [180/250], Loss: 0.717597246170044
Epoch [14/30], Step [190/250], Loss: 0.6779205203056335
Epoch [14/30], Step [200/250], Loss: 0.6569654941558838
Epoch [14/30], Step [210/250], Loss: 0.6521044373512268
Epoch [14/30], Step [220/250], Loss: 0.5803452134132385
Epoch [14/30], Step [230/250], Loss: 0.6112836599349976
Epoch [14/30], Step [240/250], Loss: 0.6311125755310059
Epoch [14/30], Step [250/250], Loss: 0.6427040696144104
Epoch [15/30], Step [10/250], Loss: 0.7193827629089355
Epoch [15/30], Step [20/250], Loss: 0.6781796216964722
Epoch [15/30], Step [30/250], Loss: 0.7042354345321655
Epoch [15/30], Step [40/250], Loss: 0.6776638627052307
Epoch [15/30], Step [50/250], Loss: 0.6593765020370483
Epoch [15/30], Step [60/250], Loss: 0.6749820113182068
Epoch [15/30], Step [70/250], Loss: 0.6199281811714172
Epoch [15/30], Step [80/250], Loss: 0.6898410320281982
Epoch [15/30], Step [90/250], Loss: 0.6938673257827759
Epoch [15/30], Step [100/250], Loss: 0.6369883418083191
Epoch [15/30], Step [110/250], Loss: 0.6758348345756531
Epoch [15/30], Step [120/250], Loss: 0.7379288673400879
Epoch [15/30], Step [130/250], Loss: 0.6447997689247131
Epoch [15/30], Step [140/250], Loss: 0.6910532712936401
Epoch [15/30], Step [150/250], Loss: 0.7426170110702515
Epoch [15/30], Step [160/250], Loss: 0.6422319412231445
Epoch [15/30], Step [170/250], Loss: 0.5789802670478821
Epoch [15/30], Step [180/250], Loss: 0.7434327602386475
Epoch [15/30], Step [190/250], Loss: 0.6754781007766724
Epoch [15/30], Step [200/250], Loss: 0.5865523815155029
Epoch [15/30], Step [210/250], Loss: 0.6548283696174622
Epoch [15/30], Step [220/250], Loss: 0.7495550513267517
Epoch [15/30], Step [230/250], Loss: 0.6538060903549194
Epoch [15/30], Step [240/250], Loss: 0.7314434051513672
Epoch [15/30], Step [250/250], Loss: 0.7135218381881714
... (log lines for epochs 16 through 29 omitted for brevity; the loss continues to fluctuate around 0.65–0.75) ...
Epoch [30/30], Step [10/250], Loss: 0.6745777130126953
Epoch [30/30], Step [20/250], Loss: 0.6881678104400635
Epoch [30/30], Step [30/250], Loss: 0.6794246435165405
Epoch [30/30], Step [40/250], Loss: 0.7122002840042114
Epoch [30/30], Step [50/250], Loss: 0.698681116104126
Epoch [30/30], Step [60/250], Loss: 0.7196323871612549
Epoch [30/30], Step [70/250], Loss: 0.6916103363037109
Epoch [30/30], Step [80/250], Loss: 0.6879148483276367
Epoch [30/30], Step [90/250], Loss: 0.7075177431106567
Epoch [30/30], Step [100/250], Loss: 0.6686447858810425
Epoch [30/30], Step [110/250], Loss: 0.7030155062675476
Epoch [30/30], Step [120/250], Loss: 0.7014066576957703
Epoch [30/30], Step [130/250], Loss: 0.7121413946151733
Epoch [30/30], Step [140/250], Loss: 0.6912719011306763
Epoch [30/30], Step [150/250], Loss: 0.6733638048171997
Epoch [30/30], Step [160/250], Loss: 0.7193289399147034
Epoch [30/30], Step [170/250], Loss: 0.6880522966384888
Epoch [30/30], Step [180/250], Loss: 0.7069193720817566
Epoch [30/30], Step [190/250], Loss: 0.6976951360702515
Epoch [30/30], Step [200/250], Loss: 0.6925494074821472
Epoch [30/30], Step [210/250], Loss: 0.6907849907875061
Epoch [30/30], Step [220/250], Loss: 0.6824172735214233
Epoch [30/30], Step [230/250], Loss: 0.6865588426589966
Epoch [30/30], Step [240/250], Loss: 0.6921617984771729
Epoch [30/30], Step [250/250], Loss: 0.6736024618148804

Training job (trainu90lc57j1vm) succeeded, you can check the logs/metrics/output in  the console:
https://pai.console.aliyun.com/?regionId=cn-hangzhou&workspaceId=58670#/training/jobs/trainu90lc57j1vm

From the training job logs, we can see that the job loaded the checkpoint produced by the previous training job and, on that basis, resumed training from the 11th epoch.

Conclusion#

Using PyTorch as an example, this document showed how to use checkpoints in PAI training jobs: the training code saves and loads checkpoint files under the /ml/output/checkpoints/ path, and those files are persisted to an OSS Bucket. Users of other training frameworks, such as TensorFlow, HuggingFace transformers, or ModelScope, can use checkpoints in their PAI training jobs in the same way.