Note
Go to the end to download the full example code.
Hyperparameter Tuning with Ray Tune#
Created On: Aug 31, 2020 | Last Updated: Jan 08, 2026 | Last Verified: Nov 05, 2024
Author: Ricardo Decal
This tutorial shows how to integrate Ray Tune into your PyTorch training workflow to perform scalable, efficient hyperparameter tuning.
In this tutorial, you will learn:

- How to modify a PyTorch training loop for use with Ray Tune
- How to scale a hyperparameter sweep to multiple nodes and GPUs without code changes
- How to define a hyperparameter search space and run a sweep with tune.Tuner
- How to use an early-stopping scheduler (ASHA) and report metrics/checkpoints
- How to resume training from a checkpoint and load the best model

Prerequisites:

- PyTorch v2.9+ and torchvision
- Ray Tune (ray[tune]) v2.52.1+
- GPU(s) are optional but recommended to speed up training
Ray is a PyTorch Foundation project and an open-source unified framework for scaling AI and Python applications. It helps you run distributed jobs by handling the complexities of distributed computing. Ray Tune is a hyperparameter tuning library built on top of Ray that lets you scale a hyperparameter sweep from your machine to a large cluster without code changes.
This tutorial adapts the PyTorch tutorial on training a CIFAR10 classifier to run a multi-GPU hyperparameter sweep with Ray Tune.
Setup#
To run this tutorial, install the following dependencies:
pip install "ray[tune]" torchvision
Then start with the imports:
from functools import partial
import os
import tempfile
from pathlib import Path
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import random_split
import torchvision
import torchvision.transforms as transforms
# New: imports for Ray Tune
import ray
from ray import tune
from ray.tune import Checkpoint
from ray.tune.schedulers import ASHAScheduler
Data loading#
Wrap the data loaders in a constructor function. In this tutorial, a global data directory is passed to the function so the dataset can be reused across trials. In a cluster setting, you can use shared storage, such as a network file system, to keep each node from downloading its own copy of the data.
def load_data(data_dir="./data"):
    # Mean and standard deviation of the CIFAR10 training subset.
    transform = transforms.Compose(
        [transforms.ToTensor(), transforms.Normalize((0.4914, 0.48216, 0.44653), (0.2022, 0.19932, 0.20086))]
    )

    trainset = torchvision.datasets.CIFAR10(
        root=data_dir, train=True, download=True, transform=transform
    )

    testset = torchvision.datasets.CIFAR10(
        root=data_dir, train=False, download=True, transform=transform
    )

    return trainset, testset
Model architecture#
This tutorial searches for the best sizes of the fully connected layers and the best learning rate. To enable this, the Net class exposes the layer sizes l1 and l2 as configurable parameters that Ray Tune can search over:
class Net(nn.Module):
    def __init__(self, l1=120, l2=84):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, l1)
        self.fc2 = nn.Linear(l1, l2)
        self.fc3 = nn.Linear(l2, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = torch.flatten(x, 1)  # flatten all dimensions except batch
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
Defining the search space#
Next, define the hyperparameters to tune and how Ray Tune should sample them. Ray Tune offers a variety of search space distributions for different parameter types: loguniform, uniform, choice, randint, grid, and more. You can also express complex dependencies between parameters using conditional search spaces or by sampling from arbitrary functions.
Here is the search space for this tutorial:
config = {
    "l1": tune.choice([2**i for i in range(9)]),
    "l2": tune.choice([2**i for i in range(9)]),
    "lr": tune.loguniform(1e-4, 1e-1),
    "batch_size": tune.choice([2, 4, 8, 16]),
}
tune.choice() accepts a list of values that are sampled uniformly. Here, the l1 and l2 parameter values are powers of 2 between 1 and 256, and the learning rate is sampled on a log scale between 0.0001 and 0.1. Sampling on a log scale explores a range of magnitudes on a relative scale rather than an absolute one.
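To see what log-uniform sampling means, here is a minimal standard-library sketch (an illustration of the idea, not Ray Tune's implementation): a value is drawn uniformly in log space and exponentiated back, so every order of magnitude in the range is equally likely.

```python
import math
import random

def loguniform_sample(low, high, rng=random):
    # Draw uniformly in log space, then exponentiate back.
    # Every decade between low and high is equally likely.
    log_low, log_high = math.log(low), math.log(high)
    return math.exp(rng.uniform(log_low, log_high))

samples = [loguniform_sample(1e-4, 1e-1) for _ in range(10_000)]
assert all(9.9e-5 <= s <= 0.101 for s in samples)

# Fraction of samples in the first decade [1e-4, 1e-3):
below_1e3 = sum(s < 1e-3 for s in samples) / len(samples)
```

With the range [1e-4, 1e-1], roughly a third of the draws land in each of the three decades, which is the "relative scale" behavior described above; a plain uniform sample over the same range would almost never pick a learning rate below 1e-2.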
The training function#
Ray Tune expects a training function that takes a configuration dictionary and runs the main training loop. As Ray Tune runs different trials, it updates the configuration dictionary for each one.
Here is the full training function, followed by an explanation of the Ray Tune integration points:
def train_cifar(config, data_dir=None):
    net = Net(config["l1"], config["l2"])
    device = config["device"]
    net = net.to(device)
    if torch.cuda.device_count() > 1:
        net = nn.DataParallel(net)

    criterion = nn.CrossEntropyLoss()
    optimizer = optim.SGD(net.parameters(), lr=config["lr"], momentum=0.9)

    # Load checkpoint if resuming training
    checkpoint = tune.get_checkpoint()
    if checkpoint:
        with checkpoint.as_directory() as checkpoint_dir:
            checkpoint_path = Path(checkpoint_dir) / "checkpoint.pt"
            checkpoint_state = torch.load(checkpoint_path)
            start_epoch = checkpoint_state["epoch"]
            net.load_state_dict(checkpoint_state["net_state_dict"])
            optimizer.load_state_dict(checkpoint_state["optimizer_state_dict"])
    else:
        start_epoch = 0

    trainset, _testset = load_data(data_dir)

    test_abs = int(len(trainset) * 0.8)
    train_subset, val_subset = random_split(
        trainset, [test_abs, len(trainset) - test_abs]
    )

    trainloader = torch.utils.data.DataLoader(
        train_subset, batch_size=int(config["batch_size"]), shuffle=True, num_workers=8
    )
    valloader = torch.utils.data.DataLoader(
        val_subset, batch_size=int(config["batch_size"]), shuffle=True, num_workers=8
    )

    for epoch in range(start_epoch, 10):  # loop over the dataset multiple times
        running_loss = 0.0
        epoch_steps = 0
        for i, data in enumerate(trainloader, 0):
            # get the inputs; data is a list of [inputs, labels]
            inputs, labels = data
            inputs, labels = inputs.to(device), labels.to(device)

            # zero the parameter gradients
            optimizer.zero_grad()

            # forward + backward + optimize
            outputs = net(inputs)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()

            # print statistics
            running_loss += loss.item()
            epoch_steps += 1
            if i % 2000 == 1999:  # print every 2000 mini-batches
                print(
                    "[%d, %5d] loss: %.3f"
                    % (epoch + 1, i + 1, running_loss / epoch_steps)
                )
                running_loss = 0.0

        # Validation loss
        val_loss = 0.0
        val_steps = 0
        total = 0
        correct = 0
        for i, data in enumerate(valloader, 0):
            with torch.no_grad():
                inputs, labels = data
                inputs, labels = inputs.to(device), labels.to(device)

                outputs = net(inputs)
                _, predicted = torch.max(outputs.data, 1)
                total += labels.size(0)
                correct += (predicted == labels).sum().item()

                loss = criterion(outputs, labels)
                val_loss += loss.cpu().numpy()
                val_steps += 1

        # Save checkpoint and report metrics
        checkpoint_data = {
            "epoch": epoch,
            "net_state_dict": net.state_dict(),
            "optimizer_state_dict": optimizer.state_dict(),
        }
        with tempfile.TemporaryDirectory() as checkpoint_dir:
            checkpoint_path = Path(checkpoint_dir) / "checkpoint.pt"
            torch.save(checkpoint_data, checkpoint_path)

            checkpoint = Checkpoint.from_directory(checkpoint_dir)
            tune.report(
                {"loss": val_loss / val_steps, "accuracy": correct / total},
                checkpoint=checkpoint,
            )

    print("Finished Training")
Key integration points#
Using hyperparameters from the config dictionary#
Ray Tune updates the config dictionary with each trial's hyperparameters. In this example, the model architecture and the optimizer receive their hyperparameters from the config dictionary.
Reporting metrics and saving checkpoints#
The most important integration is communicating with Ray Tune. Ray Tune uses the validation metrics to identify the best hyperparameter configuration and to stop underperforming trials early, which saves resources.
Checkpointing lets you load trained models later, resume a hyperparameter search, and adds fault tolerance. It is also required by some Ray Tune schedulers, such as Population Based Training, to pause and resume trials during the search.
This code in the training function loads the model and optimizer state when a checkpoint exists:
checkpoint = tune.get_checkpoint()
if checkpoint:
    with checkpoint.as_directory() as checkpoint_dir:
        checkpoint_path = Path(checkpoint_dir) / "checkpoint.pt"
        checkpoint_state = torch.load(checkpoint_path)
        start_epoch = checkpoint_state["epoch"]
        net.load_state_dict(checkpoint_state["net_state_dict"])
        optimizer.load_state_dict(checkpoint_state["optimizer_state_dict"])
At the end of each epoch, save a checkpoint and report the validation metrics:
checkpoint_data = {
    "epoch": epoch,
    "net_state_dict": net.state_dict(),
    "optimizer_state_dict": optimizer.state_dict(),
}
with tempfile.TemporaryDirectory() as checkpoint_dir:
    checkpoint_path = Path(checkpoint_dir) / "checkpoint.pt"
    torch.save(checkpoint_data, checkpoint_path)

    checkpoint = Checkpoint.from_directory(checkpoint_dir)
    tune.report(
        {"loss": val_loss / val_steps, "accuracy": correct / total},
        checkpoint=checkpoint,
    )
Ray Tune checkpoints support local filesystems, cloud storage, and distributed filesystems. See the Ray Tune storage documentation for more information.
Multi-GPU support#
GPUs can dramatically speed up training of image classification models. The training function supports multi-GPU training by wrapping the model in nn.DataParallel:
if torch.cuda.device_count() > 1:
    net = nn.DataParallel(net)
This training function supports training on CPU, a single GPU, multiple GPUs, or multiple nodes without code changes. Ray Tune automatically distributes trials across nodes based on available resources. Ray Tune also supports fractional GPUs, so several trials can share one GPU as long as the model, optimizer, and data batches fit in GPU memory.
Validation split#
The original CIFAR10 dataset has only training and test subsets. That is enough to train a single model, but hyperparameter tuning needs a validation subset. The training function creates one by holding out 20% of the training subset. The test subset is used to estimate the generalization error of the best model after the search completes.
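The 80/20 hold-out can be sketched with indices and the standard library (the training function itself uses torch.utils.data.random_split; split_indices below is an illustrative helper, not part of the tutorial's code):

```python
import random

def split_indices(n, val_fraction=0.2, seed=0):
    # Shuffle the indices, then carve off the last val_fraction
    # for validation; the rest remain for training.
    indices = list(range(n))
    random.Random(seed).shuffle(indices)
    split = int(n * (1 - val_fraction))
    return indices[:split], indices[split:]

# CIFAR10 has 50,000 training images: 40,000 train / 10,000 validation.
train_idx, val_idx = split_indices(50_000)
```

Because the two index lists are disjoint, no image can leak from the validation subset into training, which is what makes the validation loss a fair signal for the scheduler.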
Evaluation function#
After finding the best hyperparameters, evaluate the model on the held-out test set to estimate its generalization error:
def test_accuracy(net, device="cpu", data_dir=None):
    _trainset, testset = load_data(data_dir)

    testloader = torch.utils.data.DataLoader(
        testset, batch_size=4, shuffle=False, num_workers=2
    )

    correct = 0
    total = 0
    with torch.no_grad():
        for data in testloader:
            image_batch, labels = data
            image_batch, labels = image_batch.to(device), labels.to(device)
            outputs = net(image_batch)
            _, predicted = torch.max(outputs.data, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()

    return correct / total
Configuring and running Ray Tune#
With the training and evaluation functions defined, configure Ray Tune to run the hyperparameter search.
Scheduler for early stopping#
Ray Tune provides schedulers that make hyperparameter searches more efficient by detecting underperforming trials and stopping them early. The ASHAScheduler uses the Asynchronous Successive Halving Algorithm (ASHA) to aggressively terminate low-performing trials:
scheduler = ASHAScheduler(
    max_t=max_num_epochs,
    grace_period=1,
    reduction_factor=2,
)
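The successive-halving idea behind ASHA can be illustrated with a toy synchronous simulation. This is only a sketch of the pruning logic, not Ray Tune's (asynchronous) implementation; successive_halving and the loss data are invented for illustration:

```python
def successive_halving(losses_by_trial, grace_period=1, reduction_factor=2, max_t=8):
    # losses_by_trial: {trial_id: [loss after epoch 1, epoch 2, ...]},
    # lower is better. At each rung, keep the best 1/reduction_factor
    # of the surviving trials and let them train reduction_factor times longer.
    survivors = list(losses_by_trial)
    t = grace_period
    while t < max_t and len(survivors) > 1:
        survivors.sort(key=lambda tid: losses_by_trial[tid][t - 1])
        survivors = survivors[: max(1, len(survivors) // reduction_factor)]
        t *= reduction_factor
    return survivors

# Four toy trials; trial "d" improves fastest and should survive.
losses = {
    "a": [2.3, 2.3, 2.3, 2.3, 2.3, 2.3, 2.3, 2.3],
    "b": [2.0, 1.9, 1.8, 1.8, 1.8, 1.8, 1.8, 1.8],
    "c": [1.8, 1.6, 1.5, 1.4, 1.4, 1.4, 1.4, 1.4],
    "d": [1.7, 1.4, 1.1, 0.9, 0.8, 0.7, 0.7, 0.6],
}
best = successive_halving(losses)  # ["d"]
```

Note how "a" and "b" are cut after just one epoch (the grace_period), which is where the resource savings come from: poor configurations never consume the full training budget.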
Ray Tune also offers advanced search algorithms that choose the next set of hyperparameters intelligently based on previous results, instead of relying on random or grid search. Examples include Optuna and BayesOpt.
Resource allocation#
Tell Ray Tune which resources to allocate to each trial by passing a resources dictionary to tune.with_resources:
tune.with_resources(
    partial(train_cifar, data_dir=data_dir),
    resources={"cpu": cpus_per_trial, "gpu": gpus_per_trial}
)
Ray Tune manages trial placement automatically and runs trials in isolation, so you don't need to assign GPUs to processes manually.
For example, if you run this experiment on a cluster of 20 machines with 8 GPUs each, you can set gpus_per_trial = 0.5 to schedule two concurrent trials per GPU. This configuration runs up to 320 trials in parallel across the cluster.
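The arithmetic behind that example is simply (machines × GPUs per machine) / gpus_per_trial. A quick check with an illustrative helper (max_concurrent_trials is not a Ray API):

```python
def max_concurrent_trials(num_machines, gpus_per_machine, gpus_per_trial):
    # Each trial reserves gpus_per_trial logical GPUs; trials run
    # concurrently until the cluster's total GPU budget is exhausted.
    return int(num_machines * gpus_per_machine / gpus_per_trial)

print(max_concurrent_trials(20, 8, 0.5))  # prints 320
```

In practice the CPU request caps concurrency too: each trial must also obtain its cpus_per_trial CPUs, so the effective parallelism is the smaller of the two budgets.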
Note
To run this tutorial without GPUs, set gpus_per_trial=0 and expect a significantly longer runtime.
To avoid excessive runtimes during development, start with a small number of trials and epochs.
Creating the Tuner#
The Ray Tune API is modular and composable. Pass your configuration to the tune.Tuner class to create a tuner object, then call tuner.fit() to start training:
tuner = tune.Tuner(
    tune.with_resources(
        partial(train_cifar, data_dir=data_dir),
        resources={"cpu": cpus_per_trial, "gpu": gpus_per_trial}
    ),
    tune_config=tune.TuneConfig(
        metric="loss",
        mode="min",
        scheduler=scheduler,
        num_samples=num_trials,
    ),
    param_space=config,
)
results = tuner.fit()
After training completes, retrieve the best-performing trial, load its checkpoint, and evaluate it on the test set.
Putting it all together#
def main(num_trials=10, max_num_epochs=10, gpus_per_trial=0, cpus_per_trial=2):
    print("Starting hyperparameter tuning.")
    ray.init(include_dashboard=False)
    data_dir = os.path.abspath("./data")
    load_data(data_dir)  # Pre-download the dataset
    device = "cuda" if torch.cuda.is_available() else "cpu"

    config = {
        "l1": tune.choice([2**i for i in range(9)]),
        "l2": tune.choice([2**i for i in range(9)]),
        "lr": tune.loguniform(1e-4, 1e-1),
        "batch_size": tune.choice([2, 4, 8, 16]),
        "device": device,
    }
    scheduler = ASHAScheduler(
        max_t=max_num_epochs,
        grace_period=1,
        reduction_factor=2,
    )
    tuner = tune.Tuner(
        tune.with_resources(
            partial(train_cifar, data_dir=data_dir),
            resources={"cpu": cpus_per_trial, "gpu": gpus_per_trial}
        ),
        tune_config=tune.TuneConfig(
            metric="loss",
            mode="min",
            scheduler=scheduler,
            num_samples=num_trials,
        ),
        param_space=config,
    )
    results = tuner.fit()

    best_result = results.get_best_result("loss", "min")

    print(f"Best trial config: {best_result.config}")
    print(f"Best trial final validation loss: {best_result.metrics['loss']}")
    print(f"Best trial final validation accuracy: {best_result.metrics['accuracy']}")

    best_trained_model = Net(best_result.config["l1"], best_result.config["l2"])
    best_trained_model = best_trained_model.to(device)
    if gpus_per_trial > 1:
        best_trained_model = nn.DataParallel(best_trained_model)

    best_checkpoint = best_result.checkpoint
    with best_checkpoint.as_directory() as checkpoint_dir:
        checkpoint_path = Path(checkpoint_dir) / "checkpoint.pt"
        best_checkpoint_data = torch.load(checkpoint_path)

        best_trained_model.load_state_dict(best_checkpoint_data["net_state_dict"])
        test_acc = test_accuracy(best_trained_model, device, data_dir)
        print(f"Best trial test set accuracy: {test_acc}")


if __name__ == "__main__":
    # Set the number of trials, epochs, and GPUs per trial here:
    main(num_trials=10, max_num_epochs=10, gpus_per_trial=1)
Starting hyperparameter tuning.
2026-03-25 18:57:00,904 WARNING services.py:2137 -- WARNING: The object store is using /tmp instead of /dev/shm because /dev/shm has only 2147471360 bytes available. This will harm performance! You may be able to free up space by deleting files in /dev/shm. If you are inside a Docker container, you can increase /dev/shm size by passing '--shm-size=10.24gb' to 'docker run' (or add it to the run_options list in a Ray cluster config). Make sure to set this to more than 30% of available RAM.
2026-03-25 18:57:01,067 INFO worker.py:2023 -- Started a local Ray instance.
/usr/local/lib/python3.10/dist-packages/ray/_private/worker.py:2062: FutureWarning: Tip: In future versions of Ray, Ray will no longer override accelerator visible devices env var if num_gpus=0 or num_gpus=None (default). To enable this behavior and turn off this error message, set RAY_ACCEL_ENV_VAR_OVERRIDE_ON_ZERO=0
warnings.warn(
100%|██████████| 170M/170M [00:02<00:00, 71.7MB/s]
╭────────────────────────────────────────────────────────────────────╮
│ Configuration for experiment train_cifar_2026-03-25_18-57-07 │
├────────────────────────────────────────────────────────────────────┤
│ Search algorithm BasicVariantGenerator │
│ Scheduler AsyncHyperBandScheduler │
│ Number of trials 10 │
╰────────────────────────────────────────────────────────────────────╯
View detailed results here: /var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07
To visualize your results with TensorBoard, run: `tensorboard --logdir /tmp/ray/session_2026-03-25_18-56-59_248838_4336/artifacts/2026-03-25_18-57-07/train_cifar_2026-03-25_18-57-07/driver_artifacts`
Trial status: 10 PENDING
Current time: 2026-03-25 18:57:07. Total running time: 0s
Logical resource usage: 0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:A10G)
╭───────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size │
├───────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00000 PENDING 1 32 0.0735008 8 │
│ train_cifar_6bc62_00001 PENDING 64 32 0.00017124 2 │
│ train_cifar_6bc62_00002 PENDING 2 4 0.00840049 2 │
│ train_cifar_6bc62_00003 PENDING 256 32 0.0774765 4 │
│ train_cifar_6bc62_00004 PENDING 8 256 0.0171199 4 │
│ train_cifar_6bc62_00005 PENDING 256 4 0.00109075 4 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰───────────────────────────────────────────────────────────────────────────────╯
Trial train_cifar_6bc62_00000 started with configuration:
╭─────────────────────────────────────────────────╮
│ Trial train_cifar_6bc62_00000 config │
├─────────────────────────────────────────────────┤
│ batch_size 8 │
│ device cuda │
│ l1 1 │
│ l2 32 │
│ lr 0.0735 │
╰─────────────────────────────────────────────────╯
(func pid=5527) [1, 2000] loss: 2.325
(func pid=5527) [1, 4000] loss: 1.163
(func pid=5527) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00000_0_batch_size=8,l1=1,l2=32,lr=0.0735_2026-03-25_18-57-07/checkpoint_000000)
(func pid=5527) [2, 2000] loss: 2.322
Trial status: 1 RUNNING | 9 PENDING
Current time: 2026-03-25 18:57:37. Total running time: 30s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00000 with loss=2.3177831254959105 and params={'l1': 1, 'l2': 32, 'lr': 0.07350076446627962, 'batch_size': 8, 'device': 'cuda'}
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00000 RUNNING 1 32 0.0735008 8 1 18.4183 2.31778 0.0978 │
│ train_cifar_6bc62_00001 PENDING 64 32 0.00017124 2 │
│ train_cifar_6bc62_00002 PENDING 2 4 0.00840049 2 │
│ train_cifar_6bc62_00003 PENDING 256 32 0.0774765 4 │
│ train_cifar_6bc62_00004 PENDING 8 256 0.0171199 4 │
│ train_cifar_6bc62_00005 PENDING 256 4 0.00109075 4 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=5527) [2, 4000] loss: 1.162
(func pid=5527) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00000_0_batch_size=8,l1=1,l2=32,lr=0.0735_2026-03-25_18-57-07/checkpoint_000001)
(func pid=5527) [3, 2000] loss: 2.323
(func pid=5527) [3, 4000] loss: 1.162
(func pid=5527) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00000_0_batch_size=8,l1=1,l2=32,lr=0.0735_2026-03-25_18-57-07/checkpoint_000002)
(func pid=5527) [4, 2000] loss: 2.325
Trial status: 1 RUNNING | 9 PENDING
Current time: 2026-03-25 18:58:07. Total running time: 1min 0s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00000 with loss=2.316563980102539 and params={'l1': 1, 'l2': 32, 'lr': 0.07350076446627962, 'batch_size': 8, 'device': 'cuda'}
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00000 RUNNING 1 32 0.0735008 8 3 50.7944 2.31656 0.1028 │
│ train_cifar_6bc62_00001 PENDING 64 32 0.00017124 2 │
│ train_cifar_6bc62_00002 PENDING 2 4 0.00840049 2 │
│ train_cifar_6bc62_00003 PENDING 256 32 0.0774765 4 │
│ train_cifar_6bc62_00004 PENDING 8 256 0.0171199 4 │
│ train_cifar_6bc62_00005 PENDING 256 4 0.00109075 4 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=5527) [4, 4000] loss: 1.161
(func pid=5527) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00000_0_batch_size=8,l1=1,l2=32,lr=0.0735_2026-03-25_18-57-07/checkpoint_000003)
(func pid=5527) [5, 2000] loss: 2.322
(func pid=5527) [5, 4000] loss: 1.161
(func pid=5527) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00000_0_batch_size=8,l1=1,l2=32,lr=0.0735_2026-03-25_18-57-07/checkpoint_000004)
Trial status: 1 RUNNING | 9 PENDING
Current time: 2026-03-25 18:58:37. Total running time: 1min 30s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00000 with loss=2.3217419649124147 and params={'l1': 1, 'l2': 32, 'lr': 0.07350076446627962, 'batch_size': 8, 'device': 'cuda'}
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00000 RUNNING 1 32 0.0735008 8 5 83.3768 2.32174 0.0978 │
│ train_cifar_6bc62_00001 PENDING 64 32 0.00017124 2 │
│ train_cifar_6bc62_00002 PENDING 2 4 0.00840049 2 │
│ train_cifar_6bc62_00003 PENDING 256 32 0.0774765 4 │
│ train_cifar_6bc62_00004 PENDING 8 256 0.0171199 4 │
│ train_cifar_6bc62_00005 PENDING 256 4 0.00109075 4 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=5527) [6, 2000] loss: 2.322
(func pid=5527) [6, 4000] loss: 1.161
(func pid=5527) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00000_0_batch_size=8,l1=1,l2=32,lr=0.0735_2026-03-25_18-57-07/checkpoint_000005)
(func pid=5527) [7, 2000] loss: 2.324
(func pid=5527) [7, 4000] loss: 1.163
(func pid=5527) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00000_0_batch_size=8,l1=1,l2=32,lr=0.0735_2026-03-25_18-57-07/checkpoint_000006)
Trial status: 1 RUNNING | 9 PENDING
Current time: 2026-03-25 18:59:07. Total running time: 2min 0s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00000 with loss=2.323030565071106 and params={'l1': 1, 'l2': 32, 'lr': 0.07350076446627962, 'batch_size': 8, 'device': 'cuda'}
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00000 RUNNING 1 32 0.0735008 8 7 115.812 2.32303 0.1049 │
│ train_cifar_6bc62_00001 PENDING 64 32 0.00017124 2 │
│ train_cifar_6bc62_00002 PENDING 2 4 0.00840049 2 │
│ train_cifar_6bc62_00003 PENDING 256 32 0.0774765 4 │
│ train_cifar_6bc62_00004 PENDING 8 256 0.0171199 4 │
│ train_cifar_6bc62_00005 PENDING 256 4 0.00109075 4 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=5527) [8, 2000] loss: 2.324
(func pid=5527) [8, 4000] loss: 1.163
(func pid=5527) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00000_0_batch_size=8,l1=1,l2=32,lr=0.0735_2026-03-25_18-57-07/checkpoint_000007)
(func pid=5527) [9, 2000] loss: 2.324
(func pid=5527) [9, 4000] loss: 1.162
Trial status: 1 RUNNING | 9 PENDING
Current time: 2026-03-25 18:59:38. Total running time: 2min 30s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00000 with loss=2.314357503128052 and params={'l1': 1, 'l2': 32, 'lr': 0.07350076446627962, 'batch_size': 8, 'device': 'cuda'}
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00000 RUNNING 1 32 0.0735008 8 8 132.278 2.31436 0.1028 │
│ train_cifar_6bc62_00001 PENDING 64 32 0.00017124 2 │
│ train_cifar_6bc62_00002 PENDING 2 4 0.00840049 2 │
│ train_cifar_6bc62_00003 PENDING 256 32 0.0774765 4 │
│ train_cifar_6bc62_00004 PENDING 8 256 0.0171199 4 │
│ train_cifar_6bc62_00005 PENDING 256 4 0.00109075 4 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=5527) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00000_0_batch_size=8,l1=1,l2=32,lr=0.0735_2026-03-25_18-57-07/checkpoint_000008)
(func pid=5527) [10, 2000] loss: 2.323
(func pid=5527) [10, 4000] loss: 1.161
Trial train_cifar_6bc62_00000 completed after 10 iterations at 2026-03-25 18:59:55. Total running time: 2min 48s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_6bc62_00000 result │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name checkpoint_000009 │
│ time_this_iter_s 16.00276 │
│ time_total_s 164.44886 │
│ training_iteration 10 │
│ accuracy 0.1023 │
│ loss 2.31295 │
╰────────────────────────────────────────────────────────────╯
(func pid=5527) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00000_0_batch_size=8,l1=1,l2=32,lr=0.0735_2026-03-25_18-57-07/checkpoint_000009)
Trial train_cifar_6bc62_00001 started with configuration:
╭──────────────────────────────────────────────────╮
│ Trial train_cifar_6bc62_00001 config │
├──────────────────────────────────────────────────┤
│ batch_size 2 │
│ device cuda │
│ l1 64 │
│ l2 32 │
│ lr 0.00017 │
╰──────────────────────────────────────────────────╯
(func pid=6565) [1, 2000] loss: 2.304
Trial status: 1 TERMINATED | 1 RUNNING | 8 PENDING
Current time: 2026-03-25 19:00:08. Total running time: 3min 0s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00000 with loss=2.3129542194366457 and params={'l1': 1, 'l2': 32, 'lr': 0.07350076446627962, 'batch_size': 8, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00001 RUNNING 64 32 0.00017124 2 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00002 PENDING 2 4 0.00840049 2 │
│ train_cifar_6bc62_00003 PENDING 256 32 0.0774765 4 │
│ train_cifar_6bc62_00004 PENDING 8 256 0.0171199 4 │
│ train_cifar_6bc62_00005 PENDING 256 4 0.00109075 4 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=6565) [1, 4000] loss: 1.131
(func pid=6565) [1, 6000] loss: 0.712
(func pid=6565) [1, 8000] loss: 0.503
(func pid=6565) [1, 10000] loss: 0.374
(func pid=6565) [1, 12000] loss: 0.299
Trial status: 1 TERMINATED | 1 RUNNING | 8 PENDING
Current time: 2026-03-25 19:00:38. Total running time: 3min 30s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00000 with loss=2.3129542194366457 and params={'l1': 1, 'l2': 32, 'lr': 0.07350076446627962, 'batch_size': 8, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00001 RUNNING 64 32 0.00017124 2 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00002 PENDING 2 4 0.00840049 2 │
│ train_cifar_6bc62_00003 PENDING 256 32 0.0774765 4 │
│ train_cifar_6bc62_00004 PENDING 8 256 0.0171199 4 │
│ train_cifar_6bc62_00005 PENDING 256 4 0.00109075 4 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=6565) [1, 14000] loss: 0.246
(func pid=6565) [1, 16000] loss: 0.211
(func pid=6565) [1, 18000] loss: 0.183
(func pid=6565) [1, 20000] loss: 0.164
(func pid=6565) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00001_1_batch_size=2,l1=64,l2=32,lr=0.0002_2026-03-25_18-57-07/checkpoint_000000)
(func pid=6565) [2, 2000] loss: 1.584
Trial status: 1 TERMINATED | 1 RUNNING | 8 PENDING
Current time: 2026-03-25 19:01:08. Total running time: 4min 0s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.557044204121828 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00001 RUNNING 64 32 0.00017124 2 1 62.2594 1.55704 0.4319 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00002 PENDING 2 4 0.00840049 2 │
│ train_cifar_6bc62_00003 PENDING 256 32 0.0774765 4 │
│ train_cifar_6bc62_00004 PENDING 8 256 0.0171199 4 │
│ train_cifar_6bc62_00005 PENDING 256 4 0.00109075 4 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=6565) [2, 4000] loss: 0.768
(func pid=6565) [2, 6000] loss: 0.511
(func pid=6565) [2, 8000] loss: 0.377
(func pid=6565) [2, 10000] loss: 0.294
(func pid=6565) [2, 12000] loss: 0.248
Trial status: 1 TERMINATED | 1 RUNNING | 8 PENDING
Current time: 2026-03-25 19:01:38. Total running time: 4min 30s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.557044204121828 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00001 RUNNING 64 32 0.00017124 2 1 62.2594 1.55704 0.4319 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00002 PENDING 2 4 0.00840049 2 │
│ train_cifar_6bc62_00003 PENDING 256 32 0.0774765 4 │
│ train_cifar_6bc62_00004 PENDING 8 256 0.0171199 4 │
│ train_cifar_6bc62_00005 PENDING 256 4 0.00109075 4 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=6565) [2, 14000] loss: 0.208
(func pid=6565) [2, 16000] loss: 0.176
(func pid=6565) [2, 18000] loss: 0.155
(func pid=6565) [2, 20000] loss: 0.143
(func pid=6565) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00001_1_batch_size=2,l1=64,l2=32,lr=0.0002_2026-03-25_18-57-07/checkpoint_000001)
(func pid=6565) [3, 2000] loss: 1.379
Trial status: 1 TERMINATED | 1 RUNNING | 8 PENDING
Current time: 2026-03-25 19:02:08. Total running time: 5min 0s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.3941712468542158 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00001 RUNNING 64 32 0.00017124 2 2 122.198 1.39417 0.499 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00002 PENDING 2 4 0.00840049 2 │
│ train_cifar_6bc62_00003 PENDING 256 32 0.0774765 4 │
│ train_cifar_6bc62_00004 PENDING 8 256 0.0171199 4 │
│ train_cifar_6bc62_00005 PENDING 256 4 0.00109075 4 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=6565) [3, 4000] loss: 0.679
(func pid=6565) [3, 6000] loss: 0.451
(func pid=6565) [3, 8000] loss: 0.340
(func pid=6565) [3, 10000] loss: 0.269
(func pid=6565) [3, 12000] loss: 0.221
Trial status: 1 TERMINATED | 1 RUNNING | 8 PENDING
Current time: 2026-03-25 19:02:38. Total running time: 5min 30s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.3941712468542158 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00001 RUNNING 64 32 0.00017124 2 2 122.198 1.39417 0.499 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00002 PENDING 2 4 0.00840049 2 │
│ train_cifar_6bc62_00003 PENDING 256 32 0.0774765 4 │
│ train_cifar_6bc62_00004 PENDING 8 256 0.0171199 4 │
│ train_cifar_6bc62_00005 PENDING 256 4 0.00109075 4 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=6565) [3, 14000] loss: 0.189
(func pid=6565) [3, 16000] loss: 0.163
(func pid=6565) [3, 18000] loss: 0.145
(func pid=6565) [3, 20000] loss: 0.132
(func pid=6565) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00001_1_batch_size=2,l1=64,l2=32,lr=0.0002_2026-03-25_18-57-07/checkpoint_000002)
(func pid=6565) [4, 2000] loss: 1.271
Trial status: 1 TERMINATED | 1 RUNNING | 8 PENDING
Current time: 2026-03-25 19:03:08. Total running time: 6min 1s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.317358442920819 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00001 RUNNING 64 32 0.00017124 2 3 182.235 1.31736 0.5275 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00002 PENDING 2 4 0.00840049 2 │
│ train_cifar_6bc62_00003 PENDING 256 32 0.0774765 4 │
│ train_cifar_6bc62_00004 PENDING 8 256 0.0171199 4 │
│ train_cifar_6bc62_00005 PENDING 256 4 0.00109075 4 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=6565) [4, 4000] loss: 0.617
(func pid=6565) [4, 6000] loss: 0.422
(func pid=6565) [4, 8000] loss: 0.308
(func pid=6565) [4, 10000] loss: 0.249
(func pid=6565) [4, 12000] loss: 0.208
(func pid=6565) [4, 14000] loss: 0.178
Trial status: 1 TERMINATED | 1 RUNNING | 8 PENDING
Current time: 2026-03-25 19:03:38. Total running time: 6min 31s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.317358442920819 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00001 RUNNING 64 32 0.00017124 2 3 182.235 1.31736 0.5275 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00002 PENDING 2 4 0.00840049 2 │
│ train_cifar_6bc62_00003 PENDING 256 32 0.0774765 4 │
│ train_cifar_6bc62_00004 PENDING 8 256 0.0171199 4 │
│ train_cifar_6bc62_00005 PENDING 256 4 0.00109075 4 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=6565) [4, 16000] loss: 0.152
(func pid=6565) [4, 18000] loss: 0.140
(func pid=6565) [4, 20000] loss: 0.124
(func pid=6565) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00001_1_batch_size=2,l1=64,l2=32,lr=0.0002_2026-03-25_18-57-07/checkpoint_000003)
(func pid=6565) [5, 2000] loss: 1.172
Trial status: 1 TERMINATED | 1 RUNNING | 8 PENDING
Current time: 2026-03-25 19:04:08. Total running time: 7min 1s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.2637821143429726 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00001 RUNNING 64 32 0.00017124 2 4 242.125 1.26378 0.5544 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00002 PENDING 2 4 0.00840049 2 │
│ train_cifar_6bc62_00003 PENDING 256 32 0.0774765 4 │
│ train_cifar_6bc62_00004 PENDING 8 256 0.0171199 4 │
│ train_cifar_6bc62_00005 PENDING 256 4 0.00109075 4 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=6565) [5, 4000] loss: 0.606
(func pid=6565) [5, 6000] loss: 0.394
(func pid=6565) [5, 8000] loss: 0.303
(func pid=6565) [5, 10000] loss: 0.235
(func pid=6565) [5, 12000] loss: 0.201
(func pid=6565) [5, 14000] loss: 0.166
Trial status: 1 TERMINATED | 1 RUNNING | 8 PENDING
Current time: 2026-03-25 19:04:38. Total running time: 7min 31s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.2637821143429726 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00001 RUNNING 64 32 0.00017124 2 4 242.125 1.26378 0.5544 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00002 PENDING 2 4 0.00840049 2 │
│ train_cifar_6bc62_00003 PENDING 256 32 0.0774765 4 │
│ train_cifar_6bc62_00004 PENDING 8 256 0.0171199 4 │
│ train_cifar_6bc62_00005 PENDING 256 4 0.00109075 4 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=6565) [5, 16000] loss: 0.145
(func pid=6565) [5, 18000] loss: 0.130
(func pid=6565) [5, 20000] loss: 0.116
(func pid=6565) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00001_1_batch_size=2,l1=64,l2=32,lr=0.0002_2026-03-25_18-57-07/checkpoint_000004)
(func pid=6565) [6, 2000] loss: 1.149
Trial status: 1 TERMINATED | 1 RUNNING | 8 PENDING
Current time: 2026-03-25 19:05:08. Total running time: 8min 1s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.1834670189762488 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00001 RUNNING 64 32 0.00017124 2 5 302.26 1.18347 0.5839 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00002 PENDING 2 4 0.00840049 2 │
│ train_cifar_6bc62_00003 PENDING 256 32 0.0774765 4 │
│ train_cifar_6bc62_00004 PENDING 8 256 0.0171199 4 │
│ train_cifar_6bc62_00005 PENDING 256 4 0.00109075 4 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=6565) [6, 4000] loss: 0.560
(func pid=6565) [6, 6000] loss: 0.373
(func pid=6565) [6, 8000] loss: 0.284
(func pid=6565) [6, 10000] loss: 0.226
(func pid=6565) [6, 12000] loss: 0.184
(func pid=6565) [6, 14000] loss: 0.159
Trial status: 1 TERMINATED | 1 RUNNING | 8 PENDING
Current time: 2026-03-25 19:05:38. Total running time: 8min 31s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.1834670189762488 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00001 RUNNING 64 32 0.00017124 2 5 302.26 1.18347 0.5839 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00002 PENDING 2 4 0.00840049 2 │
│ train_cifar_6bc62_00003 PENDING 256 32 0.0774765 4 │
│ train_cifar_6bc62_00004 PENDING 8 256 0.0171199 4 │
│ train_cifar_6bc62_00005 PENDING 256 4 0.00109075 4 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=6565) [6, 16000] loss: 0.141
(func pid=6565) [6, 18000] loss: 0.125
(func pid=6565) [6, 20000] loss: 0.111
(func pid=6565) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00001_1_batch_size=2,l1=64,l2=32,lr=0.0002_2026-03-25_18-57-07/checkpoint_000005)
(func pid=6565) [7, 2000] loss: 1.047
Trial status: 1 TERMINATED | 1 RUNNING | 8 PENDING
Current time: 2026-03-25 19:06:08. Total running time: 9min 1s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.2165090228671207 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00001 RUNNING 64 32 0.00017124 2 6 361.975 1.21651 0.5777 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00002 PENDING 2 4 0.00840049 2 │
│ train_cifar_6bc62_00003 PENDING 256 32 0.0774765 4 │
│ train_cifar_6bc62_00004 PENDING 8 256 0.0171199 4 │
│ train_cifar_6bc62_00005 PENDING 256 4 0.00109075 4 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=6565) [7, 4000] loss: 0.543
(func pid=6565) [7, 6000] loss: 0.352
(func pid=6565) [7, 8000] loss: 0.274
(func pid=6565) [7, 10000] loss: 0.212
(func pid=6565) [7, 12000] loss: 0.180
(func pid=6565) [7, 14000] loss: 0.159
Trial status: 1 TERMINATED | 1 RUNNING | 8 PENDING
Current time: 2026-03-25 19:06:38. Total running time: 9min 31s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.2165090228671207 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00001 RUNNING 64 32 0.00017124 2 6 361.975 1.21651 0.5777 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00002 PENDING 2 4 0.00840049 2 │
│ train_cifar_6bc62_00003 PENDING 256 32 0.0774765 4 │
│ train_cifar_6bc62_00004 PENDING 8 256 0.0171199 4 │
│ train_cifar_6bc62_00005 PENDING 256 4 0.00109075 4 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=6565) [7, 16000] loss: 0.135
(func pid=6565) [7, 18000] loss: 0.120
(func pid=6565) [7, 20000] loss: 0.108
(func pid=6565) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00001_1_batch_size=2,l1=64,l2=32,lr=0.0002_2026-03-25_18-57-07/checkpoint_000006)
(func pid=6565) [8, 2000] loss: 1.015
Trial status: 1 TERMINATED | 1 RUNNING | 8 PENDING
Current time: 2026-03-25 19:07:09. Total running time: 10min 1s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.1480290637508965 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00001 RUNNING 64 32 0.00017124 2 7 422.197 1.14803 0.5941 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00002 PENDING 2 4 0.00840049 2 │
│ train_cifar_6bc62_00003 PENDING 256 32 0.0774765 4 │
│ train_cifar_6bc62_00004 PENDING 8 256 0.0171199 4 │
│ train_cifar_6bc62_00005 PENDING 256 4 0.00109075 4 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=6565) [8, 4000] loss: 0.521
(func pid=6565) [8, 6000] loss: 0.347
(func pid=6565) [8, 8000] loss: 0.262
(func pid=6565) [8, 10000] loss: 0.208
(func pid=6565) [8, 12000] loss: 0.169
(func pid=6565) [8, 14000] loss: 0.150
Trial status: 1 TERMINATED | 1 RUNNING | 8 PENDING
Current time: 2026-03-25 19:07:39. Total running time: 10min 31s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.1480290637508965 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00001 RUNNING 64 32 0.00017124 2 7 422.197 1.14803 0.5941 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00002 PENDING 2 4 0.00840049 2 │
│ train_cifar_6bc62_00003 PENDING 256 32 0.0774765 4 │
│ train_cifar_6bc62_00004 PENDING 8 256 0.0171199 4 │
│ train_cifar_6bc62_00005 PENDING 256 4 0.00109075 4 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=6565) [8, 16000] loss: 0.126
(func pid=6565) [8, 18000] loss: 0.118
(func pid=6565) [8, 20000] loss: 0.108
(func pid=6565) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00001_1_batch_size=2,l1=64,l2=32,lr=0.0002_2026-03-25_18-57-07/checkpoint_000007)
(func pid=6565) [9, 2000] loss: 0.973
Trial status: 1 TERMINATED | 1 RUNNING | 8 PENDING
Current time: 2026-03-25 19:08:09. Total running time: 11min 1s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.1974248337265103 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00001 RUNNING 64 32 0.00017124 2 8 482.447 1.19742 0.5805 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00002 PENDING 2 4 0.00840049 2 │
│ train_cifar_6bc62_00003 PENDING 256 32 0.0774765 4 │
│ train_cifar_6bc62_00004 PENDING 8 256 0.0171199 4 │
│ train_cifar_6bc62_00005 PENDING 256 4 0.00109075 4 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=6565) [9, 4000] loss: 0.496
(func pid=6565) [9, 6000] loss: 0.334
(func pid=6565) [9, 8000] loss: 0.252
(func pid=6565) [9, 10000] loss: 0.197
(func pid=6565) [9, 12000] loss: 0.171
(func pid=6565) [9, 14000] loss: 0.147
Trial status: 1 TERMINATED | 1 RUNNING | 8 PENDING
Current time: 2026-03-25 19:08:39. Total running time: 11min 31s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.1974248337265103 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00001 RUNNING 64 32 0.00017124 2 8 482.447 1.19742 0.5805 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00002 PENDING 2 4 0.00840049 2 │
│ train_cifar_6bc62_00003 PENDING 256 32 0.0774765 4 │
│ train_cifar_6bc62_00004 PENDING 8 256 0.0171199 4 │
│ train_cifar_6bc62_00005 PENDING 256 4 0.00109075 4 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=6565) [9, 16000] loss: 0.124
(func pid=6565) [9, 18000] loss: 0.115
(func pid=6565) [9, 20000] loss: 0.104
(func pid=6565) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00001_1_batch_size=2,l1=64,l2=32,lr=0.0002_2026-03-25_18-57-07/checkpoint_000008)
(func pid=6565) [10, 2000] loss: 0.953
Trial status: 1 TERMINATED | 1 RUNNING | 8 PENDING
Current time: 2026-03-25 19:09:09. Total running time: 12min 1s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.15451886014794 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00001 RUNNING 64 32 0.00017124 2 9 542.649 1.15452 0.6035 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00002 PENDING 2 4 0.00840049 2 │
│ train_cifar_6bc62_00003 PENDING 256 32 0.0774765 4 │
│ train_cifar_6bc62_00004 PENDING 8 256 0.0171199 4 │
│ train_cifar_6bc62_00005 PENDING 256 4 0.00109075 4 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=6565) [10, 4000] loss: 0.478
(func pid=6565) [10, 6000] loss: 0.319
(func pid=6565) [10, 8000] loss: 0.247
(func pid=6565) [10, 10000] loss: 0.200
(func pid=6565) [10, 12000] loss: 0.160
(func pid=6565) [10, 14000] loss: 0.138
Trial status: 1 TERMINATED | 1 RUNNING | 8 PENDING
Current time: 2026-03-25 19:09:39. Total running time: 12min 31s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.15451886014794 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00001 RUNNING 64 32 0.00017124 2 9 542.649 1.15452 0.6035 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00002 PENDING 2 4 0.00840049 2 │
│ train_cifar_6bc62_00003 PENDING 256 32 0.0774765 4 │
│ train_cifar_6bc62_00004 PENDING 8 256 0.0171199 4 │
│ train_cifar_6bc62_00005 PENDING 256 4 0.00109075 4 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=6565) [10, 16000] loss: 0.123
(func pid=6565) [10, 18000] loss: 0.112
(func pid=6565) [10, 20000] loss: 0.099
Trial train_cifar_6bc62_00001 completed after 10 iterations at 2026-03-25 19:10:02. Total running time: 12min 55s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_6bc62_00001 result │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name checkpoint_000009 │
│ time_this_iter_s 59.89805 │
│ time_total_s 602.54687 │
│ training_iteration 10 │
│ accuracy 0.6158 │
│ loss 1.12661 │
╰────────────────────────────────────────────────────────────╯
(func pid=6565) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00001_1_batch_size=2,l1=64,l2=32,lr=0.0002_2026-03-25_18-57-07/checkpoint_000009)
Trial train_cifar_6bc62_00002 started with configuration:
╭─────────────────────────────────────────────────╮
│ Trial train_cifar_6bc62_00002 config │
├─────────────────────────────────────────────────┤
│ batch_size 2 │
│ device cuda │
│ l1 2 │
│ l2 4 │
│ lr 0.0084 │
╰─────────────────────────────────────────────────╯
Trial status: 2 TERMINATED | 1 RUNNING | 7 PENDING
Current time: 2026-03-25 19:10:09. Total running time: 13min 1s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.1266060190973353 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00002 RUNNING 2 4 0.00840049 2 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00001 TERMINATED 64 32 0.00017124 2 10 602.547 1.12661 0.6158 │
│ train_cifar_6bc62_00003 PENDING 256 32 0.0774765 4 │
│ train_cifar_6bc62_00004 PENDING 8 256 0.0171199 4 │
│ train_cifar_6bc62_00005 PENDING 256 4 0.00109075 4 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=8420) [1, 2000] loss: 2.314
(func pid=8420) [1, 4000] loss: 1.155
(func pid=8420) [1, 6000] loss: 0.770
(func pid=8420) [1, 8000] loss: 0.578
(func pid=8420) [1, 10000] loss: 0.463
Trial status: 2 TERMINATED | 1 RUNNING | 7 PENDING
Current time: 2026-03-25 19:10:39. Total running time: 13min 31s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.1266060190973353 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00002 RUNNING 2 4 0.00840049 2 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00001 TERMINATED 64 32 0.00017124 2 10 602.547 1.12661 0.6158 │
│ train_cifar_6bc62_00003 PENDING 256 32 0.0774765 4 │
│ train_cifar_6bc62_00004 PENDING 8 256 0.0171199 4 │
│ train_cifar_6bc62_00005 PENDING 256 4 0.00109075 4 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=8420) [1, 12000] loss: 0.385
(func pid=8420) [1, 14000] loss: 0.330
(func pid=8420) [1, 16000] loss: 0.289
(func pid=8420) [1, 18000] loss: 0.257
(func pid=8420) [1, 20000] loss: 0.231
Trial status: 2 TERMINATED | 1 RUNNING | 7 PENDING
Current time: 2026-03-25 19:11:09. Total running time: 14min 2s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.1266060190973353 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00002 RUNNING 2 4 0.00840049 2 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00001 TERMINATED 64 32 0.00017124 2 10 602.547 1.12661 0.6158 │
│ train_cifar_6bc62_00003 PENDING 256 32 0.0774765 4 │
│ train_cifar_6bc62_00004 PENDING 8 256 0.0171199 4 │
│ train_cifar_6bc62_00005 PENDING 256 4 0.00109075 4 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
Trial train_cifar_6bc62_00002 completed after 1 iterations at 2026-03-25 19:11:09. Total running time: 14min 2s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_6bc62_00002 result │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name checkpoint_000000 │
│ time_this_iter_s 62.43839 │
│ time_total_s 62.43839 │
│ training_iteration 1 │
│ accuracy 0.1002 │
│ loss 2.30887 │
╰────────────────────────────────────────────────────────────╯
(func pid=8420) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00002_2_batch_size=2,l1=2,l2=4,lr=0.0084_2026-03-25_18-57-07/checkpoint_000000)
Trial train_cifar_6bc62_00003 started with configuration:
╭──────────────────────────────────────────────────╮
│ Trial train_cifar_6bc62_00003 config │
├──────────────────────────────────────────────────┤
│ batch_size 4 │
│ device cuda │
│ l1 256 │
│ l2 32 │
│ lr 0.07748 │
╰──────────────────────────────────────────────────╯
(func pid=8677) [1, 2000] loss: 2.348
(func pid=8677) [1, 4000] loss: 1.173
(func pid=8677) [1, 6000] loss: 0.782
(func pid=8677) [1, 8000] loss: 0.587
Trial status: 3 TERMINATED | 1 RUNNING | 6 PENDING
Current time: 2026-03-25 19:11:39. Total running time: 14min 32s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.1266060190973353 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00003 RUNNING 256 32 0.0774765 4 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00001 TERMINATED 64 32 0.00017124 2 10 602.547 1.12661 0.6158 │
│ train_cifar_6bc62_00002 TERMINATED 2 4 0.00840049 2 1 62.4384 2.30887 0.1002 │
│ train_cifar_6bc62_00004 PENDING 8 256 0.0171199 4 │
│ train_cifar_6bc62_00005 PENDING 256 4 0.00109075 4 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=8677) [1, 10000] loss: 0.468
Trial train_cifar_6bc62_00003 completed after 1 iterations at 2026-03-25 19:11:47. Total running time: 14min 39s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_6bc62_00003 result │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name checkpoint_000000 │
│ time_this_iter_s 33.26451 │
│ time_total_s 33.26451 │
│ training_iteration 1 │
│ accuracy 0.097 │
│ loss 2.36125 │
╰────────────────────────────────────────────────────────────╯
(func pid=8677) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00003_3_batch_size=4,l1=256,l2=32,lr=0.0775_2026-03-25_18-57-07/checkpoint_000000)
Trial train_cifar_6bc62_00004 started with configuration:
╭──────────────────────────────────────────────────╮
│ Trial train_cifar_6bc62_00004 config │
├──────────────────────────────────────────────────┤
│ batch_size 4 │
│ device cuda │
│ l1 8 │
│ l2 256 │
│ lr 0.01712 │
╰──────────────────────────────────────────────────╯
(func pid=8883) [1, 2000] loss: 2.320
(func pid=8883) [1, 4000] loss: 1.156
(func pid=8883) [1, 6000] loss: 0.771
Trial status: 4 TERMINATED | 1 RUNNING | 5 PENDING
Current time: 2026-03-25 19:12:09. Total running time: 15min 2s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.1266060190973353 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00004 RUNNING 8 256 0.0171199 4 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00001 TERMINATED 64 32 0.00017124 2 10 602.547 1.12661 0.6158 │
│ train_cifar_6bc62_00002 TERMINATED 2 4 0.00840049 2 1 62.4384 2.30887 0.1002 │
│ train_cifar_6bc62_00003 TERMINATED 256 32 0.0774765 4 1 33.2645 2.36125 0.097 │
│ train_cifar_6bc62_00005 PENDING 256 4 0.00109075 4 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=8883) [1, 8000] loss: 0.578
(func pid=8883) [1, 10000] loss: 0.463
(func pid=8883) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00004_4_batch_size=4,l1=8,l2=256,lr=0.0171_2026-03-25_18-57-07/checkpoint_000000)
(func pid=8883) [2, 2000] loss: 2.313
(func pid=8883) [2, 4000] loss: 1.156
Trial status: 4 TERMINATED | 1 RUNNING | 5 PENDING
Current time: 2026-03-25 19:12:39. Total running time: 15min 32s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.1266060190973353 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00004 RUNNING 8 256 0.0171199 4 1 32.8909 2.30617 0.0994 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00001 TERMINATED 64 32 0.00017124 2 10 602.547 1.12661 0.6158 │
│ train_cifar_6bc62_00002 TERMINATED 2 4 0.00840049 2 1 62.4384 2.30887 0.1002 │
│ train_cifar_6bc62_00003 TERMINATED 256 32 0.0774765 4 1 33.2645 2.36125 0.097 │
│ train_cifar_6bc62_00005 PENDING 256 4 0.00109075 4 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=8883) [2, 6000] loss: 0.771
(func pid=8883) [2, 8000] loss: 0.578
(func pid=8883) [2, 10000] loss: 0.462
Trial train_cifar_6bc62_00004 completed after 2 iterations at 2026-03-25 19:12:55. Total running time: 15min 47s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_6bc62_00004 result │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name checkpoint_000001 │
│ time_this_iter_s 31.01967 │
│ time_total_s 63.91055 │
│ training_iteration 2 │
│ accuracy 0.0998 │
│ loss 2.30634 │
╰────────────────────────────────────────────────────────────╯
(func pid=8883) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00004_4_batch_size=4,l1=8,l2=256,lr=0.0171_2026-03-25_18-57-07/checkpoint_000001)
Trial train_cifar_6bc62_00005 started with configuration:
╭──────────────────────────────────────────────────╮
│ Trial train_cifar_6bc62_00005 config │
├──────────────────────────────────────────────────┤
│ batch_size 4 │
│ device cuda │
│ l1 256 │
│ l2 4 │
│ lr 0.00109 │
╰──────────────────────────────────────────────────╯
(func pid=9212) [1, 2000] loss: 2.117
Trial status: 5 TERMINATED | 1 RUNNING | 4 PENDING
Current time: 2026-03-25 19:13:09. Total running time: 16min 2s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.1266060190973353 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00005 RUNNING 256 4 0.00109075 4 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00001 TERMINATED 64 32 0.00017124 2 10 602.547 1.12661 0.6158 │
│ train_cifar_6bc62_00002 TERMINATED 2 4 0.00840049 2 1 62.4384 2.30887 0.1002 │
│ train_cifar_6bc62_00003 TERMINATED 256 32 0.0774765 4 1 33.2645 2.36125 0.097 │
│ train_cifar_6bc62_00004 TERMINATED 8 256 0.0171199 4 2 63.9106 2.30634 0.0998 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=9212) [1, 4000] loss: 0.905
(func pid=9212) [1, 6000] loss: 0.570
(func pid=9212) [1, 8000] loss: 0.400
(func pid=9212) [1, 10000] loss: 0.310
(func pid=9212) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00005_5_batch_size=4,l1=256,l2=4,lr=0.0011_2026-03-25_18-57-07/checkpoint_000000)
(func pid=9212) [2, 2000] loss: 1.489
Trial status: 5 TERMINATED | 1 RUNNING | 4 PENDING
Current time: 2026-03-25 19:13:39. Total running time: 16min 32s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.1266060190973353 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00005 RUNNING 256 4 0.00109075 4 1 33.2916 1.61464 0.4113 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00001 TERMINATED 64 32 0.00017124 2 10 602.547 1.12661 0.6158 │
│ train_cifar_6bc62_00002 TERMINATED 2 4 0.00840049 2 1 62.4384 2.30887 0.1002 │
│ train_cifar_6bc62_00003 TERMINATED 256 32 0.0774765 4 1 33.2645 2.36125 0.097 │
│ train_cifar_6bc62_00004 TERMINATED 8 256 0.0171199 4 2 63.9106 2.30634 0.0998 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=9212) [2, 4000] loss: 0.737
(func pid=9212) [2, 6000] loss: 0.488
(func pid=9212) [2, 8000] loss: 0.371
(func pid=9212) [2, 10000] loss: 0.286
(func pid=9212) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00005_5_batch_size=4,l1=256,l2=4,lr=0.0011_2026-03-25_18-57-07/checkpoint_000001)
(func pid=9212) [3, 2000] loss: 1.357
Trial status: 5 TERMINATED | 1 RUNNING | 4 PENDING
Current time: 2026-03-25 19:14:09. Total running time: 17min 2s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.1266060190973353 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00005 RUNNING 256 4 0.00109075 4 2 64.1244 1.5456 0.4637 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00001 TERMINATED 64 32 0.00017124 2 10 602.547 1.12661 0.6158 │
│ train_cifar_6bc62_00002 TERMINATED 2 4 0.00840049 2 1 62.4384 2.30887 0.1002 │
│ train_cifar_6bc62_00003 TERMINATED 256 32 0.0774765 4 1 33.2645 2.36125 0.097 │
│ train_cifar_6bc62_00004 TERMINATED 8 256 0.0171199 4 2 63.9106 2.30634 0.0998 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=9212) [3, 4000] loss: 0.694
(func pid=9212) [3, 6000] loss: 0.447
(func pid=9212) [3, 8000] loss: 0.343
(func pid=9212) [3, 10000] loss: 0.271
(func pid=9212) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00005_5_batch_size=4,l1=256,l2=4,lr=0.0011_2026-03-25_18-57-07/checkpoint_000002)
(func pid=9212) [4, 2000] loss: 1.276
Trial status: 5 TERMINATED | 1 RUNNING | 4 PENDING
Current time: 2026-03-25 19:14:39. Total running time: 17min 32s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.1266060190973353 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00005 RUNNING 256 4 0.00109075 4 3 95.1057 1.36196 0.5173 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00001 TERMINATED 64 32 0.00017124 2 10 602.547 1.12661 0.6158 │
│ train_cifar_6bc62_00002 TERMINATED 2 4 0.00840049 2 1 62.4384 2.30887 0.1002 │
│ train_cifar_6bc62_00003 TERMINATED 256 32 0.0774765 4 1 33.2645 2.36125 0.097 │
│ train_cifar_6bc62_00004 TERMINATED 8 256 0.0171199 4 2 63.9106 2.30634 0.0998 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=9212) [4, 4000] loss: 0.629
(func pid=9212) [4, 6000] loss: 0.433
(func pid=9212) [4, 8000] loss: 0.324
(func pid=9212) [4, 10000] loss: 0.256
(func pid=9212) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00005_5_batch_size=4,l1=256,l2=4,lr=0.0011_2026-03-25_18-57-07/checkpoint_000003)
Trial status: 5 TERMINATED | 1 RUNNING | 4 PENDING
Current time: 2026-03-25 19:15:09. Total running time: 18min 2s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.1266060190973353 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00005 RUNNING 256 4 0.00109075 4 4 126.271 1.33344 0.5363 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00001 TERMINATED 64 32 0.00017124 2 10 602.547 1.12661 0.6158 │
│ train_cifar_6bc62_00002 TERMINATED 2 4 0.00840049 2 1 62.4384 2.30887 0.1002 │
│ train_cifar_6bc62_00003 TERMINATED 256 32 0.0774765 4 1 33.2645 2.36125 0.097 │
│ train_cifar_6bc62_00004 TERMINATED 8 256 0.0171199 4 2 63.9106 2.30634 0.0998 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=9212) [5, 2000] loss: 1.203
(func pid=9212) [5, 4000] loss: 0.607
(func pid=9212) [5, 6000] loss: 0.406
(func pid=9212) [5, 8000] loss: 0.301
(func pid=9212) [5, 10000] loss: 0.250
(func pid=9212) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00005_5_batch_size=4,l1=256,l2=4,lr=0.0011_2026-03-25_18-57-07/checkpoint_000004)
Trial status: 5 TERMINATED | 1 RUNNING | 4 PENDING
Current time: 2026-03-25 19:15:39. Total running time: 18min 32s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.1266060190973353 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00005 RUNNING 256 4 0.00109075 4 5 157.282 1.5114 0.5007 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00001 TERMINATED 64 32 0.00017124 2 10 602.547 1.12661 0.6158 │
│ train_cifar_6bc62_00002 TERMINATED 2 4 0.00840049 2 1 62.4384 2.30887 0.1002 │
│ train_cifar_6bc62_00003 TERMINATED 256 32 0.0774765 4 1 33.2645 2.36125 0.097 │
│ train_cifar_6bc62_00004 TERMINATED 8 256 0.0171199 4 2 63.9106 2.30634 0.0998 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=9212) [6, 2000] loss: 1.136
(func pid=9212) [6, 4000] loss: 0.572
(func pid=9212) [6, 6000] loss: 0.380
(func pid=9212) [6, 8000] loss: 0.295
(func pid=9212) [6, 10000] loss: 0.237
(func pid=9212) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00005_5_batch_size=4,l1=256,l2=4,lr=0.0011_2026-03-25_18-57-07/checkpoint_000005)
Trial status: 5 TERMINATED | 1 RUNNING | 4 PENDING
Current time: 2026-03-25 19:16:10. Total running time: 19min 2s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.1266060190973353 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00005 RUNNING 256 4 0.00109075 4 6 188.326 1.48039 0.4975 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00001 TERMINATED 64 32 0.00017124 2 10 602.547 1.12661 0.6158 │
│ train_cifar_6bc62_00002 TERMINATED 2 4 0.00840049 2 1 62.4384 2.30887 0.1002 │
│ train_cifar_6bc62_00003 TERMINATED 256 32 0.0774765 4 1 33.2645 2.36125 0.097 │
│ train_cifar_6bc62_00004 TERMINATED 8 256 0.0171199 4 2 63.9106 2.30634 0.0998 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=9212) [7, 2000] loss: 1.049
(func pid=9212) [7, 4000] loss: 0.551
(func pid=9212) [7, 6000] loss: 0.378
(func pid=9212) [7, 8000] loss: 0.280
(func pid=9212) [7, 10000] loss: 0.228
(func pid=9212) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00005_5_batch_size=4,l1=256,l2=4,lr=0.0011_2026-03-25_18-57-07/checkpoint_000006)
Trial status: 5 TERMINATED | 1 RUNNING | 4 PENDING
Current time: 2026-03-25 19:16:40. Total running time: 19min 32s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.1266060190973353 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00005 RUNNING 256 4 0.00109075 4 7 219.308 1.37583 0.553 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00001 TERMINATED 64 32 0.00017124 2 10 602.547 1.12661 0.6158 │
│ train_cifar_6bc62_00002 TERMINATED 2 4 0.00840049 2 1 62.4384 2.30887 0.1002 │
│ train_cifar_6bc62_00003 TERMINATED 256 32 0.0774765 4 1 33.2645 2.36125 0.097 │
│ train_cifar_6bc62_00004 TERMINATED 8 256 0.0171199 4 2 63.9106 2.30634 0.0998 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=9212) [8, 2000] loss: 1.023
(func pid=9212) [8, 4000] loss: 0.529
(func pid=9212) [8, 6000] loss: 0.353
(func pid=9212) [8, 8000] loss: 0.267
(func pid=9212) [8, 10000] loss: 0.223
(func pid=9212) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00005_5_batch_size=4,l1=256,l2=4,lr=0.0011_2026-03-25_18-57-07/checkpoint_000007)
Trial status: 5 TERMINATED | 1 RUNNING | 4 PENDING
Current time: 2026-03-25 19:17:10. Total running time: 20min 2s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.1266060190973353 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00005 RUNNING 256 4 0.00109075 4 8 250.475 1.45449 0.5236 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00001 TERMINATED 64 32 0.00017124 2 10 602.547 1.12661 0.6158 │
│ train_cifar_6bc62_00002 TERMINATED 2 4 0.00840049 2 1 62.4384 2.30887 0.1002 │
│ train_cifar_6bc62_00003 TERMINATED 256 32 0.0774765 4 1 33.2645 2.36125 0.097 │
│ train_cifar_6bc62_00004 TERMINATED 8 256 0.0171199 4 2 63.9106 2.30634 0.0998 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=9212) [9, 2000] loss: 0.941
(func pid=9212) [9, 4000] loss: 0.504
(func pid=9212) [9, 6000] loss: 0.337
(func pid=9212) [9, 8000] loss: 0.269
(func pid=9212) [9, 10000] loss: 0.210
Trial status: 5 TERMINATED | 1 RUNNING | 4 PENDING
Current time: 2026-03-25 19:17:40. Total running time: 20min 32s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.1266060190973353 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00005 RUNNING 256 4 0.00109075 4 8 250.475 1.45449 0.5236 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00001 TERMINATED 64 32 0.00017124 2 10 602.547 1.12661 0.6158 │
│ train_cifar_6bc62_00002 TERMINATED 2 4 0.00840049 2 1 62.4384 2.30887 0.1002 │
│ train_cifar_6bc62_00003 TERMINATED 256 32 0.0774765 4 1 33.2645 2.36125 0.097 │
│ train_cifar_6bc62_00004 TERMINATED 8 256 0.0171199 4 2 63.9106 2.30634 0.0998 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=9212) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00005_5_batch_size=4,l1=256,l2=4,lr=0.0011_2026-03-25_18-57-07/checkpoint_000008)
(func pid=9212) [10, 2000] loss: 0.929
(func pid=9212) [10, 4000] loss: 0.489
(func pid=9212) [10, 6000] loss: 0.325
(func pid=9212) [10, 8000] loss: 0.256
(func pid=9212) [10, 10000] loss: 0.205
Trial status: 5 TERMINATED | 1 RUNNING | 4 PENDING
Current time: 2026-03-25 19:18:10. Total running time: 21min 2s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.1266060190973353 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00005 RUNNING 256 4 0.00109075 4 9 281.708 1.39641 0.554 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00001 TERMINATED 64 32 0.00017124 2 10 602.547 1.12661 0.6158 │
│ train_cifar_6bc62_00002 TERMINATED 2 4 0.00840049 2 1 62.4384 2.30887 0.1002 │
│ train_cifar_6bc62_00003 TERMINATED 256 32 0.0774765 4 1 33.2645 2.36125 0.097 │
│ train_cifar_6bc62_00004 TERMINATED 8 256 0.0171199 4 2 63.9106 2.30634 0.0998 │
│ train_cifar_6bc62_00006 PENDING 2 32 0.00150454 16 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
Trial train_cifar_6bc62_00005 completed after 10 iterations at 2026-03-25 19:18:12. Total running time: 21min 4s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_6bc62_00005 result │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name checkpoint_000009 │
│ time_this_iter_s 31.18515 │
│ time_total_s 312.89338 │
│ training_iteration 10 │
│ accuracy 0.5559 │
│ loss 1.37663 │
╰────────────────────────────────────────────────────────────╯
(func pid=9212) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00005_5_batch_size=4,l1=256,l2=4,lr=0.0011_2026-03-25_18-57-07/checkpoint_000009)
Trial train_cifar_6bc62_00006 started with configuration:
╭─────────────────────────────────────────────────╮
│ Trial train_cifar_6bc62_00006 config │
├─────────────────────────────────────────────────┤
│ batch_size 16 │
│ device cuda │
│ l1 2 │
│ l2 32 │
│ lr 0.0015 │
╰─────────────────────────────────────────────────╯
(func pid=10509) [1, 2000] loss: 2.024
(func pid=10509) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00006_6_batch_size=16,l1=2,l2=32,lr=0.0015_2026-03-25_18-57-07/checkpoint_000000)
(func pid=10509) [2, 2000] loss: 1.726
(func pid=10509) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00006_6_batch_size=16,l1=2,l2=32,lr=0.0015_2026-03-25_18-57-07/checkpoint_000001)
Trial status: 6 TERMINATED | 1 RUNNING | 3 PENDING
Current time: 2026-03-25 19:18:40. Total running time: 21min 32s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.1266060190973353 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00006 RUNNING 2 32 0.00150454 16 2 19.7064 1.78648 0.2937 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00001 TERMINATED 64 32 0.00017124 2 10 602.547 1.12661 0.6158 │
│ train_cifar_6bc62_00002 TERMINATED 2 4 0.00840049 2 1 62.4384 2.30887 0.1002 │
│ train_cifar_6bc62_00003 TERMINATED 256 32 0.0774765 4 1 33.2645 2.36125 0.097 │
│ train_cifar_6bc62_00004 TERMINATED 8 256 0.0171199 4 2 63.9106 2.30634 0.0998 │
│ train_cifar_6bc62_00005 TERMINATED 256 4 0.00109075 4 10 312.893 1.37663 0.5559 │
│ train_cifar_6bc62_00007 PENDING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=10509) [3, 2000] loss: 1.669
(func pid=10509) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00006_6_batch_size=16,l1=2,l2=32,lr=0.0015_2026-03-25_18-57-07/checkpoint_000002)
(func pid=10509) [4, 2000] loss: 1.624
Trial train_cifar_6bc62_00006 completed after 4 iterations at 2026-03-25 19:18:53. Total running time: 21min 45s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_6bc62_00006 result │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name checkpoint_000003 │
│ time_this_iter_s 8.60706 │
│ time_total_s 37.06046 │
│ training_iteration 4 │
│ accuracy 0.3722 │
│ loss 1.5953 │
╰────────────────────────────────────────────────────────────╯
(func pid=10509) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00006_6_batch_size=16,l1=2,l2=32,lr=0.0015_2026-03-25_18-57-07/checkpoint_000003)
Trial train_cifar_6bc62_00007 started with configuration:
╭──────────────────────────────────────────────────╮
│ Trial train_cifar_6bc62_00007 config │
├──────────────────────────────────────────────────┤
│ batch_size 8 │
│ device cuda │
│ l1 1 │
│ l2 256 │
│ lr 0.00191 │
╰──────────────────────────────────────────────────╯
(func pid=10914) [1, 2000] loss: 2.040
Trial status: 7 TERMINATED | 1 RUNNING | 2 PENDING
Current time: 2026-03-25 19:19:10. Total running time: 22min 2s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.1266060190973353 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00007 RUNNING 1 256 0.00190733 8 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00001 TERMINATED 64 32 0.00017124 2 10 602.547 1.12661 0.6158 │
│ train_cifar_6bc62_00002 TERMINATED 2 4 0.00840049 2 1 62.4384 2.30887 0.1002 │
│ train_cifar_6bc62_00003 TERMINATED 256 32 0.0774765 4 1 33.2645 2.36125 0.097 │
│ train_cifar_6bc62_00004 TERMINATED 8 256 0.0171199 4 2 63.9106 2.30634 0.0998 │
│ train_cifar_6bc62_00005 TERMINATED 256 4 0.00109075 4 10 312.893 1.37663 0.5559 │
│ train_cifar_6bc62_00006 TERMINATED 2 32 0.00150454 16 4 37.0605 1.5953 0.3722 │
│ train_cifar_6bc62_00008 PENDING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=10914) [1, 4000] loss: 0.986
(func pid=10914) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00007_7_batch_size=8,l1=1,l2=256,lr=0.0019_2026-03-25_18-57-07/checkpoint_000000)
(func pid=10914) [2, 2000] loss: 1.928
(func pid=10914) [2, 4000] loss: 0.963
Trial train_cifar_6bc62_00007 completed after 2 iterations at 2026-03-25 19:19:32. Total running time: 22min 24s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_6bc62_00007 result │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name checkpoint_000001 │
│ time_this_iter_s 16.46403 │
│ time_total_s 35.0792 │
│ training_iteration 2 │
│ accuracy 0.2161 │
│ loss 1.88703 │
╰────────────────────────────────────────────────────────────╯
(func pid=10914) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00007_7_batch_size=8,l1=1,l2=256,lr=0.0019_2026-03-25_18-57-07/checkpoint_000001)
Trial train_cifar_6bc62_00008 started with configuration:
╭──────────────────────────────────────────────────╮
│ Trial train_cifar_6bc62_00008 config │
├──────────────────────────────────────────────────┤
│ batch_size 4 │
│ device cuda │
│ l1 4 │
│ l2 4 │
│ lr 0.00376 │
╰──────────────────────────────────────────────────╯
Trial status: 8 TERMINATED | 1 RUNNING | 1 PENDING
Current time: 2026-03-25 19:19:40. Total running time: 22min 32s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.1266060190973353 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00008 RUNNING 4 4 0.00375999 4 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00001 TERMINATED 64 32 0.00017124 2 10 602.547 1.12661 0.6158 │
│ train_cifar_6bc62_00002 TERMINATED 2 4 0.00840049 2 1 62.4384 2.30887 0.1002 │
│ train_cifar_6bc62_00003 TERMINATED 256 32 0.0774765 4 1 33.2645 2.36125 0.097 │
│ train_cifar_6bc62_00004 TERMINATED 8 256 0.0171199 4 2 63.9106 2.30634 0.0998 │
│ train_cifar_6bc62_00005 TERMINATED 256 4 0.00109075 4 10 312.893 1.37663 0.5559 │
│ train_cifar_6bc62_00006 TERMINATED 2 32 0.00150454 16 4 37.0605 1.5953 0.3722 │
│ train_cifar_6bc62_00007 TERMINATED 1 256 0.00190733 8 2 35.0792 1.88703 0.2161 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=11179) [1, 2000] loss: 2.229
(func pid=11179) [1, 4000] loss: 1.003
(func pid=11179) [1, 6000] loss: 0.656
(func pid=11179) [1, 8000] loss: 0.485
(func pid=11179) [1, 10000] loss: 0.390
(func pid=11179) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00008_8_batch_size=4,l1=4,l2=4,lr=0.0038_2026-03-25_18-57-07/checkpoint_000000)
Trial status: 8 TERMINATED | 1 RUNNING | 1 PENDING
Current time: 2026-03-25 19:20:10. Total running time: 23min 3s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.1266060190973353 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00008 RUNNING 4 4 0.00375999 4 1 33.0962 1.8936 0.2388 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00001 TERMINATED 64 32 0.00017124 2 10 602.547 1.12661 0.6158 │
│ train_cifar_6bc62_00002 TERMINATED 2 4 0.00840049 2 1 62.4384 2.30887 0.1002 │
│ train_cifar_6bc62_00003 TERMINATED 256 32 0.0774765 4 1 33.2645 2.36125 0.097 │
│ train_cifar_6bc62_00004 TERMINATED 8 256 0.0171199 4 2 63.9106 2.30634 0.0998 │
│ train_cifar_6bc62_00005 TERMINATED 256 4 0.00109075 4 10 312.893 1.37663 0.5559 │
│ train_cifar_6bc62_00006 TERMINATED 2 32 0.00150454 16 4 37.0605 1.5953 0.3722 │
│ train_cifar_6bc62_00007 TERMINATED 1 256 0.00190733 8 2 35.0792 1.88703 0.2161 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=11179) [2, 2000] loss: 1.957
(func pid=11179) [2, 4000] loss: 0.989
(func pid=11179) [2, 6000] loss: 0.649
(func pid=11179) [2, 8000] loss: 0.533
(func pid=11179) [2, 10000] loss: 0.456
Trial train_cifar_6bc62_00008 completed after 2 iterations at 2026-03-25 19:20:40. Total running time: 23min 32s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_6bc62_00008 result │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name checkpoint_000001 │
│ time_this_iter_s 30.92234 │
│ time_total_s 64.01855 │
│ training_iteration 2 │
│ accuracy 0.1372 │
│ loss 2.26816 │
╰────────────────────────────────────────────────────────────╯
(func pid=11179) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00008_8_batch_size=4,l1=4,l2=4,lr=0.0038_2026-03-25_18-57-07/checkpoint_000001)
Trial status: 9 TERMINATED | 1 PENDING
Current time: 2026-03-25 19:20:40. Total running time: 23min 33s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.1266060190973353 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00001 TERMINATED 64 32 0.00017124 2 10 602.547 1.12661 0.6158 │
│ train_cifar_6bc62_00002 TERMINATED 2 4 0.00840049 2 1 62.4384 2.30887 0.1002 │
│ train_cifar_6bc62_00003 TERMINATED 256 32 0.0774765 4 1 33.2645 2.36125 0.097 │
│ train_cifar_6bc62_00004 TERMINATED 8 256 0.0171199 4 2 63.9106 2.30634 0.0998 │
│ train_cifar_6bc62_00005 TERMINATED 256 4 0.00109075 4 10 312.893 1.37663 0.5559 │
│ train_cifar_6bc62_00006 TERMINATED 2 32 0.00150454 16 4 37.0605 1.5953 0.3722 │
│ train_cifar_6bc62_00007 TERMINATED 1 256 0.00190733 8 2 35.0792 1.88703 0.2161 │
│ train_cifar_6bc62_00008 TERMINATED 4 4 0.00375999 4 2 64.0186 2.26816 0.1372 │
│ train_cifar_6bc62_00009 PENDING 128 1 0.000460627 4 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
Trial train_cifar_6bc62_00009 started with configuration:
╭──────────────────────────────────────────────────╮
│ Trial train_cifar_6bc62_00009 config │
├──────────────────────────────────────────────────┤
│ batch_size 4 │
│ device cuda │
│ l1 128 │
│ l2 1 │
│ lr 0.00046 │
╰──────────────────────────────────────────────────╯
(func pid=11507) [1, 2000] loss: 2.403
(func pid=11507) [1, 4000] loss: 1.159
(func pid=11507) [1, 6000] loss: 0.769
(func pid=11507) [1, 8000] loss: 0.576
Trial status: 9 TERMINATED | 1 RUNNING
Current time: 2026-03-25 19:21:10. Total running time: 24min 3s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.1266060190973353 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00009 RUNNING 128 1 0.000460627 4 │
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00001 TERMINATED 64 32 0.00017124 2 10 602.547 1.12661 0.6158 │
│ train_cifar_6bc62_00002 TERMINATED 2 4 0.00840049 2 1 62.4384 2.30887 0.1002 │
│ train_cifar_6bc62_00003 TERMINATED 256 32 0.0774765 4 1 33.2645 2.36125 0.097 │
│ train_cifar_6bc62_00004 TERMINATED 8 256 0.0171199 4 2 63.9106 2.30634 0.0998 │
│ train_cifar_6bc62_00005 TERMINATED 256 4 0.00109075 4 10 312.893 1.37663 0.5559 │
│ train_cifar_6bc62_00006 TERMINATED 2 32 0.00150454 16 4 37.0605 1.5953 0.3722 │
│ train_cifar_6bc62_00007 TERMINATED 1 256 0.00190733 8 2 35.0792 1.88703 0.2161 │
│ train_cifar_6bc62_00008 TERMINATED 4 4 0.00375999 4 2 64.0186 2.26816 0.1372 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=11507) [1, 10000] loss: 0.461
Trial train_cifar_6bc62_00009 completed after 1 iterations at 2026-03-25 19:21:18. Total running time: 24min 10s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_6bc62_00009 result │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name checkpoint_000000 │
│ time_this_iter_s 33.76283 │
│ time_total_s 33.76283 │
│ training_iteration 1 │
│ accuracy 0.0985 │
│ loss 2.30284 │
╰────────────────────────────────────────────────────────────╯
2026-03-25 19:21:18,092 INFO tune.py:1009 -- Wrote the latest version of all result files and experiment state to '/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07' in 0.0101s.
Trial status: 10 TERMINATED
Current time: 2026-03-25 19:21:18. Total running time: 24min 10s
Logical resource usage: 2.0/16 CPUs, 1.0/1 GPUs (0.0/1.0 accelerator_type:A10G)
Current best trial: 6bc62_00001 with loss=1.1266060190973353 and params={'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_6bc62_00000 TERMINATED 1 32 0.0735008 8 10 164.449 2.31295 0.1023 │
│ train_cifar_6bc62_00001 TERMINATED 64 32 0.00017124 2 10 602.547 1.12661 0.6158 │
│ train_cifar_6bc62_00002 TERMINATED 2 4 0.00840049 2 1 62.4384 2.30887 0.1002 │
│ train_cifar_6bc62_00003 TERMINATED 256 32 0.0774765 4 1 33.2645 2.36125 0.097 │
│ train_cifar_6bc62_00004 TERMINATED 8 256 0.0171199 4 2 63.9106 2.30634 0.0998 │
│ train_cifar_6bc62_00005 TERMINATED 256 4 0.00109075 4 10 312.893 1.37663 0.5559 │
│ train_cifar_6bc62_00006 TERMINATED 2 32 0.00150454 16 4 37.0605 1.5953 0.3722 │
│ train_cifar_6bc62_00007 TERMINATED 1 256 0.00190733 8 2 35.0792 1.88703 0.2161 │
│ train_cifar_6bc62_00008 TERMINATED 4 4 0.00375999 4 2 64.0186 2.26816 0.1372 │
│ train_cifar_6bc62_00009 TERMINATED 128 1 0.000460627 4 1 33.7628 2.30284 0.0985 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
Best trial config: {'l1': 64, 'l2': 32, 'lr': 0.00017124045435791454, 'batch_size': 2, 'device': 'cuda'}
Best trial final validation loss: 1.1266060190973353
Best trial final validation accuracy: 0.6158
(func pid=11507) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2026-03-25_18-57-07/train_cifar_6bc62_00009_9_batch_size=4,l1=128,l2=1,lr=0.0005_2026-03-25_18-57-07/checkpoint_000000)
Best trial test set accuracy: 0.617
Results#
Your Ray Tune trial summary output will look similar to the following. The text table summarizes each trial's validation performance and highlights the best hyperparameter configuration:
Number of trials: 10/10 (10 TERMINATED)
+-----+--------------+------+------+-------------+--------+---------+------------+
| ... | batch_size | l1 | l2 | lr | iter | loss | accuracy |
|-----+--------------+------+------+-------------+--------+---------+------------|
| ... | 2 | 1 | 256 | 0.000668163 | 1 | 2.31479 | 0.0977 |
| ... | 4 | 64 | 8 | 0.0331514 | 1 | 2.31605 | 0.0983 |
| ... | 4 | 2 | 1 | 0.000150295 | 1 | 2.30755 | 0.1023 |
| ... | 16 | 32 | 32 | 0.0128248 | 10 | 1.66912 | 0.4391 |
| ... | 4 | 8 | 128 | 0.00464561 | 2 | 1.7316 | 0.3463 |
| ... | 8 | 256 | 8 | 0.00031556 | 1 | 2.19409 | 0.1736 |
| ... | 4 | 16 | 256 | 0.00574329 | 2 | 1.85679 | 0.3368 |
| ... | 8 | 2 | 2 | 0.00325652 | 1 | 2.30272 | 0.0984 |
| ... | 2 | 2 | 2 | 0.000342987 | 2 | 1.76044 | 0.292 |
| ... | 4 | 64 | 32 | 0.003734 | 8 | 1.53101 | 0.4761 |
+-----+--------------+------+------+-------------+--------+---------+------------+
Best trial config: {'l1': 64, 'l2': 32, 'lr': 0.0037339984519545164, 'batch_size': 4}
Best trial final validation loss: 1.5310075663924216
Best trial final validation accuracy: 0.4761
Best trial test set accuracy: 0.4737
Most trials were stopped early to conserve resources. The best-performing trial reached a validation accuracy of roughly 47%, which the test set confirmed.
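Programmatically, Ray Tune's `ResultGrid.get_best_result(metric="loss", mode="min")` performs this selection for you. As a plain-Python sketch of the same criterion (the trial dictionaries below are transcribed by hand from a few rows of the example table above, so treat them as illustrative):

```python
# Illustrative subset of trials from the example table: config values
# plus the final reported metrics.
trials = [
    {"l1": 1, "l2": 256, "lr": 0.000668163, "batch_size": 2, "loss": 2.31479, "accuracy": 0.0977},
    {"l1": 32, "l2": 32, "lr": 0.0128248, "batch_size": 16, "loss": 1.66912, "accuracy": 0.4391},
    {"l1": 64, "l2": 32, "lr": 0.003734, "batch_size": 4, "loss": 1.53101, "accuracy": 0.4761},
]

# get_best_result(metric="loss", mode="min") applies the same rule:
# pick the trial whose last reported loss is lowest.
best = min(trials, key=lambda t: t["loss"])
print(best["l1"], best["batch_size"], best["accuracy"])  # → 64 4 0.4761
```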
Observability#
Monitoring is essential when running experiments at scale. Ray provides a dashboard that lets you view trial statuses, check cluster resource usage, and inspect logs in real time.
For debugging, Ray also offers distributed debugging tools that let you attach a debugger to running trials in the cluster.
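Independently of the dashboard, each trial also persists its results and checkpoints on disk under `~/ray_results/<experiment>/<trial>/checkpoint_NNNNNN` (the layout visible in the checkpoint paths logged above). A minimal sketch of inspecting that layout offline, using a simulated directory tree in place of a real experiment directory:

```python
import tempfile
from pathlib import Path

# Simulate the on-disk layout Ray Tune produced above:
# <experiment>/<trial_dir>/checkpoint_00000N/
root = Path(tempfile.mkdtemp()) / "train_cifar_2026-03-25_18-57-07"
for trial, n_ckpts in [("train_cifar_6bc62_00001", 10), ("train_cifar_6bc62_00009", 1)]:
    for i in range(n_ckpts):
        (root / trial / f"checkpoint_{i:06d}").mkdir(parents=True)

# Checkpoint directory names are zero-padded, so the lexicographically
# largest one is the latest checkpoint for each trial.
latest = {t.name: max(t.glob("checkpoint_*")).name for t in sorted(root.iterdir())}
print(latest)
```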
Conclusion#
In this tutorial, you learned how to tune the hyperparameters of a PyTorch model with Ray Tune: integrating Ray Tune into your PyTorch training loop, defining a search space for your hyperparameters, using an efficient scheduler such as ASHAScheduler to terminate underperforming trials early, saving checkpoints and reporting metrics to Ray Tune, and running a hyperparameter search and analyzing its results.
Ray Tune makes it easy to scale your experiments from a single machine to a large cluster, helping you find the best model configuration efficiently.
Further reading#
Total running time of the script: (24 minutes 25.326 seconds)