# API reference: 🤗 Transformers integration
You can log model training metadata by passing a Neptune callback to the Trainer's `callbacks` argument.
## NeptuneCallback()
Creates a Neptune callback that you pass to the callbacks argument of the Trainer constructor.
Parameters
| Name | Type | Default | Description |
|---|---|---|---|
| `api_token` | `str`, optional | `None` | Neptune API token obtained upon registration. You can leave this argument out if you have saved your token to the `NEPTUNE_API_TOKEN` environment variable (strongly recommended). |
| `project` | `str`, optional | `None` | Name of an existing Neptune project, in the form `"workspace-name/project-name"`. In Neptune, you can copy the name from the project settings → Details & privacy. If `None`, the value of the `NEPTUNE_PROJECT` environment variable is used. If you just want to try logging anonymously, you can use the public project `"common/huggingface-integration"`. |
| `name` | `str`, optional | `None` | Custom name for the run. |
| `base_namespace` | `str`, optional | `"finetuning"` | In the Neptune run, the root namespace (folder) that will contain all of the logged metadata. |
| `run` | `Run`, optional | `None` | Pass a Neptune run object if you want to continue logging to an existing run. |
| `log_parameters` | `bool`, optional | `True` | Log all Trainer arguments and model parameters provided by the Trainer. |
| `log_checkpoints` | `str`, optional | `None` | Whether to upload model checkpoints: `"same"` uploads a checkpoint whenever the Trainer saves one, `"last"` uploads only the most recently saved checkpoint, `"best"` uploads the best checkpoint, and `None` disables checkpoint uploading. |
| `**neptune_run_kwargs` | optional | - | Additional keyword arguments to be passed directly to the `init_run()` function when a new run is created. |
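The fallback behavior of `api_token` and `project` can be pictured with a small sketch. This is only an illustration of the documented defaults, not the callback's actual code, and the helper name is hypothetical:

```python
import os

def resolve_neptune_settings(api_token=None, project=None):
    """Hypothetical helper: mirrors the documented fallback of api_token and
    project to the NEPTUNE_API_TOKEN / NEPTUNE_PROJECT environment variables."""
    if api_token is None:
        api_token = os.environ.get("NEPTUNE_API_TOKEN")
    if project is None:
        project = os.environ.get("NEPTUNE_PROJECT")
    return api_token, project

# With NEPTUNE_PROJECT set, project can be omitted from the call
os.environ["NEPTUNE_PROJECT"] = "common/huggingface-integration"
print(resolve_neptune_settings())
```

Explicit arguments always win over the environment variables, which is why setting `NEPTUNE_API_TOKEN` once is the recommended way to avoid hard-coding credentials.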
Example
```python
from transformers import Trainer, TrainingArguments
from transformers.integrations import NeptuneCallback

# Create the Neptune callback
neptune_callback = NeptuneCallback(
    name="timid-rhino",
    description="DistilBERT fine-tuned on GLUE/MRPC",
    tags=["args-callback", "fine-tune", "MRPC"],  # tags help you manage runs
    base_namespace="custom_name",  # the default is "finetuning"
    log_checkpoints="best",  # other options are "last", "same", and None
    capture_hardware_metrics=False,  # additional kwargs for the Neptune run
)

# Create the training arguments
training_args = TrainingArguments(
    "quick-training-distilbert-mrpc",
    evaluation_strategy="steps",
    eval_steps=20,
    report_to="none",  # avoid creating duplicate callbacks
)

# Pass the Neptune callback to the Trainer
trainer = Trainer(
    model,  # the model to fine-tune, defined earlier
    training_args,
    callbacks=[neptune_callback],
)
trainer.train()
```
- To avoid creating several callbacks, set the `report_to` argument to `"none"`.
## get_run()
Returns the current Neptune run.
Parameters
| Name | Type | Default | Description |
|---|---|---|---|
| `trainer` | `transformers.Trainer` | - | The transformers `Trainer` instance. |
Returns
Run object that can be used for logging.
Examples
```python
trainer.train()

# Log additional metadata after training
run = NeptuneCallback.get_run(trainer)
run["predictions"] = ...
```
See also Logging additional metadata after training in the Neptune-Transformers integration guide.
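In Neptune, slash-separated field paths like `"finetuning/predictions"` behave like nested folders under the run. A plain-Python illustration of that path semantics (not Neptune's implementation; the helper name is made up):

```python
def assign_nested(store, path, value):
    """Illustrative only: mimic how a slash-separated Neptune field path
    creates a nested namespace (folder) structure."""
    *parents, leaf = path.split("/")
    node = store
    for part in parents:
        node = node.setdefault(part, {})
    node[leaf] = value
    return store

run = {}
assign_nested(run, "finetuning/predictions", [0, 1, 1])
print(run)  # {'finetuning': {'predictions': [0, 1, 1]}}
```

This is why metadata logged through the callback appears under the `base_namespace` folder ("finetuning" by default), while keys you assign yourself land wherever your path points.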
## .run
If you have a NeptuneCallback instance available, you can access the Neptune run with the .run property:
```python
trainer.train()

# Log additional metadata after training
neptune_callback.run["predictions"] = ...
```
See also
NeptuneCallback in the 🤗 Transformers API reference