# Quickstart

## Installation
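Install the package from PyPI. The package name `easylora` below is an assumption based on the import name used throughout this guide; check the project README for the published name:

```bash
pip install easylora
```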
For QLoRA (4-bit quantisation), also install bitsandbytes:
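```bash
pip install bitsandbytes
```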
For development:
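A standard editable install from a source checkout; the `[dev]` extra is an assumption about how the project declares its development dependencies:

```bash
pip install -e ".[dev]"
```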
## Python API

### Minimal Training
```python
from easylora import train, TrainConfig
from easylora.config import ModelConfig, DataConfig

config = TrainConfig(
    model=ModelConfig(base_model="meta-llama/Llama-3.2-1B"),
    data=DataConfig(
        dataset_name="tatsu-lab/alpaca",
        format="alpaca",
        max_seq_len=2048,
    ),
)

artifacts = train(config)
print(f"Adapter saved to: {artifacts.adapter_dir}")
```
### QLoRA (4-bit)
```python
config = TrainConfig(
    model=ModelConfig(
        base_model="meta-llama/Llama-3.2-1B",
        load_in_4bit=True,
    ),
    data=DataConfig(
        dataset_name="tatsu-lab/alpaca",
        format="alpaca",
    ),
)

artifacts = train(config)
```
### Autopilot (No Manual Config)
```python
from easylora import autopilot_plan, autopilot_train

# Preview the plan Autopilot would use before committing to a run
plan = autopilot_plan(
    model="meta-llama/Llama-3.2-1B",
    dataset="tatsu-lab/alpaca",
    quality="balanced",
)
for line in plan.to_pretty_lines():
    print(line)

# Train with the automatically selected settings
artifacts = autopilot_train(
    model="meta-llama/Llama-3.2-1B",
    dataset="tatsu-lab/alpaca",
)
```
### Using the Trainer Directly
```python
from easylora import EasyLoRATrainer, TrainConfig
from easylora.config import ModelConfig, DataConfig, OutputConfig

config = TrainConfig(
    model=ModelConfig(base_model="meta-llama/Llama-3.2-1B"),
    data=DataConfig(dataset_path="my_data.jsonl", format="raw"),
    output=OutputConfig(output_dir="./my_run"),
)

trainer = EasyLoRATrainer(config)
artifacts = trainer.fit()                 # train the LoRA adapter
results = trainer.evaluate()              # evaluate the trained model
trainer.merge_and_save("./merged_model")  # merge the adapter into the base model and save
```
## CLI
### Train with a Config File
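A minimal sketch; the `--config` flag is an assumption, so check the CLI reference for the exact flag name:

```bash
# --config is assumed here; see the CLI reference for the actual flag
easylora train --config my_config.yaml
```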
### Train with Autopilot
```bash
easylora train \
  --autopilot \
  --model meta-llama/Llama-3.2-1B \
  --dataset tatsu-lab/alpaca \
  --quality balanced
```
### Dry-run an Autopilot Plan
```bash
easylora autopilot plan \
  --model meta-llama/Llama-3.2-1B \
  --dataset tatsu-lab/alpaca \
  --print-config
```
### Generate a Starter Config
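A sketch assuming a hypothetical `init` subcommand that writes a starter config file; the actual command may differ:

```bash
# `init` and the --output flag are assumptions, not confirmed CLI syntax
easylora init --output my_config.yaml
```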
### Override Config Values
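A sketch assuming dotted-key overrides on the command line; the `--set` flag and the key path are hypothetical:

```bash
# --set and the key path are illustrative assumptions
easylora train --config my_config.yaml --set training.learning_rate=1e-4
```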
### Validate Without Training
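A sketch assuming a dry-run style flag that parses and validates the config without launching a run; the flag name is an assumption:

```bash
# --dry-run is an assumed flag name; see the CLI reference
easylora train --config my_config.yaml --dry-run
```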
### Check Your Environment
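A sketch assuming a `doctor`-style diagnostic subcommand (a common CLI convention) that reports GPU, CUDA, and dependency status; the subcommand name is hypothetical:

```bash
# `doctor` is an assumed subcommand name
easylora doctor
```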
## Next Steps
- See the configuration reference for all options
- See the CLI reference for all commands
- See adapters for save/load/merge workflows