# Model Support

## Auto Target Module Detection
When `target_modules` is set to `"auto"` (the default), easylora automatically
selects the appropriate LoRA target modules based on the model architecture.
### Supported Architectures

| Architecture | Model Type | Target Modules |
|---|---|---|
| LLaMA, LLaMA 2/3, Code Llama, Vicuna | `llama` | `q_proj`, `v_proj` |
| Mistral, Mixtral | `mistral`, `mixtral` | `q_proj`, `v_proj` |
| Qwen2 | `qwen2` | `q_proj`, `v_proj` |
| Qwen (v1) | `qwen` | `c_attn` |
| Gemma, Gemma 2 | `gemma`, `gemma2` | `q_proj`, `v_proj` |
| Phi-2 | `phi` | `q_proj`, `v_proj` |
| Phi-3 | `phi3` | `qkv_proj` |
| OPT | `opt` | `q_proj`, `v_proj` |
| GPT-NeoX, Pythia | `gpt_neox` | `query_key_value` |
| Falcon | `falcon` | `query_key_value` |
| Bloom | `bloom` | `query_key_value` |
| MPT | `mpt` | `Wqkv` |
| GPT-2 | `gpt2` | `c_attn` |
| StarCoder, GPT-BigCode | `gpt_bigcode` | `c_attn` |
### How Detection Works

- **Registry lookup:** checks the model's architecture class name against a known mapping in `easylora/lora/targets_registry.yaml`.
- **Model type lookup:** checks `model.config.model_type` against a secondary mapping for broader compatibility.
- **Linear scan fallback:** if neither lookup matches, scans the model for `nn.Linear` layers and selects attention-like module names (preferring patterns like `q_proj`, `k_proj`, `v_proj`, `query`, `key`, `value`).
- **Last resort:** falls back to `["q_proj", "v_proj"]` with a warning.
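The cascade above can be sketched as follows. This is an illustrative standalone function, not easylora's actual implementation; the registry dictionaries and the `detect_targets` name are hypothetical stand-ins for the YAML-backed lookup.

```python
# Hypothetical stand-ins for the registries loaded from targets_registry.yaml.
ARCH_REGISTRY = {"LlamaForCausalLM": ["q_proj", "v_proj"]}   # keyed by class name
MODEL_TYPE_REGISTRY = {"gpt_neox": ["query_key_value"]}      # keyed by model_type

ATTENTION_PATTERNS = ("q_proj", "k_proj", "v_proj", "query", "key", "value")

def detect_targets(arch_name, model_type, linear_module_names):
    """Return LoRA target modules using the four-step lookup order."""
    # 1. Registry lookup by architecture class name.
    if arch_name in ARCH_REGISTRY:
        return ARCH_REGISTRY[arch_name]
    # 2. Secondary lookup by model_type for broader compatibility.
    if model_type in MODEL_TYPE_REGISTRY:
        return MODEL_TYPE_REGISTRY[model_type]
    # 3. Linear scan: keep attention-like nn.Linear module names.
    hits = sorted({name for name in linear_module_names
                   if any(p in name for p in ATTENTION_PATTERNS)})
    if hits:
        return hits
    # 4. Last resort (the real implementation also emits a warning here).
    return ["q_proj", "v_proj"]
```

Note that each step only runs if every earlier step failed to match, so a registry entry always takes precedence over the heuristic scan.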
### Inspecting Targets

Use the CLI to see what targets would be selected for a model:
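For example, something like the following; the `inspect-targets` subcommand name is an assumption, so check `easylora --help` for the actual invocation:

```shell
# Hypothetical subcommand; consult easylora --help for the real one.
easylora inspect-targets meta-llama/Llama-2-7b-hf
```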
### Overriding Targets

To use specific modules instead of auto-detection:
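A minimal sketch in YAML; the exact key names and config format depend on easylora's configuration schema, so treat these keys as assumptions:

```yaml
# Hypothetical config keys: replace "auto" with an explicit module list.
lora:
  target_modules: ["q_proj", "k_proj", "v_proj", "o_proj"]
```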
### Adding New Architectures

To add support for a new model architecture:

- Add entries to `easylora/lora/targets_registry.yaml`.
- Add tests in `tests/test_targets.py`.
- Submit a pull request.
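Based on the two lookup steps described above, a registry entry would plausibly look like this; the top-level key names here are assumptions about the file's schema:

```yaml
# Hypothetical shape of easylora/lora/targets_registry.yaml entries.
architectures:
  MyModelForCausalLM: [q_proj, v_proj]   # matched against the class name
model_types:
  mymodel: [q_proj, v_proj]              # matched against config.model_type
```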
## QLoRA Support
QLoRA requires the bitsandbytes library:
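```shell
pip install bitsandbytes
```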
Enable via config:
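A sketch of typical QLoRA settings (4-bit NF4 quantization with double quantization, as used by `transformers`' `BitsAndBytesConfig`); the easylora key names shown are assumptions:

```yaml
# Hypothetical easylora config keys mirroring BitsAndBytesConfig options.
quantization:
  load_in_4bit: true
  bnb_4bit_quant_type: nf4
  bnb_4bit_use_double_quant: true
  bnb_4bit_compute_dtype: bfloat16
```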
> **Note:** bitsandbytes currently requires a CUDA GPU. It does not work on CPU-only machines or Apple Silicon.