Last updated: April 6, 2026
Version: 1.0.0
Category: Exporters
The JSONL LLM Exporter writes ThemisDB BaseEntity data as weighted training samples in JSONL format for fine-tuning Large Language Models with LoRA (Low-Rank Adaptation) and QLoRA (Quantized LoRA).
✅ Multiple LLM Formats
- Instruction Tuning (`{"instruction": ..., "input": ..., "output": ...}`)
- Chat Completion (`{"messages": [{"role": ..., "content": ...}]}`)
- Text Completion (`{"text": ...}`)
- Named Format Templates: Alpaca, ShareGPT, ChatML, OpenAI Fine-Tuning
✅ Weighted Training Samples
- Explicit weight field (e.g., `importance: 0.8`)
- Auto-weighting by text length
- Auto-weighting by data freshness
- Custom weighting strategies
✅ Quality Filtering
- Min/max text length constraints
- Empty output detection
- Duplicate detection
- Configurable quality thresholds
✅ Metadata Enrichment
- Source tracking
- Category/tag preservation
- Custom metadata fields
```cpp
// Load via PluginManager
auto& pm = PluginManager::instance();
pm.scanPluginDirectory("./plugins");
auto* plugin = pm.loadPlugin("jsonl_llm_exporter");
auto* exporter = static_cast<IExporter*>(plugin->getInstance());
```

Or instantiate the exporter directly:

```cpp
#include "exporters/jsonl_llm_exporter.h"

JSONLLLMConfig config;
config.style = JSONLFormat::Style::INSTRUCTION_TUNING;
config.weighting.enable_weights = true;
config.weighting.auto_weight_by_length = true;

JSONLLLMExporter exporter(config);
```

Best for question answering and task completion:
```cpp
JSONLLLMConfig config;
config.style = JSONLFormat::Style::INSTRUCTION_TUNING;
config.field_mapping.instruction_field = "question";
config.field_mapping.input_field = "context";
config.field_mapping.output_field = "answer";
```

BaseEntity Example:
```json
{
  "pk": "qa_001",
  "question": "What is the capital of France?",
  "context": "France is a country in Western Europe",
  "answer": "Paris is the capital of France.",
  "importance": 0.9
}
```

JSONL Output:

```json
{"instruction": "What is the capital of France?", "input": "France is a country in Western Europe", "output": "Paris is the capital of France.", "weight": 0.9}
```

Best for conversational AI:
```cpp
JSONLLLMConfig config;
config.style = JSONLFormat::Style::CHAT_COMPLETION;
config.field_mapping.system_field = "system_prompt";
config.field_mapping.user_field = "user_message";
config.field_mapping.assistant_field = "assistant_response";
```

BaseEntity Example:
```json
{
  "pk": "chat_001",
  "system_prompt": "You are a helpful assistant.",
  "user_message": "Explain quantum computing",
  "assistant_response": "Quantum computing uses quantum bits...",
  "importance": 1.2
}
```

JSONL Output:

```json
{"messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Explain quantum computing"}, {"role": "assistant", "content": "Quantum computing uses quantum bits..."}], "weight": 1.2}
```

Best for text generation and next-word prediction:
```cpp
JSONLLLMConfig config;
config.style = JSONLFormat::Style::TEXT_COMPLETION;
config.field_mapping.text_field = "content";
```

In addition to the generic style-based formats, the exporter supports named format templates that emit documents in the exact schema required by popular fine-tuning toolchains. When a `format_template_type` is set, it overrides the `style` field.
Produces `{"instruction": ..., "input": ..., "output": ...}`. The `input` key is always emitted (an empty string when the input field is absent) to match the original Alpaca specification.
#include "exporters/jsonl_llm_exporter.h"
#include "exporters/format_template.h"
JSONLLLMConfig config;
config.format_template_type = FormatTemplateType::ALPACA;
// Override field names if your collection uses different names:
config.template_field_mapping.instruction_field = "question"; // default
config.template_field_mapping.input_field = "context"; // default
config.template_field_mapping.output_field = "answer"; // default
JSONLLLMExporter exporter(config);BaseEntity Example:
```json
{
  "pk": "qa_001",
  "question": "Translate 'Hello' to Spanish.",
  "context": "",
  "answer": "Hola"
}
```

JSONL Output:

```json
{"instruction": "Translate 'Hello' to Spanish.", "input": "", "output": "Hola"}
```

Produces `{"conversations": [{"from": "human", "value": ...}, {"from": "gpt", "value": ...}]}`. An optional system turn is prepended when the `system_field` is present and non-empty.
```cpp
JSONLLLMConfig config;
config.format_template_type = FormatTemplateType::SHAREGPT;
config.template_field_mapping.system_field = "system_prompt";          // default
config.template_field_mapping.user_field = "user_message";             // default
config.template_field_mapping.assistant_field = "assistant_response";  // default

JSONLLLMExporter exporter(config);
```

BaseEntity Example:
```json
{
  "pk": "conv_001",
  "system_prompt": "You are a helpful assistant.",
  "user_message": "What is the capital of France?",
  "assistant_response": "Paris."
}
```

JSONL Output:

```json
{"conversations": [{"from": "system", "value": "You are a helpful assistant."}, {"from": "human", "value": "What is the capital of France?"}, {"from": "gpt", "value": "Paris."}]}
```

Produces `{"messages": [{"role": "system", "content": ...}, {"role": "user", "content": ...}, {"role": "assistant", "content": ...}]}`. The system message is omitted when the `system_field` is absent or empty.
```cpp
JSONLLLMConfig config;
config.format_template_type = FormatTemplateType::CHATML;
config.template_field_mapping.system_field = "system_prompt";          // default
config.template_field_mapping.user_field = "user_message";             // default
config.template_field_mapping.assistant_field = "assistant_response";  // default

JSONLLLMExporter exporter(config);
```

JSONL Output (with system):
{"messages": [{"role": "system", "content": "Always respond in French."}, {"role": "user", "content": "How are you?"}, {"role": "assistant", "content": "Je vais bien, merci."}]}Structurally identical to ChatML and directly compatible with the OpenAI fine-tuning API.
```cpp
JSONLLLMConfig config;
config.format_template_type = FormatTemplateType::OPENAI_FINETUNING;
JSONLLLMExporter exporter(config);
```

Before running a full export, you can perform a collection-level dry-run check to verify that all required fields are present in every entity. The `validateTemplate()` function iterates a representative sample, collects every missing field name across all entities, deduplicates the list, and returns a sorted result, which makes it reliable for automated comparisons in CI.
#include "exporters/format_template.h"
// --- Option A: free function (no exporter instance needed) ---
std::vector<themis::BaseEntity> sample = /* load representative entities */;
auto result = themis::exporters::validateTemplate(
FormatTemplateType::ALPACA,
FormatTemplateFieldMapping{}, // use defaults, or fill in overrides
sample
);
if (!result.valid) {
std::cerr << result.entities_failed << " / " << result.entities_checked
<< " entities have missing fields:\n";
for (const auto& f : result.missing_fields) {
std::cerr << " missing: " << f << '\n';
}
// return EXIT_FAILURE or throw, as appropriate for your pipeline
}
// --- Option B: via JSONLLLMExporter (uses active config mapping) ---
themis::exporters::JSONLLLMConfig cfg;
cfg.format_template_type = FormatTemplateType::CHATML;
cfg.template_field_mapping.user_field = "prompt";
cfg.template_field_mapping.assistant_field = "response";
themis::exporters::JSONLLLMExporter exporter(cfg);
auto r2 = exporter.validateTemplate(sample);
if (!r2.valid) { /* ... */ }TemplateValidationResult fields:
| Field | Type | Description |
|---|---|---|
| `valid` | `bool` | `true` when every entity satisfies all required fields |
| `missing_fields` | `vector<string>` | Sorted, deduplicated list of absent field names |
| `entities_checked` | `size_t` | Total entities examined |
| `entities_failed` | `size_t` | Entities with at least one missing field |
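For programmatic checks, the result maps onto a plain struct. The definition below is inferred from the field table above; it is a sketch of the shape, not the header's verbatim declaration:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Inferred mirror of TemplateValidationResult (see table above).
struct TemplateValidationResult {
    bool valid = true;                        // false if any entity is missing a field
    std::vector<std::string> missing_fields;  // sorted and deduplicated
    std::size_t entities_checked = 0;         // total entities examined
    std::size_t entities_failed = 0;          // entities with >= 1 missing field
};
```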
```bash
# Pipe a JSONL collection sample via stdin and validate against the Alpaca template:
cat sample.jsonl | themisdb-export \
  --collection @my_collection \
  --validate-template alpaca

# With custom field-name overrides:
cat sample.jsonl | themisdb-export \
  --collection @my_collection \
  --validate-template alpaca \
  --template-instruction prompt \
  --template-output completion

# Exit codes:
#   0  All required fields present
#   1  One or more required fields missing (field names printed to stderr)
#   3  Unknown template name or missing --collection
```

The `--validate-template` flag skips the actual export entirely; no output file is written.
```cpp
JSONLLLMExporter exporter(cfg);

// Switch to ChatML without recreating the exporter object:
JSONLLLMConfig cfg2;
cfg2.format_template_type = FormatTemplateType::CHATML;
exporter.setConfig(cfg2);
```

```cpp
config.weighting.enable_weights = true;
config.weighting.weight_field = "importance";  // Field in BaseEntity
config.weighting.default_weight = 1.0;         // If field missing
```

Use Case: Domain experts manually assign importance scores.
```cpp
config.weighting.auto_weight_by_length = true;
```

Formula: `weight *= (1.0 + min(0.5, length / 2000.0))`

Use Case: Longer, more detailed responses get higher weights (up to 1.5x).
```cpp
config.weighting.auto_weight_by_freshness = true;
config.weighting.timestamp_field = "created_at";
```

Use Case: Newer data is more valuable (recent trends, updated information).
```cpp
config.weighting.enable_weights = true;
config.weighting.auto_weight_by_length = true;
config.weighting.auto_weight_by_freshness = true;
```

Weights are multiplied: `final_weight = explicit_weight × length_factor × freshness_factor`
```cpp
config.quality.min_text_length = 50;       // Skip very short responses
config.quality.max_text_length = 8192;     // Skip excessively long responses
config.quality.skip_empty_outputs = true;  // Skip if output field is empty
config.quality.skip_duplicates = true;     // Hash-based duplicate removal (sketched below)
```
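The duplicate check is hash-based; one way such a filter could work is sketched below. This is an illustration, not the exporter's actual implementation:

```cpp
#include <cstddef>
#include <functional>
#include <string>
#include <unordered_set>

// Keep a set of hashes of output texts already emitted; reject repeats.
// Illustration only: the real exporter may hash different fields or use a
// collision-resistant digest instead of std::hash.
class DuplicateFilter {
public:
    // Returns true if the text has not been seen before.
    bool accept(const std::string& text) {
        return seen_.insert(std::hash<std::string>{}(text)).second;
    }

private:
    std::unordered_set<std::size_t> seen_;
};
```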
```cpp
config.include_metadata = true;
config.metadata_fields = {"source", "category", "tags", "author"};
```

Output with metadata:
{"instruction": "...", "output": "...", "weight": 1.0, "metadata": {"source": "wikipedia", "category": "science", "tags": ["physics", "quantum"]}}// Load entities from ThemisDB
```cpp
// Load entities from ThemisDB
std::vector<BaseEntity> faqs = db.query("category=faq");

// Configure exporter
JSONLLLMConfig config;
config.style = JSONLFormat::Style::INSTRUCTION_TUNING;
config.field_mapping.instruction_field = "question";
config.field_mapping.output_field = "answer";
config.weighting.enable_weights = true;
config.weighting.weight_field = "upvotes";  // Use upvotes as weights

JSONLLLMExporter exporter(config);

// Export
ExportOptions options;
options.output_path = "training_data/faq_lora.jsonl";
options.progress_callback = [](const ExportStats& stats) {
    std::cout << "Exported: " << stats.exported_entities << " entities\n";
};

auto stats = exporter.exportEntities(faqs, options);
std::cout << stats.toJson() << std::endl;
```
```cpp
// Load chat conversations
std::vector<BaseEntity> chats = db.query("type=conversation AND rating>4");

// Configure for chat format
JSONLLLMConfig config;
config.style = JSONLFormat::Style::CHAT_COMPLETION;
config.field_mapping.user_field = "user_query";
config.field_mapping.assistant_field = "bot_response";
config.weighting.auto_weight_by_length = true;  // Detailed responses weighted higher
config.quality.min_text_length = 100;           // Skip short exchanges

JSONLLLMExporter exporter(config);

// Export for QLoRA training
ExportOptions options;
options.output_path = "training_data/chat_qlora.jsonl";
auto stats = exporter.exportEntities(chats, options);
```
```cpp
// Load recent knowledge articles
std::vector<BaseEntity> articles = db.query("type=article");

// Prioritize recent content
JSONLLLMConfig config;
config.style = JSONLFormat::Style::TEXT_COMPLETION;
config.field_mapping.text_field = "full_text";
config.weighting.auto_weight_by_freshness = true;
config.weighting.timestamp_field = "published_date";
config.include_metadata = true;
config.metadata_fields = {"author", "topic", "published_date"};

JSONLLLMExporter exporter(config);

ExportOptions options;
options.output_path = "training_data/kb_weighted.jsonl";
auto stats = exporter.exportEntities(articles, options);
```
```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments

# Load exported JSONL
dataset = load_dataset("json", data_files="faq_lora.jsonl")

# Configure LoRA
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

# Load base model and attach LoRA adapters
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b")
model = get_peft_model(model, lora_config)

# Use the per-sample weights from the JSONL by scaling the loss in a Trainer subclass
class WeightedTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
        weights = inputs.pop("weight")
        outputs = model(**inputs)
        loss = (outputs.loss * weights).mean()  # Weight by importance
        return (loss, outputs) if return_outputs else loss

# Train (tokenization of the dataset is elided here)
training_args = TrainingArguments(output_dir="faq_lora_out")
trainer = WeightedTrainer(model=model, args=training_args, train_dataset=dataset["train"])
trainer.train()
```
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit quantization for QLoRA
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load quantized model
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b",
    quantization_config=bnb_config,
    device_map="auto",
)

# Apply LoRA on the quantized model
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, lora_config)

# Train with weighted samples from the JSONL
# (Same as above)
```

Example export statistics:

```json
{
"total_entities": 10000,
"exported_entities": 9500,
"failed_entities": 500,
"bytes_written": 15728640,
"duration_ms": 2300,
"errors": [
"Entity qa_123: Missing required field 'output'",
"Entity qa_456: Text too short (5 chars)"
]
}
```
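The stats counters are also available as members on the returned object (the progress callback above already reads `stats.exported_entities`). A brief sketch of acting on failures, assuming the member names match the JSON keys shown above:

```cpp
#include <iostream>

// After an export, surface any skipped entities; assumes ExportStats exposes
// the fields shown in the JSON above as members.
auto stats = exporter.exportEntities(faqs, options);
if (stats.failed_entities > 0) {
    std::cerr << stats.failed_entities << " of " << stats.total_entities
              << " entities failed to export:\n";
    for (const auto& err : stats.errors) {
        std::cerr << "  " << err << '\n';
    }
}
```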
Current limitations:

- No streaming: the entire entity set is loaded in memory (see the workaround sketch after this list)
- Single file output: no sharding for very large datasets
- Fixed field mappings: custom transformations require code changes
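Until streaming and sharding land, very large datasets can be exported in manual chunks. This sketch assumes only the `exportEntities()` call shown earlier plus a hypothetical paginated query helper; `db.queryPage()` is a stand-in for whatever paginated fetch your setup offers:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Export in fixed-size pages, one shard file per page, so memory stays
// bounded by the chunk size. db.queryPage() is hypothetical.
const std::size_t kChunkSize = 50000;
std::size_t offset = 0;
std::size_t shard = 0;

for (;;) {
    std::vector<BaseEntity> chunk = db.queryPage("type=article", offset, kChunkSize);
    if (chunk.empty()) break;

    ExportOptions options;
    options.output_path = "training_data/kb_shard_" + std::to_string(shard++) + ".jsonl";
    exporter.exportEntities(chunk, options);
    offset += chunk.size();
}
```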
Planned features:

- Streaming export for large datasets
- Automatic dataset sharding
- Data augmentation (paraphrasing, back-translation)
- Multi-turn conversation support
- Token counting for optimal batch sizes
- Integration with HuggingFace Hub