Shakti-250M Small Language Model

A domain-specific language model optimized for finance, healthcare, and legal applications

Introduction

Shakti-250M is a Small Language Model (SLM) designed to deliver efficient and targeted performance across domain-specific applications in finance, healthcare, and legal services. With 250 million parameters, it offers a balanced trade-off between computational efficiency and task-oriented language capabilities, making it suitable for deployment in resource-constrained environments such as mobile devices, IoT systems, and edge computing platforms. Built on the Shakti-2.5B framework and optimized for smaller devices, Shakti-250M is well suited to enterprises that need accurate domain-specific language capabilities without heavy compute requirements.

Model Capabilities

Domain Expertise

Fine-tuned on specialized datasets for accurate financial forecasting, medical question answering, and legal summarization.

Edge Efficiency

Designed for smartphones, tablets, and IoT devices with real-time inference capabilities.

Conversational Fluency

Handles multi-turn dialogues with context-aware follow-ups.

Architecture

Shakti-250M comprises 250 million parameters across 16 layers, offering a solid balance between performance and efficiency. It uses a model dimension of 1024 and a feed-forward network (FFN) dimension of 4096 to handle complex language tasks.
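
As a rough sanity check, the sketch below estimates how these dimensions account for roughly 250 million parameters. It assumes a standard two-matrix FFN, no biases, and a vocabulary of about 50K tokens; none of these details are specified in this card, so the numbers are illustrative only.

```python
# Back-of-the-envelope parameter count for the stated Shakti-250M dimensions.
# Assumptions (not specified in this card): standard two-matrix FFN, no biases,
# norm parameters ignored, and a hypothetical ~50K-token vocabulary.

d_model = 1024       # model (hidden) dimension
d_ffn = 4096         # feed-forward dimension
n_layers = 16        # transformer blocks
vocab_size = 50_000  # assumed vocabulary size

attn_params = 4 * d_model * d_model      # Q, K, V, and output projections
ffn_params = 2 * d_model * d_ffn         # up- and down-projection matrices
per_layer = attn_params + ffn_params     # ~12.6M parameters per block
embedding_params = vocab_size * d_model  # ~51M token-embedding parameters

total = n_layers * per_layer + embedding_params
print(f"approximate parameters: {total / 1e6:.0f}M")  # ~253M, close to the stated 250M
```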

Key Architectural Features

Block Sparse Attention: Reduces memory and computation load during long-context processing while preserving accuracy.
Rotary Positional Embeddings (RoPE): Provides effective token-position awareness without fixed sinusoidal patterns (see the sketch after this list).
Sliding Window Attention Cache: Enables real-time streaming capabilities.
Pre-Normalization and SiLU Activation: Ensures numerical stability and gradient flow during deep model training.
LayerNorm and Dropout: Used throughout the stack for improved generalization.
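
To make the RoPE bullet concrete, here is a minimal, self-contained sketch of rotary positional embeddings applied to a query or key tensor. The function name, tensor layout, and base frequency are illustrative assumptions rather than the released implementation, which would typically apply the rotation per attention head.

```python
import torch

def apply_rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Apply rotary positional embeddings to a (batch, seq_len, dim) tensor.

    Illustrative only: each channel in the first half is paired with its
    counterpart in the second half, and every pair is rotated by a
    position-dependent angle, encoding relative positions without fixed
    sinusoidal embeddings.
    """
    _, seq_len, dim = x.shape
    half = dim // 2
    # One rotation frequency per channel pair.
    freqs = 1.0 / (base ** (torch.arange(half, dtype=torch.float32) / half))
    angles = torch.arange(seq_len, dtype=torch.float32)[:, None] * freqs[None, :]
    cos, sin = angles.cos(), angles.sin()  # (seq_len, half)
    x1, x2 = x[..., :half], x[..., half:]  # split channels into two halves
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)
```

In a full attention block, this rotation is applied to the query and key projections before attention scores are computed, so relative position information is carried by the dot products themselves.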

Dataset Details

Shakti-250M is trained on a structured combination of general-purpose and domain-specific datasets to ensure both broad language coverage and specialized knowledge in finance, healthcare, and legal domains. Using domain-specific datasets is crucial because general language models often struggle with terminology, structure, and contextual nuances specific to professional fields.

Pre-Training

Builds foundational language understanding and introduces domain-relevant patterns.

General and Financial:

  • Custom Dataset
  • AIR-Bench/qa_finance_en

Legal:

  • Vidhaan/LegalCitationWorthiness

Supervised Fine-Tuning (SFT)

Improves performance on instruction-following and structured domain tasks.

Healthcare:

  • lavita/medical-qa-datasets
  • ruslanmv/ai-medical-chatbot
  • axiong/pmc_llama_instructions

Finance:

  • winddude/reddit_finance_43_250k
  • MarinaC/question-answer-Subject-Finance-Instruct

Legal:

  • umarbutler/open-australian-legal-qa
  • mb7419/legal-advice-reddit

Direct Preference Optimization (DPO)

Aligns model outputs with preferred user responses in domain-specific tasks.

Finance and Legal:

  • NickyNicky/nano_finance_200k_en_es_chatML_gemma_orpo_dpo
  • Dhananjayg22/legal-dpo

Training Details

Shakti-250M was trained in multiple stages, progressing from general and domain-specific understanding to specialized instruction following and human preference alignment.

Phase 1: Pre-Training

Conducted on large-scale general corpora and domain-specific texts (finance, legal) using a next-token prediction objective. A standard Transformer architecture with rotary positional embeddings (RoPE) and mixed-precision training (FP16 and bfloat16) was used to capture both general language patterns and specialized terminology.
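
A minimal sketch of one such pre-training step is shown below. It assumes a causal language model that returns logits and a batch dictionary with an input_ids tensor; it illustrates the next-token objective under bfloat16 autocast rather than the exact training stack used for Shakti-250M.

```python
import torch
import torch.nn.functional as F

def pretraining_step(model, optimizer, batch, device="cuda"):
    """One next-token-prediction step under mixed precision (illustrative)."""
    tokens = batch["input_ids"].to(device)           # (batch, seq_len)
    inputs, targets = tokens[:, :-1], tokens[:, 1:]  # shift targets by one position

    # Autocast runs the forward pass in bfloat16 while keeping master weights in FP32.
    with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
        logits = model(inputs)                       # (batch, seq_len - 1, vocab)
        loss = F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),
            targets.reshape(-1),
        )

    loss.backward()
    optimizer.step()
    optimizer.zero_grad(set_to_none=True)
    return loss.item()
```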

Phase 2: Supervised Fine-Tuning (SFT)

Focused on domain-specific instruction datasets in healthcare, finance, and legal sectors. Tasks included medical Q&A, legal summarization, patient-doctor dialogues, and finance-related discussions to improve performance on structured domain tasks.
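
The usual recipe for this stage is to format each example with an instruction template and compute the loss only over the response tokens. The sketch below assumes a Hugging Face-style tokenizer and a generic "### Instruction / ### Response" template; the actual prompt format used for Shakti-250M is not documented here.

```python
import torch
import torch.nn.functional as F

IGNORE_INDEX = -100  # positions excluded from the loss

def build_sft_example(tokenizer, instruction: str, response: str):
    """Tokenize one instruction/response pair and mask the prompt tokens (illustrative template)."""
    prompt = f"### Instruction:\n{instruction}\n\n### Response:\n"
    prompt_ids = tokenizer.encode(prompt, add_special_tokens=False)
    response_ids = tokenizer.encode(response, add_special_tokens=False)

    input_ids = torch.tensor(prompt_ids + response_ids)
    labels = torch.tensor([IGNORE_INDEX] * len(prompt_ids) + response_ids)
    return input_ids, labels

def sft_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Cross-entropy over response tokens only; logits has shape (seq_len, vocab)."""
    return F.cross_entropy(
        logits[:-1],  # position t predicts token t + 1
        labels[1:],
        ignore_index=IGNORE_INDEX,
    )
```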

Phase 3: Direct Preference Optimization (DPO)

Used preference-labeled datasets across healthcare, legal, and finance domains to align the model with preferred user responses. This stage was used in place of RLHF to reduce computational cost while preserving high-quality, real-time outputs suitable for deployment on edge devices.
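
For reference, the DPO objective itself is compact: it widens the margin between the log-likelihood of preferred and rejected responses relative to a frozen reference model. The sketch below is a generic formulation of that loss, not the specific training configuration used for Shakti-250M; the beta value shown is a typical default.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Direct Preference Optimization loss (generic formulation).

    Each argument holds summed log-probabilities of chosen or rejected
    responses under the policy or the frozen reference model; beta controls
    how far the policy may drift from the reference.
    """
    chosen_reward = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_reward = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin between preferred and rejected responses.
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()
```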

Benchmark Results and Comparison

Shakti-250M remains competitive on a range of NLP benchmarks while being smaller and more efficient than larger models such as Boomer-1B and Llama 3.2 1B. Despite using fewer training tokens, it handles a wide range of tasks effectively, owing to clean, high-quality training data and an optimized training process.

Key Insight

This model shows that careful training and well-curated datasets can matter more than parameter count alone. Its lightweight design makes it well suited to real-world applications on mobile devices, IoT systems, and other resource-limited environments.

| Benchmark | Shakti-250M | Boomer-1B | Boomer-634M | Qwen2.5-0.5B | SmolLM-360M | Llama 3.2 1B |
|---|---|---|---|---|---|---|
| MMLU | 28.98 | 25.92 | 25.23 | 47.5 | 34.4 | 32.2 |
| BigBench Hard | 13.75 | 28.65 | 21.11 | 20.3 | 24.4 | 30.93 |
| IFEval | 12.83 | 23.81 | 22.22 | 27.9 | 19.8 | 59.5 |
| HellaSwag | 29.96 | 31.66 | 34.08 | 52.1 | 51.8 | 41.2 |
| ANLI | 33.40 | 32.57 | 27.5 | 26.85 | - | 22.56 |
| PIQA | 63.22 | 60.78 | 62.57 | 72.50 | 71.6 | 80.64 |
| OpenBookQA | 16.60 | 22.56 | 35.76 | 30.73 | 37.2 | 37 |
| TruthfulQA (MC2) | 20.69 | 25.69 | 27.57 | 40.2 | - | 30.7 |
| WinoGrande | 52.97 | 45.79 | 51.07 | 56.3 | 52.8 | 60 |
| ARC Challenge | 41.20 | 40.78 | 62.57 | 35.6 | 50.1 | 32.8 |
| SQuAD | 23.25 | 67 | 57.5 | 52.94 | - | 49.2 |
| TriviaQA | 1.68 | 25.2 | 52.73 | 12.5 | 9.1 | 25.69 |
| GSM8K | 2.3 | 1.5 | 0.91 | 41.6 | - | 44.4 |
| MATH | 21.71 | - | 23.38 | 19.5 | - | - |

Domain Specific Benchmarks

Shakti-250M demonstrates strong performance in the healthcare and finance domains, making it a versatile model for domain-specific applications. In healthcare, it performs well on tasks requiring medical reasoning and shows solid capabilities in understanding and applying clinical knowledge. The model's compact size and efficiency make it a practical choice for edge devices and IoT deployment in both healthcare and finance applications.

| Benchmark | Shakti-250M | Phi-1.5-1.3B | Gemma-2B | Opt-2.7B |
|---|---|---|---|---|
| MedQA | 41.25 | 31.11 | 29.22 | 27.1 |
| MedMCQA | 34.87 | 34.31 | 30.22 | 25.63 |
| PubMedQA | 58.21 | 67.8 | 66.4 | 60.8 |
| MMLU Professional Medicine | 28.4 | 29.04 | 18.01 | 16.54 |
| MMLU Medical Genetics | 31.42 | 42 | 28 | 23 |
| MMLU College Medicine | 30.45 | 37.57 | 31.21 | 24.86 |
| MMLU College Biology | 31.25 | 34.03 | 33.33 | 20.14 |
| MMLU Clinical Knowledge | 36.78 | 46.04 | 35.47 | 23.02 |
| MMLU Anatomy | 39.42 | 39.26 | 37.04 | 32.59 |
| PatronusAI finance-bench-test | 32.2 | - | - | - |
| jan-hq finance-benchmark mcq | 23.1 | - | - | - |

Domain Performance Highlights

Healthcare Excellence: Strong performance on MedQA (41.25%) and MedMCQA (34.87%)
Medical Knowledge: Competitive results across MMLU medical benchmarks
Finance Expertise: Reported results on finance-specific benchmarks (PatronusAI finance-bench-test: 32.2; jan-hq finance-benchmark mcq: 23.1)
Clinical Applications: Suitable for real-time medical assistance and diagnostics
Financial Tools: Ideal for forecasting and decision-making applications
Edge Deployment: Optimized for resource-constrained healthcare and finance environments

Conclusion

Shakti-250M is a compact and efficient language model designed specifically for domain-focused applications in the finance, healthcare, and legal sectors. With just 250 million parameters, it balances performance and resource usage, making it well suited to mobile, IoT, and edge devices. Despite its smaller size, Shakti-250M delivers strong results on domain-specific benchmarks, in several cases outperforming larger models. Its fine-tuned datasets and optimization techniques help it handle complex tasks such as legal summarization, financial forecasting, and medical Q&A with high accuracy. Overall, Shakti-250M shows that careful design and focused training can enable small models to excel in real-world, domain-specific applications.