LeanLogix Labs

Research & Benchmarks

Peer-reviewed research from our ML engineering team. Every claim is backed by reproducible benchmarks, open methodology, and production-validated results.

Featured Publications

March 2026

Beyond FP16: Achieving Near-Lossless 4-bit Inference on Commodity Mobile Hardware

K. Chen, A. Petrov, R. Tanaka et al.

We present a novel calibration-aware quantization pipeline that achieves sub-2% perplexity degradation on INT4-quantized transformer models running on Apple A17 Pro and Snapdragon 8 Gen 4. Our approach combines activation-aware weight quantization (AWQ) with recursive distillation feedback loops, enabling deployment of 7B-class models on devices with less than 6 GB of unified memory.

Quantization · Mobile · INT4 · AWQ
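The paper's calibration pipeline is not public, but the core activation-aware idea behind AWQ can be sketched: fold per-input-channel activation magnitudes into the weights so that channels feeding large activations keep more precision, then apply symmetric group-wise INT4 rounding. This is a minimal NumPy illustration; `awq_int4_quantize`, the `alpha` exponent, and the group size are assumed illustrative choices, not the authors' implementation.

```python
import numpy as np

def awq_int4_quantize(W, act_mag, alpha=0.5, group_size=128):
    """Illustrative activation-aware INT4 quantization (AWQ-style).

    Scales each input channel of W by act_mag**alpha before rounding,
    so salient channels see a finer effective grid. Returns the
    dequantized (fake-quantized) weights for quality emulation.
    """
    s = act_mag ** alpha                       # per-input-channel scale
    Ws = W * s                                 # fold scale into weights
    out = np.empty_like(Ws)
    # symmetric 4-bit grid: integers in [-8, 7], one scale per group
    for i in range(0, Ws.shape[1], group_size):
        g = Ws[:, i:i + group_size]
        scale = np.abs(g).max() / 7.0 + 1e-8
        q = np.clip(np.round(g / scale), -8, 7)
        out[:, i:i + group_size] = q * scale   # dequantize
    return out / s                             # unfold activation scale

W = np.random.randn(64, 256).astype(np.float32)
act = np.abs(np.random.randn(256)).astype(np.float32) + 0.1
W_q = awq_int4_quantize(W, act)
print(float(np.abs(W - W_q).mean()))
```

In a real pipeline the activation magnitudes would come from a calibration set run through the model, and the integer weights (not the dequantized copies) would be packed two per byte for the 4x memory reduction the table below reflects.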

February 2026

Recursive Distillation: Leveraging 405B Teachers for Sub-Billion Student Excellence

M. Okafor, S. Alvarez, J. Kim

We introduce Recursive Feedback Distillation (RFD), a multi-cycle teacher-student training paradigm where the student model is iteratively evaluated against a 405B-parameter oracle across 1,200+ domain-specific benchmarks. After 47 refinement cycles, our 1B-parameter student achieves 97.3% of the teacher's performance on medical NLU tasks while requiring 99.7% less compute at inference time.

Distillation · Recursive Training · SLM
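RFD's 47-cycle refinement process is described only at a high level, but the inner loop of any teacher-student scheme reduces to a distillation objective. A minimal sketch of the standard temperature-scaled KL divergence between teacher and student logits follows; `distill_loss` and the temperature value are assumptions for illustration, not details from the paper.

```python
import numpy as np

def softmax(z, T=1.0):
    """Numerically stable softmax at temperature T."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=2.0):
    """Temperature-scaled KL(teacher || student).

    The T*T factor keeps gradient magnitudes comparable across
    temperatures, following standard knowledge-distillation practice.
    """
    p = softmax(teacher_logits, T)            # soft teacher targets
    log_p = np.log(p + 1e-12)
    log_q = np.log(softmax(student_logits, T) + 1e-12)
    return (T * T) * (p * (log_p - log_q)).sum(axis=-1).mean()

teacher = np.array([[2.0, 0.5, -1.0]])
student = np.array([[1.0, 0.2, -0.5]])
print(distill_loss(student, teacher))
```

A recursive scheme like RFD would presumably recompute this loss against fresh teacher outputs on the benchmark suite each cycle, using the per-domain gaps to steer the next round of training data.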

January 2026

Ternary Transformers: The 1.58-bit Frontier for Tactical Edge Deployment

R. Tanaka, D. Hassan, K. Chen

We explore the extreme quantization frontier by applying ternary weight representations ({-1, 0, +1}) to transformer architectures optimized for tactical edge environments. Our 1.58-bit models achieve 89.4% of FP16 accuracy on classification tasks while enabling inference on FPGA-based hardware with power budgets under 5W — critical for disconnected, denied, and degraded (D3) operational contexts.

Ternary · 1.58-bit · Edge · Defense
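The 1.58-bit figure is log2(3), the information content of a three-valued weight. The paper's exact quantizer is not given; a common approach for ternary weights is absmean scaling in the style of BitNet b1.58, sketched below (`ternary_quantize` is an illustrative name, not the authors' API).

```python
import numpy as np

def ternary_quantize(W, eps=1e-8):
    """Absmean ternary quantization sketch.

    Scales W by the mean absolute weight, then rounds to {-1, 0, +1}.
    The effective weight is gamma * Wq, so matmuls reduce to additions
    and subtractions plus one scalar multiply per tensor.
    """
    gamma = np.abs(W).mean() + eps
    Wq = np.clip(np.round(W / gamma), -1, 1)
    return Wq, gamma

W = np.random.randn(4, 8).astype(np.float32)
Wq, gamma = ternary_quantize(W)
print(np.unique(Wq))
```

Because every multiply collapses to an add/subtract/skip, this representation maps naturally onto the FPGA adder trees and sub-5W power budgets the abstract describes.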

Live Benchmarks

Logix-Refined vs. Stock Models

Side-by-side comparison on identical hardware (Apple M4 Pro, 18GB unified memory).

Model            | Variant        | Size    | Latency | Perplexity | MMLU
Llama 3.2-1B     | Stock FP16     | 2.0 GB  | 89 ms   | 9.12       | 46.2%
Llama 3.2-1B     | Logix INT4     | 0.54 GB | 14 ms   | 9.31       | 45.8%
Phi-4-Mini       | Stock FP16     | 7.6 GB  | 210 ms  | 6.84       | 72.1%
Phi-4-Mini       | Logix INT4     | 2.1 GB  | 31 ms   | 6.97       | 71.4%
Gemma 2-2B       | Stock FP16     | 5.0 GB  | 156 ms  | 7.42       | 58.7%
Gemma 2-2B       | Logix INT4     | 1.35 GB | 22 ms   | 7.61       | 57.9%
LeanLogix-7B-Med | Custom Ternary | 1.2 GB  | 23 ms   | 5.94       | 78.3%
Mistral-7B       | Stock FP16     | 14.5 GB | 340 ms  | 5.32       | 81.1%
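The Size column follows a simple rule of thumb: parameters × bits per weight ÷ 8, plus metadata overhead for quantized variants (group scales, higher-precision embeddings). A hypothetical helper to sanity-check the table:

```python
def model_size_gb(params_billions, bits_per_weight, overhead=1.0):
    """Rough on-disk footprint in GB: params * bits / 8.

    The overhead factor stands in for quantization metadata and any
    layers kept at higher precision; real checkpoints vary.
    """
    return params_billions * bits_per_weight / 8 * overhead

print(model_size_gb(1.0, 16))  # 2.0 GB, matching the Llama 3.2-1B FP16 row
print(model_size_gb(1.0, 4))   # 0.5 GB ideal, vs 0.54 GB measured with metadata
```

The same arithmetic explains the ternary row: 7B weights at ~1.58 bits land near 1.4 GB ideal, consistent with the 1.2 GB reported once sparsity and packing are accounted for.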

Hardware: Apple M4 Pro · 18GB Unified · macOS 15.3 · MLX v0.21

Updated: March 2026