<?xml version="1.0" encoding="UTF-8"?>
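<!--
  Note on usage: per the <generator> and <docs> elements below, this file is produced by the
  Python rfeed library (v1.1.1) as a web-scraping RSS feed of Hugging Face trending posts.
  A minimal sketch of how the feed could be consumed, assuming the third-party Python
  "feedparser" package (not part of this repository):

    import feedparser                      # pip install feedparser

    # Parse the local feed file; a URL to the hosted copy would work the same way.
    feed = feedparser.parse("hf_posts.xml")

    print(feed.feed.title)                 # channel title: "Hugging Face Posts"
    for entry in feed.entries:
        # Each <item> maps to an entry exposing title, link, summary, and published date.
        print(entry.title)
        print(entry.link)
        print(entry.published)
-->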
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/"><channel><title>Hugging Face Posts</title><link>https://huggingface.co/</link><description>This is a web-scraping RSS feed for Hugging Face trending posts.</description><generator>rfeed v1.1.1</generator><docs>https://github.com/svpino/rfeed/blob/master/README.md</docs><item><title>By trying to disprove the Omega H2 battery I have discovered;</title><link>https://huggingface.co/posts/AbstractPhil/850797268513183</link><description>By trying to disprove the Omega H2 battery I have discovered; * Each topology formed by the H2 battery is deviant, none have a uniformly shared substrate of behavior. They are each uniquely independent per training set all with perfect recon. * Image recon can be tracked and mapped, yielding a consistently mapped and response 16.77m vocabulary potential. In the current spectrum testing at around 5 million unicode bytes. * The model scale shows patch size is related to how much data you want the model to represent within the model itself, and this has yet to see a capacity to this day. The MSE recons and yields - and the more data fed, the more they yield. * The scaling principle shows that the model indefinitely scales upward and each level of the model can be iteratively captured upward to form deviant and uniformly consistent repeatable pathways of implicit codewise response, not just arbitrary bitwise recall. Meaningful implicit learned utility. * Image recon patch size should...</description><pubDate>Sun, 03 May 2026 17:50:51 GMT</pubDate><guid isPermaLink="true">https://huggingface.co/posts/AbstractPhil/850797268513183</guid></item><item><title>Multimodal-Edge Demo, a node-based inference canvas demo, is now live on Spaces. It features node-based Transformers for fast inference across 10+ edge-device multimodal models on the Hub, all within a single space. The series includes models from Qwen3.5, Qwen3-VL, Gemma 4, and the LFM 2.5 VL model series, with support for reasoning and grounding tasks.</title><link>https://huggingface.co/posts/prithivMLmods/636036299853646</link><description>Multimodal-Edge Demo, a node-based inference canvas demo, is now live on Spaces. It features node-based Transformers for fast inference across 10+ edge-device multimodal models on the Hub, all within a single space. The series includes models from Qwen3.5, Qwen3-VL, Gemma 4, and the LFM 2.5 VL model series, with support for reasoning and grounding tasks. 🤗 Demo: prithivMLmods/Multimodal-Edge-Node 🔗 GitHub: https://github.com/PRITHIVSAKTHIUR/Multimodal-Edge-Node ✅ Multimodal Apps Collections: https://huggingface.co/collections/prithivMLmods/hall-of-multimodal-apps 🤗 > To learn more, visit the app page or the respective model pages. See translation</description><pubDate>Sun, 03 May 2026 17:50:51 GMT</pubDate><guid isPermaLink="true">https://huggingface.co/posts/prithivMLmods/636036299853646</guid></item><item><title>SciCrafter measured something AI practitioners have intuited: frontier agents are improving at executing inside well-framed problems, but lag at framing the problem in the first place.</title><link>https://huggingface.co/posts/salma-remyx/889764886790464</link><description>SciCrafter measured something AI practitioners have intuited: frontier agents are improving at executing inside well-framed problems, but lag at framing the problem in the first place. 
GPT-5.2, Gemini-3-Pro, and Claude Opus 4.5 all plateaued near 26% on a new Minecraft benchmark for probing AI capabilities in the discovery-to-application loop. So the authors ran targeted interventions: * Hints about what to investigate doubled performance. * A structured experimentation template added 7-14 more points. * Structured consolidation beat free-form summaries by 6 points. * Curriculum context beat independent task-solving. These interventions helped the agent frame what’s worth investigating, and structure what gets learned so it compounds. The bottleneck for AI in scientific workflows is upstream of execution. Their findings are congruent with the design patterns we've adopted at Remyx AI to help AI teams close the development loop scientifically. Agents work well inside structured...</description><pubDate>Sun, 03 May 2026 17:50:51 GMT</pubDate><guid isPermaLink="true">https://huggingface.co/posts/salma-remyx/889764886790464</guid></item><item><title>Şifahane, a dual-inference medical classification demo, is now live on Spaces. It features side-by-side Turkish BERT and Qwen2.5 architectures for real-time evaluation of the "Classifier vs. LLM" trade-offs, all within a single space. The system utilizes a fine-tuned Turkish BERT for high-speed, cost-effective inference and the Qwen2.5-7B model for flexible multi-task reasoning, with support for department classification, condition analysis, urgency assessment, and rationale generation across 12 medical departments.</title><link>https://huggingface.co/posts/cihatyldz/143573654584943</link><description>Şifahane, a dual-inference medical classification demo, is now live on Spaces. It features side-by-side Turkish BERT and Qwen2.5 architectures for real-time evaluation of the "Classifier vs. LLM" trade-offs, all within a single space. The system utilizes a fine-tuned Turkish BERT for high-speed, cost-effective inference and the Qwen2.5-7B model for flexible multi-task reasoning, with support for department classification, condition analysis, urgency assessment, and rationale generation across 12 medical departments. 🧠 BERT model: https://lnkd.in/dCUUASqq 📊 Dataset: https://lnkd.in/dGK9y24w 🤗 Demo: https://lnkd.in/dtWjCCPF See translation</description><pubDate>Sun, 03 May 2026 17:50:51 GMT</pubDate><guid isPermaLink="true">https://huggingface.co/posts/cihatyldz/143573654584943</guid></item><item><title>[DAY TWO] PROJECT CROWFEATHER - 5/1/2026</title><link>https://huggingface.co/posts/Crownelius/763437334277546</link><description>[DAY TWO] PROJECT CROWFEATHER - 5/1/2026 Que sera, what will he be? Step 47,500 of 100,000. Loss hovering around 2.76 on 6.2B tokens. Throughput steady at 87k per second on the A100. Not a GH200, but she gets it done. Still haven't named him. Scamp has a rascally charm. Quentin sounds like he'd wear a bow tie and think hard before speaking. Taking votes. Phase two is what's keeping me up. Datasets everywhere and I can't pick. I'm fusing Google and DeepSeek's ideas: Gemma 4's alternating sliding and global attention, DeepSeek V4's Muon optimizer and WSD scheduler, Gemma 2's logit soft cap, and PaLM's z-loss. Sounds like peanut butter on a hamburger, but the loss curve says it works. Tribe_v2 has real potential but needs more scaffolding than a barn raising before I throw it in. One thing's certain though. This model's gonna be a thinker. Not a Wikipedia parrot. Something that chews before it answers. Finally got a use for my less popular datasets too. 
Some Opus-4.5-Writing-Style for...</description><pubDate>Sun, 03 May 2026 17:50:51 GMT</pubDate><guid isPermaLink="true">https://huggingface.co/posts/Crownelius/763437334277546</guid></item><item><title>EARLY SNEAK PREVIEW of our first DeepSeek-V4-Pro dataset, Tachibana 4!</title><link>https://huggingface.co/posts/sequelbox/129216426949569</link><description>EARLY SNEAK PREVIEW of our first DeepSeek-V4-Pro dataset, Tachibana 4! Tachibana 4 is our upcoming agentic coding dataset: - Questions prioritize real-world, challenging agentic coding tasks across a variety of programming languages and topics. - Areas of focus include back-end and front-end development, systems programming, distributed systems, performance optimization, data structures, databases and data engineering, game and mobile development, security engineering, compiler design, custom tooling, task automation, practical bugfixes, and more! - A wide variety of emphasized languages improves development capability: Python, C, C++, C#, Go, TypeScript, Java, JavaScript, Rust, Haskell, SQL, Shell, R, Ruby, assembly code, and more! - Synthetic prompts utilize a variety of personas, experience levels, and styles of communication to maximize real-world flexibility and usability. Get it now: sequelbox/Tachibana4-DeepSeek-V4-Pro-PREVIEW These agentic datasets will power the upcoming...</description><pubDate>Sun, 03 May 2026 17:50:51 GMT</pubDate><guid isPermaLink="true">https://huggingface.co/posts/sequelbox/129216426949569</guid></item><item><title>Uncensored, Heretic, Qwen 3.6 27B GGUFs - Exceeds all quant metrics and core model metrics too.</title><link>https://huggingface.co/posts/DavidAU/679528728169238</link><description>Uncensored, Heretic, Qwen 3.6 27B GGUFs - Exceeds all quant metrics and core model metrics too. Tuned 27B Heretic Uncensored quants from IQ2M to Q8. IQ2M is 83% of BF16, with Q6 just under 98% of BF16 precision. Q8: 98.47% of BF16 precision. NEO/Code DI-Imatrix Quants. Exceeds all 5 metrics for "censored" quants too. All metrics posted. Tuned model -from which the quants were built- also exceeds Qwen 3.6 27B core metrics too. DavidAU/Qwen3.6-27B-Heretic-Uncensored-FINETUNE-NEO-CODE-Di-IMatrix-MAX-GGUF See translation</description><pubDate>Sun, 03 May 2026 17:50:51 GMT</pubDate><guid isPermaLink="true">https://huggingface.co/posts/DavidAU/679528728169238</guid></item><item><title>Day 3 - 05/02/2026</title><link>https://huggingface.co/posts/Crownelius/258416991176465</link><description>Day 3 - 05/02/2026 Scamp ships, hits the wall. New plan... Scamp came back from training today... Didn't go so well, I'm still unsure... Fast benchmark, temperature 0.7, top_p 0.9: - "Capital of France is" produced "covered by the Crown" (grammatical, factually wrong) - "23 + 19 = ?" produced "23. Answer: 23. Answer: 23..." (loops, math broken) - "def fibonacci(n):" produced a list of letters It speaks English. It can't reason. At 8K vocab and 50M params, it was never going to. Next build: 412M MoE-3E. Three experts (math, language, code), top-1 routing, random init, let specialization emerge from gradient signal alone. Tried seeded Branch-Train-MiX first then dropped it. Adds compute for no clear win when the router will find its own attractors anyway. Big lesson today came from limit testing on A100 80GB. Surprise, every planned phase ran out of memory even on 80GB. Root cause: at vocab 262144 (Gemma 3 standard), the output logits dominate during forward and backward. 
Fix: Liger...</description><pubDate>Sun, 03 May 2026 17:50:51 GMT</pubDate><guid isPermaLink="true">https://huggingface.co/posts/Crownelius/258416991176465</guid></item><item><title>Experimental global target bits‑per‑weight quantization of Qwen/Qwen3.6-27B and Qwen/Qwen3.6-35B-A3B.</title><link>https://huggingface.co/posts/eaddario/861558852118345</link><description>Experimental global target bits‑per‑weight quantization of Qwen/Qwen3.6-27B and Qwen/Qwen3.6-35B-A3B. Unlike standard llama.cpp quantizations that rely on fixed type heuristics (e.g., Q4_K_M), the Target BPW approach optimizes per-tensor precision where it matters the most, and produces high quality models that meet a precise global file size target. Key Advantages: - VRAM Maximization: Can generate high quality models sized exactly to fit hardware constraints (e.g., fitting the model into exactly 24GB VRAM). - Data-Driven Precision: Quantization mix is determined by actual weight error sensitivity rather than hardcoded rules, often yielding better PPL/KLD size trade-offs. Full benchmarks (PPL, KLD, ARC, GPQA, MMLU, etc.) and methodology in the models' cards. eaddario/Qwen3.6-27B-GGUF eaddario/Qwen3.6-35B-A3B-GGUF See translation</description><pubDate>Sun, 03 May 2026 17:50:51 GMT</pubDate><guid isPermaLink="true">https://huggingface.co/posts/eaddario/861558852118345</guid></item><item><title>🚀 Two releases this week pushing merge methodology forward.</title><link>https://huggingface.co/posts/ManniX-ITA/825735058318653</link><description>🚀 Two releases this week pushing merge methodology forward. ▶ Qwen3.6-27B-Omnimerge-v4-MLP ManniX-ITA/Qwen3.6-27B-Omnimerge-v4 Same-base DARE-TIES merge of Qwen3.6-27B + 3 fine-tunes (rico03 Claude distill, Esper3.1, kai-os Opus reasoning anchor) via my Omnimerge_v2 method (OBIM-lite + DAREx-q + EMR election). Hit a Qwen3.6-specific fragility: hyperparams that work flawlessly on 3.5 produced 80% unclosed-&lt;think&gt; on 3.6, collapsing pass@1 to ~20%. Per-tensor delta forensics localized the failure to mlp.{gate,up,down}_proj in layers 27–52. Fix: MLP-passthrough surgery — copy MLPs verbatim from base, keep merged attn + linear_attn. Leak → 0%. Q6_K results (vs Qwen3.6 base / vs Omnimerge-v2 on Qwen3.5): • HumanEval: 84.76% (= base, +5.49 pp vs v2) • MBPP corrected: 73.40% (+15.80 pp vs base, ≈ v2) • GPQA Diamond: ~84.75% partial 192/198 (+15.5 pp vs v2) ▶ Qwen3.5-4B Importance-Signal Study (M1..M5) Controlled 5-way comparison: same Qwen3.5-4B base, same 2 fine-tunes (Jackrong Claude-4.5...</description><pubDate>Sun, 03 May 2026 17:50:51 GMT</pubDate><guid isPermaLink="true">https://huggingface.co/posts/ManniX-ITA/825735058318653</guid></item></channel></rss>