May 22, 2025
Dear Readers,
Welcome to the 10th edition of Fine-Tuned by Genloop! We're releasing a bit later than usual thanks to an exciting series of events—Google I/O and Microsoft Build—plus a collaboration we are thrilled to share. We're delighted to announce our partnership with Axtria, a leader in AI-powered analytics for life sciences. By coupling Axtria's deep pharma expertise with Genloop's strength in personalizing LLMs, we'll power a new wave of Agentic AI tailored for life sciences organizations. More details below, with additional updates coming in future editions!
This week's highlights showcase our top picks from Google I/O and Microsoft Build, featuring Google's Veo 3 launch and Microsoft's enhanced Copilot offering. We also cover Anthropic's legal citation mishap, share insights on custom LLMs, and dive into cutting-edge research in reasoning models.
Let's dive in!
🌟 AI Industry Highlights
Google Pushes AI Mode at I/O 2025
At I/O 2025, Google showcased major advances in generative and agentic AI, introducing new models across video, text, and image while demonstrating enhanced reasoning and contextual capabilities.
Key highlights:
Veo 3, Gemini 2.5 Pro with Deep Think & Diffusion Model: Google launched Veo 3 for photorealistic, sound-synced video generation and unveiled Gemini 2.5 Pro with a new Deep Think mode capable of multi-path reasoning that outperforms OpenAI's o3. They also introduced a new model that generates text responses through diffusion rather than autoregressive decoding.
Project Astra Advances: Google's real-time agent, Project Astra, can now interpret live camera input and engage in contextual, spoken interaction, transforming smartphones into agentic companions that "see" and respond intelligently to their environment.
AI Mode in Search: The new AI-powered Search Mode transforms how people search by providing summarized answers, exploratory guidance, and personalized insights for tasks like shopping or planning, moving beyond traditional link-based results.
These updates are truly impressive, and Google has clearly recaptured both its technological lead and the world's attention. After all, attention is all you need!

Microsoft Unveils Agentic AI Vision at Build 2025
At Build 2025, Microsoft spotlighted its vision for agentic AI—unveiling tools and protocols that enable developers to build collaborative, task-oriented AI agents and standardize interactions across the web.
Key highlights:
Agentic AI in Copilot Studio: Microsoft introduced multi-agent orchestration, allowing developers to compose multiple AI agents with distinct roles and capabilities into a single workflow.
Integration with Open Agentic Protocols: Support for the Model Context Protocol (MCP) and NLWeb standards improves interoperability and enables structured use of AI agents.
Microsoft Discovery Platform: A new agent-powered research environment designed to support scientific workflows.

Anthropic Lawyer Apologizes for Claude's Hallucinated Legal Citation
An Anthropic lawyer admitted to using an incorrect citation generated by the company's Claude AI chatbot during an ongoing legal battle with music publishers, as revealed in a Northern California court filing.
Key highlights:
Citation Error: Claude created "an inaccurate title and inaccurate authors" that slipped through the team's manual citation verification
Legal Response: In their formal apology to the court, Anthropic characterized it as "an honest citation mistake and not a fabrication of authority"
Broader Context: This incident adds to a growing list of AI-related legal mishaps, including a California judge's recent condemnation of "bogus AI-generated research"
Despite these setbacks, legal AI startups continue to thrive, with Harvey reportedly securing $250 million in funding at a $5 billion valuation to develop AI tools for legal work.
✨ Genloop Updates
Genloop x Axtria: Powering Agentic AI in Life Sciences
We're excited to announce our collaboration with Axtria to deliver domain-trained large language models (LLMs) designed to address the unique needs of life sciences organizations.
By enhancing Axtria's InsightsMAx.ai platform with Genloop's specialized LLMs, we're helping companies adopt Agentic AI that's accurate, scalable, and enterprise-ready.
Here’s what this means for life sciences companies:
Deep contextual understanding of industry-specific language and workflows
Secure, compliant deployment with privately hosted models
Adaptive learning from real-world task feedback
Scalable integration with existing enterprise systems—at a predictable cost
This collaboration brings the next generation of intelligent, task-oriented AI to life sciences—turning complexity into clarity.

Join Research Jam #5: ReasonIR Deep Dive
Research Jam #5 is around the corner! Join us on May 29 as we dive into "ReasonIR: Training Retrievers for Reasoning Tasks," the top research paper on LLM Research Hub for the first week of May 2025.
Spots are limited, so register today to secure your place!

📚 Featured Blog Post
We've got a fascinating read that showcases how the AI landscape is evolving:
The Intelligence Edge: Why Custom LLMs Go Beyond Privacy
Earlier this month, we shared a new perspective on the strategic value of custom LLMs, which moves beyond traditional privacy concerns to explore how personalized AI creates genuine competitive advantages for enterprises.
Key insights:
Beyond Infrastructure: Custom LLMs create behavioral alignment, not just secure processing locations
Strategic Differentiation: They deliver contextual precision and brand consistency as a semantic layer over company knowledge
Competitive Moats: Private models understand organizational workflows, while public models only know the internet
The piece argues that custom LLMs create embedded cognition and institutional memory rather than simple API interfaces.

🔬 Research Corner
Check out our latest Top 3 Papers of the Week [May 12 - May 16, 2025]. Each week, our AI agents scour the internet for the best research papers and evaluate their relevance, and our experts carefully curate the top selections. Don't forget to follow us to stay up to date with our weekly research curation!

Now, let's deep dive into the top research from the last three weeks:
ParScale: Parallel Scaling Law for Language Models
The Qwen team introduces ParScale, an innovative scaling approach that enhances language model performance through parallel computation streams without significantly increasing memory or latency costs.
Key findings:
Parallel Computation Design: Uses P learnable transformations to create multiple parallel computation streams whose outputs are aggregated, achieving gains comparable to scaling parameters by O(log P)
Resource Optimization: Delivers up to 22× less memory increase and 6× less latency increase than traditional parameter scaling for equivalent performance gains
Two-Stage Training: Combines traditional pre-training with focused post-training on small data subsets, enabling efficient recycling of existing pre-trained models
The research demonstrates significant potential for deploying advanced language models in resource-constrained and edge computing environments while maintaining performance.
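The core mechanism can be sketched in a few lines. The snippet below is a toy illustration, not the paper's implementation: the backbone is a fixed random map standing in for the shared LLM, the P learnable transformations are simplified to additive offsets, and aggregation is a learned softmax-weighted sum over the streams.

```python
import numpy as np

rng = np.random.default_rng(0)
d, P = 8, 4  # hidden size, number of parallel streams

# Frozen toy "backbone": a fixed nonlinear map standing in for the shared LLM.
W = rng.normal(size=(d, d))
def backbone(h):
    return np.tanh(h @ W)

# P learnable input transformations (simplified here to additive offsets).
prefixes = rng.normal(scale=0.1, size=(P, d))
# Learnable aggregation head producing one score per stream.
w_agg = rng.normal(scale=0.1, size=(P, d))

def parscale_forward(x):
    # Run the SAME backbone P times on differently transformed inputs.
    streams = np.stack([backbone(x + prefixes[p]) for p in range(P)])  # (P, d)
    # Dynamic aggregation: softmax over learned per-stream scores.
    logits = (w_agg * streams).sum(axis=1)            # (P,)
    weights = np.exp(logits) / np.exp(logits).sum()   # softmax
    return (weights[:, None] * streams).sum(axis=0)   # (d,)

out = parscale_forward(rng.normal(size=d))
print(out.shape)  # (8,)
```

Because only the prefixes and the aggregation head are new parameters, the memory cost grows far more slowly than duplicating the backbone would, which is where the efficiency claim comes from.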
Read Our TuesdayPaperThoughts analysis

Llama-Nemotron: Efficient Reasoning Models from NVIDIA
NVIDIA's latest Llama-Nemotron series introduces reasoning capabilities to Llama models through dynamic control mechanisms and multi-stage training pipelines.
Key findings:
Dynamic Reasoning Toggle: Models offer real-time switching between standard chat and reasoning-intensive modes during inference
Five-Stage Training: Advanced pipeline including neural architecture search, knowledge distillation, and large-scale reinforcement learning
Open-Source Release: Complete package with model weights, training code, and curated datasets for math, coding, and STEM tasks
The study demonstrates that Llama-Nemotron Ultra achieves competitive reasoning performance with models like DeepSeek-R1 while maintaining superior inference efficiency.
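In practice, the reasoning toggle is exposed through the system prompt. Here is a minimal sketch, assuming the "detailed thinking on/off" control phrase from NVIDIA's release notes; confirm the exact string against the model card before relying on it.

```python
def build_messages(user_prompt: str, reasoning: bool) -> list[dict]:
    """Build a chat request that toggles Llama-Nemotron's reasoning mode.

    The control phrase below is an assumption based on NVIDIA's published
    usage guidance; check the model card for the authoritative string.
    """
    system = "detailed thinking on" if reasoning else "detailed thinking off"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_prompt},
    ]

# Same model, two behaviors: a quick chat reply vs. a long reasoning trace.
chat_request = build_messages("What is 17 * 24?", reasoning=False)
reasoning_request = build_messages("What is 17 * 24?", reasoning=True)
```

The appeal of this design is operational: one deployed model serves both low-latency chat traffic and reasoning-heavy requests, switched per request rather than per deployment.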
Read Our TuesdayPaperThoughts analysis

ReasonIR: Training Retrievers for Reasoning Tasks
Researchers introduce ReasonIR-8B, the first retriever specifically designed for reasoning-intensive information retrieval and RAG applications, addressing limitations of traditional factual query retrievers.
Key findings:
Synthetic Training Pipeline: ReasonIR-Synthesizer creates diverse reasoning-intensive queries with hard negatives for improved reasoning task performance
BRIGHT Benchmark Leadership: Achieves 29.9 nDCG@10 without reranker and 36.9 with reranker, plus significant RAG gains on MMLU and GPQA
Efficient Test-Time Scaling: Outperforms larger reranker models while using 200× less compute for complex reasoning queries
The study demonstrates that specialized reasoning retrievers can significantly improve performance on complex tasks compared to retrievers optimized for short factual queries.
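For readers unfamiliar with the BRIGHT numbers above, nDCG@10 is the ranking metric being reported: discounted cumulative gain over the top 10 results, normalized by the gain of an ideal ordering. A minimal reference implementation:

```python
import numpy as np

def dcg_at_k(rels, k):
    """Discounted cumulative gain: relevance discounted by log2 of rank."""
    rels = np.asarray(rels, dtype=float)[:k]
    return float((rels / np.log2(np.arange(2, rels.size + 2))).sum())

def ndcg_at_k(rels, k=10):
    """nDCG@k: DCG of the given ranking divided by the ideal DCG."""
    ideal = dcg_at_k(sorted(rels, reverse=True), k)
    return dcg_at_k(rels, k) / ideal if ideal > 0 else 0.0

# A perfectly ordered result list scores exactly 1.0.
print(ndcg_at_k([3, 2, 1, 0]))  # 1.0
```

So a score of 29.9 (i.e. 0.299) without a reranker reflects how hard BRIGHT's reasoning-intensive queries are compared to standard factual retrieval benchmarks, where top systems score much closer to 1.0.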
Read Our TuesdayPaperThoughts analysis

Looking Forward
While new models are pushing boundaries daily, recent incidents at Anthropic, Cursor, and others highlight how far we still have to go. Yet we remain optimistic—as more capabilities become publicly available through open-source initiatives and research, we'll see increasingly refined and sophisticated applications.
That said, if you're looking to harness the power of LLMs for your use case and hitting roadblocks, we're all ears. Simply reply to us or schedule a demo to start the conversation!
About Genloop
Genloop empowers enterprises to deploy GenAI in production with agents that understand business know-how and processes. We help companies build personalized LLMs that deliver superior performance, control, and simplicity—ideal for use cases like chatting with enterprise databases and transforming click-based workflows into conversational interfaces. Visit genloop.ai, follow us on LinkedIn, or reach out at founder@genloop.ai to learn more.