Fine-tune a Mistral-7b model with Direct Preference Optimization
Boost the performance of your supervised fine-tuned models
How ChatGPT is Transforming the Way We Teach Software Development
Learning to code when AI assistants already master the skill
A Winding Road to Parameter Efficiency
Deliberately Exploring Design Decisions for Parameter Efficient Finetuning (PEFT) with LoRA
SW/HW Co-optimization Strategy for LLMs – Part 2 (Software)
Software is eating the world. What does the software landscape of LLMs look like, and which emerging libraries and frameworks improve LLM performance?
A Surgeon's Reflections on Artificial Intelligence
A Clinical Perspective on Medical Innovation
Tuning-Free Longer Context Lengths For LLMs – A Review of Self-Extend (LLM Maybe LongLM)
A simple strategy to enable LLMs to consume longer context length inputs during inference without the need for finetuning.
Philosophy and data science – Thinking deeply about data
Part 3: Causality
2024: The year of the value-driven data person
Growth at all costs has been replaced with a need to operate efficiently and be ROI-driven; data teams are no exception
Prompt Engineering, Agents, and LLMs: Kickstart a New Year of Hands-On Learning about AI
The stories that resonated the most with our community in the past month
Generative AI is a Gamble Enterprises Should Take in 2024
LLMs today suffer from inaccuracies at scale, but that doesn't mean you should cede competitive ground by waiting to adopt generative AI.
Navigating the AI Landscape of 2024: Trends, Predictions, and Possibilities
2024 beckons with a promise of innovation - a year where AI and technology converge to redraw the maps of possibility.
How to Cut RAG Costs by 80% Using Prompt Compression
Accelerating Inference With Prompt Compression
What Makes A Strong AI?
"The Book of Why" Chapters 9 & 10, a Read with Me series
LLMs for Everyone: Running the LLaMA-13B model and LangChain in Google Colab
Experimenting with Large Language Models for free (Part 2)
Future-Proof The Value Of Your Data Science Capability
By integrating data-engineering aptitude
What Next? Exploring Graph Neural Network Recommendation Engines
It's so difficult to decide what to watch next. Let's build an AI algorithm to do it for us!
Data Science Better Practices, Part 2 – Work Together
You can't just throw more data scientists at this model and expect the accuracy to magically increase.
AI-Powered Customer Support App: Semantic Search with PGVector, Llama2 with a RAG System, and…
Enhancing Communication in Global Markets: Leveraging PGVector for Multilingual Semantic Search, Llama2-Powered RAG Systems, and...
Why Do Data Teams Fail at Delivering Tangible ROI?
Identifying the popular obstacles of data teams in delivering tangible ROI
Methods for generating synthetic descriptive data
Use various data source types to quickly generate text data for artificial datasets.
The current state of continual learning in AI
Why is ChatGPT only trained up until 2021?
Optimizing Pandas Code: The Impact of Operation Sequence
Learn how to rearrange your code to achieve significant speed improvements.
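The article itself isn't reproduced here, but the core idea can be sketched: when two operations commute in result but not in cost, doing the cheaper, row-reducing one first saves work. A minimal illustration (my own example, not taken from the article) with a hypothetical filter-then-aggregate pipeline:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": rng.integers(0, 100, 100_000),
    "value": rng.random(100_000),
})
wanted = [1, 2, 3]

# Slower order: aggregate all 100 groups, then keep only three of them.
slow = df.groupby("group")["value"].mean().loc[wanted]

# Faster order: filter first, so the groupby only touches the rows it needs.
fast = df[df["group"].isin(wanted)].groupby("group")["value"].mean()

# Both orders produce the same means; only the amount of work differs.
assert np.allclose(slow.to_numpy(), fast.to_numpy())
```

The same principle applies to merges and string operations: reducing the number of rows before an expensive step usually beats reordering anything inside it.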