📅 Event Date: September 20th - 22nd, 2024

🌟 Welcome, Innovators and AI Enthusiasts!

Think you can push AI to its limits? Step up with Tune AI at PennApps XXV and dive into the challenge of reimagining how we engage with Large Language Models (LLMs).

We’re calling all innovators to create bold, real-world solutions using the power-packed features of Tune Studio. Time to flex those creative muscles and show the world what LLM-driven applications are truly capable of.

🎤 Keynote Presentation

Don’t miss our exciting keynote! Get inspired and learn more about the cutting-edge developments in AI and Tune Studio.

Keynote Slides: TuneAI X PennApps Keynote Slides

Make sure to check out the slides for valuable insights, resources and information that could help you in the hackathon!

🛡️ The Challenge: Build Next-Gen LLM Applications Using Tune Studio

Your mission, should you choose to accept it, is to develop a next-gen application powered by LLMs that addresses real-world challenges in creative, impactful, and practical ways. All participants must utilize Tune Studio—whether it’s using our assistants or leveraging fine-tuning capabilities—to build your app.

How to Participate

  1. Utilize Tune Studio for assistant development or model fine-tuning in your app.
  2. Build something that showcases innovation, practicality, and user impact.
  3. Submit your project via the provided hackathon platform by the deadline.
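Whatever you build, step 1 usually begins with a call to a hosted model. Most LLM platforms expose a chat-completions style HTTP API; the sketch below assumes an OpenAI-compatible endpoint, and the URL, model ID, and key shown are placeholders only. Check the Tune Studio documentation for the real values before wiring this up.

```python
import json
import urllib.request

# Placeholder values -- replace with the endpoint, model ID, and API key
# from your Tune Studio dashboard. These names are illustrative only.
API_URL = "https://example.tune-studio.invalid/v1/chat/completions"
MODEL_ID = "your-fine-tuned-model"

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Assemble a chat-completions request in the common OpenAI-style shape."""
    payload = {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_request("Summarize retrieval-augmented generation in one line.", "sk-demo")
print(json.loads(req.data)["model"])
```

Sending the request is one `urllib.request.urlopen(req)` away once the real endpoint and key are in place.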

🏆 Key Judging Criteria

  • Innovation (30%): Is your idea fresh, unique, and innovative?
  • Technical Implementation (25%): How well does your solution utilize Tune Studio and LLMs?
  • Impact (15%): How does your application solve a real-world problem?
  • User Experience (30%): Is the solution intuitive and enjoyable to use?
  • Bonus: Successfully fine-tune an LLM for an assistant and go above and beyond!

🛠️ Main Challenges

🌪️ Challenge 1: RAGForge

Mission: Build an AI assistant backed by an LLM that you fine-tune on a dataset of your choice. Pick something unique and fun that genuinely broadens the model's horizons, then add functionality on top: generating Wiki-style pages, retrieving images on the fly, and letting users customize each page through chat.

Once your project can produce a wiki-style article on any topic, it should maintain a library of potential links, reference the raw source data, and answer queries about the page using retrieval. It should also let users make additions and modifications to the page through plain text instructions. Combine fine-tuning and AI Assistants from Tune Studio to conquer this challenge.
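The retrieval half of this challenge boils down to ranking your stored documents against a query. A minimal self-contained sketch is below; it uses a bag-of-words cosine similarity purely so it runs without dependencies, whereas a real RAG pipeline would swap in proper sentence embeddings and a vector store.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a production system would use a
    # sentence-embedding model here instead.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank every document against the query and return the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "The Eiffel Tower is in Paris and was completed in 1889.",
    "Penguins are flightless birds found mainly in the Southern Hemisphere.",
    "Paris is the capital of France.",
]
print(retrieve("Where is the Eiffel Tower?", docs, k=1))
```

The retrieved passages would then be pasted into the assistant's prompt as context before it answers, which is what lets it respond to queries about the wiki page.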

Scoring Highlights:

  • Strong Document Retrieval Abilities: Accuracy and speed in fetching relevant information from your knowledge base.
  • Contextual Understanding over the Knowledge Base: Ability to understand and respond to queries with pinpoint precision.
  • Text-Based Additions: Seamlessly updating and expanding the knowledge base through conversation.

🧠 Challenge 2: Ensemble of Models

Mission: Your challenge is to create a synthetic dataset and fine-tune multiple LLMs using Tune Studio. The goal is to build an ensemble of models that can work collaboratively, similar to an ensemble learning approach, where multiple smaller models (e.g., 7B parameters) combine their strengths to match or even surpass the performance of a larger model (e.g., 70B).

Each LLM in your ensemble should specialize in different aspects of the task at hand, whether it’s understanding specific domains, handling various types of input, or focusing on different linguistic patterns. By distributing tasks intelligently and coordinating between models, you can maximize context usage and efficiency, all while conserving compute resources.

Your objective is to find the right balance between collaboration and specialization among the models, enabling them to answer user queries with the same versatility and depth that a singular large model might offer. This involves careful model fine-tuning, optimized communication between models, and the strategic use of compute resources to ensure the ensemble operates smoothly without redundancy.
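The coordination layer described above is essentially a router in front of several specialists. The sketch below shows one simple routing scheme; the keyword scoring and the stub "models" are placeholders standing in for your fine-tuned 7B models, which you would call over the Tune Studio API instead.

```python
# Each "specialist" stands in for a fine-tuned small model; here they are
# plain functions so the routing logic is runnable without any API calls.
def math_specialist(q: str) -> str:
    return f"[math model] answering: {q}"

def code_specialist(q: str) -> str:
    return f"[code model] answering: {q}"

def general_specialist(q: str) -> str:
    return f"[general model] answering: {q}"

# Illustrative domain keywords; a stronger router could itself be a small
# classifier model rather than a keyword table.
SPECIALISTS = {
    "math": (math_specialist, {"sum", "integral", "equation", "solve"}),
    "code": (code_specialist, {"python", "function", "bug", "compile"}),
}

def route(query: str) -> str:
    # Send the query to the specialist whose domain keywords overlap most;
    # fall back to the generalist when nothing matches.
    words = set(query.lower().split())
    best, best_score = general_specialist, 0
    for model, keywords in SPECIALISTS.values():
        score = len(words & keywords)
        if score > best_score:
            best, best_score = model, score
    return best(query)

print(route("Please solve this equation"))
```

Distributing queries this way is what conserves compute: only one small model runs per request, while the ensemble as a whole covers the breadth of a much larger model.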

Scoring Highlights:

  • Cognitive Synergy: Seamless collaboration between specialized models for optimal task execution.
  • Intellectual Versatility: Demonstrating proficiency across diverse problem domains.
  • Neural Efficiency: Smart resource allocation and management across the ensemble.

🎯 Challenge 3: PrecisionTuner

Mission: Your task is to develop a system that fine-tunes LLMs using Tune Studio with maximum efficiency by selecting only the most relevant and impactful data. The goal is to reduce the volume of training data while maintaining—or even improving—the model’s performance.

Instead of training on vast datasets, your system should intelligently identify and select data subsets that have the greatest influence on the model’s ability to perform. This requires careful data selection techniques, such as prioritizing diverse, high-quality examples or key edge cases, to create a more efficient “data diet” for the model.

The challenge is to demonstrate that by refining your data selection process, you can achieve top-tier model performance with significantly less input, proving that in AI, it’s not about how much data you use, but how effectively you use it.
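One concrete way to build such a "data diet" is greedy diversity selection: repeatedly keep the example least similar to anything already chosen, so near-duplicates are pruned first. The sketch below uses a toy bag-of-words similarity; a real pipeline would measure similarity with learned embeddings and might also weight by example quality or difficulty.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words representation; swap in real embeddings in practice.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_diverse(examples: list[str], k: int) -> list[str]:
    # Greedy farthest-point selection: grow the training subset with the
    # candidate whose maximum similarity to the chosen set is smallest.
    vecs = [embed(e) for e in examples]
    chosen = [0]
    while len(chosen) < k:
        best, best_sim = None, 2.0
        for i in range(len(examples)):
            if i in chosen:
                continue
            sim = max(cosine(vecs[i], vecs[j]) for j in chosen)
            if sim < best_sim:
                best, best_sim = i, sim
        chosen.append(best)
    return [examples[i] for i in chosen]
```

Running your fine-tune on `select_diverse(dataset, k)` for a small `k` is exactly the kind of less-data-same-performance experiment this challenge asks you to demonstrate.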

Scoring Highlights:

  • Data Marksmanship: Pinpoint accuracy in selecting the most relevant and impactful training data.
  • Performance Amplification: Achieving significant model improvements with minimal data input.
  • Adaptive Precision: Demonstrating effectiveness across various data types and model architectures.

🧰 Workshop: Mastering LLM Development with Tune Studio

In this 45-minute workshop, we’ll dive into building, fine-tuning, and deploying cutting-edge LLM applications. This is your chance to gain insider tips, learn best practices, and get ready to take on the Tune AI challenge!

Agenda:

  1. Introduction to Tune Studio and its features
  2. Setting up your environment
  3. Building a basic LLM app
  4. Creating your own Assistant in Tune Studio
  5. Fine-tuning models for real-world use cases
  6. Deployment and scaling best practices
  7. Q&A

What You’ll Learn:

  • How to navigate Tune Studio and its advanced features
  • Practical techniques for effective LLM fine-tuning
  • Strategies to optimize and scale your applications

📚 Resources:

We want you to succeed, so we’ve put together the following resources to help you get started:

  1. Tune Studio Documentation: Everything you need to know about using our platform.
  2. Sample Projects: Explore example LLM projects built using Tune Studio.
  3. LLM Fine-Tuning Guide: A step-by-step tutorial for fine-tuning models.
  4. Hackathon Support on Discord: Join our community for real-time support and collaboration.

Whether you’re a seasoned developer or a first-timer, Tune AI is here to help you every step of the way. Let’s create something that pushes the boundaries of what’s possible with LLMs!

Check out the PennApps event page.