GSoC 2025 Ideas

Project Ideas


Color sensor for Music Blocks for photos and real-time video

Prerequisites

  • Experience with JavaScript
  • Experience with Music Blocks

Description
Music Blocks has a feature to detect the color of pixels generated from drawing within the program, but it cannot detect the color of pixels from images that are uploaded or captured from a webcam. By adding a feature to detect color from both uploaded images and a live webcam stream, users would be able to implement Lego music notation for the blind and similar interactive programs.

The goal of the project is to extend our existing tools of turtle/mouse glyph movement and limited color detection to sense color from uploaded images, as well as from the real-time feed of a webcam. Upon successful implementation, the turtle/mouse glyph will be able to detect the color of the pixels underneath it, regardless of whether those pixels were drawn by the turtle/mouse itself, are part of an uploaded image stamped to the canvas, or are part of a live webcam video feed into Music Blocks. One test of success is to run our Lego music notation for the blind project with a live feed. The result should be able to play back and record the abstract brick notation based on its contrasting colors.
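The detection step itself is small once pixel data is in hand. Below is a hedged sketch of the core logic, with a hypothetical palette for the Lego-notation use case; in the browser, the pixel buffer would come from drawing the image or video frame to a canvas and calling `ctx.getImageData`:

```javascript
// Sketch only: read the pixel under the glyph and classify it against a
// small palette, as a Lego-notation reader might. The palette is illustrative.
function pixelAt(imageData, x, y) {
  // imageData mimics the browser's ImageData: { width, data: RGBA byte array }
  const i = (y * imageData.width + x) * 4;
  const d = imageData.data;
  return [d[i], d[i + 1], d[i + 2]]; // ignore the alpha channel
}

function nearestColorName(rgb, palette) {
  let best = null;
  let bestDist = Infinity;
  for (const [name, ref] of Object.entries(palette)) {
    // Squared Euclidean distance in RGB space
    const dist = rgb.reduce((sum, c, k) => sum + (c - ref[k]) ** 2, 0);
    if (dist < bestDist) { bestDist = dist; best = name; }
  }
  return best;
}
```

For a live feed, each frame would typically be copied to an offscreen canvas with `ctx.drawImage(video, 0, 0)` before sampling.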

Project Length

175 hours

Difficulty

Medium

Coding Mentors
Walter Bender
Assisting Mentors
Devin Ulibarri


Interactive AI-powered Chatbot and Debugger for Music Blocks

Prerequisites

  • Experience with Python
  • Experience with Music Blocks
  • Experience with LLMs/Chatbots
  • Experience with AWS
  • Experience with FastAPI

Description
The idea is to enhance Music Blocks with an AI-powered chatbot and project debugger. This feature aims to bridge the gap between users' creative ideas and their ability to troubleshoot or fully utilize the platform's features. The AI chatbot would provide real-time assistance by answering questions, explaining features, and offering creative suggestions, while the project debugger would help users quickly identify and resolve issues in their projects or block connections. This enhancement would make the platform more accessible for beginners while streamlining the debugging and experimentation process for advanced users.

Specifically, we aim to achieve the following:

  • Train an open-source LLM to understand Music Blocks projects and develop the ability to debug them effectively.
  • Implement robust Retrieval-Augmented Generation (RAG) for the LLM model to enhance contextual understanding.
  • Integrate the AI chatbot and debugger into the Music Blocks platform.
  • Develop FastAPI endpoints to deploy the model efficiently.
  • Work on techniques to minimize hallucinations and improve accuracy.
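The RAG step in the second bullet boils down to ranking documentation chunks by similarity to the user's question before they are handed to the LLM. A minimal sketch, assuming precomputed embeddings (the chunk shape and field names are illustrative):

```javascript
// Illustrative retrieval step for a RAG pipeline: rank documentation chunks
// by cosine similarity to the query embedding. Embeddings here are toy vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] ** 2;
    nb += b[i] ** 2;
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function topK(queryVec, chunks, k) {
  return chunks
    .map((c) => ({ ...c, score: cosine(queryVec, c.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

The selected chunks would then be concatenated into the prompt sent to the model, which is the main lever for grounding answers and reducing hallucination.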

Project Length

350 hours

Difficulty

Hard

Coding Mentors
Walter Bender
Assisting Mentors
Devin Ulibarri


Improve synth and sample features in Music Blocks

Prerequisites

  • Experience with JavaScript
  • Experience with Music Blocks
  • Experience with Tone.JS

Description
Users have two main methods within Music Blocks to play with sound: synths and samples. For our synths, we use tone.js. For samples, we use .wav binaries and transpose the sound to different pitches. While these features work "well enough", there is still more that can be done to make them useful. For this project, a contributor would work closely with their mentors to 1) update the sampler widget, 2) port a list of free/libre/open samples into Music Blocks, and 3) add to the Set Instrument feature and Sampler Widget the ability to assign multiple samples to the same instrument with criteria (e.g. high and low, short and long) for a more natural sound.

Updating the sampler widget will involve updating tone.js to its current version, debugging any issues that updates may cause, and making improvements to the UI/UX of the widget itself.

Porting samples into Music Blocks will require following the directions specified in the Music Blocks documentation to convert a curated list of samples. After completing this, the user-facing menus showing the samples will need to be updated and organized based on instrument type. There is some room to get creative with the presentation of the instruments, perhaps adding icons for each instrument.

The final part of the project is perhaps the most challenging. It will require adding additional functionality so that a user can either upload or record multiple samples of an instrument or voice to be assigned to a custom instrument in Music Blocks. Doing this will make the overall tone of the instruments more persuasive. For example, if the Music Blocks project has short, staccato sounds, the playback can use the short sample created by a recorded instrument.
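This third task can be thought of as a selection problem: given several samples of one instrument tagged with criteria, pick the best match for the requested note. A hedged sketch with an assumed tagging scheme follows; note that Tone.js's `Tone.Sampler` already interpolates pitch between the notes you provide, so this adds an articulation dimension on top:

```javascript
// Hypothetical selection logic for multi-sample instruments: given several
// recorded samples tagged with a pitch and an articulation length, pick the
// one closest to the requested note. Field names and costs are illustrative.
function pickSample(samples, targetMidi, durationSec) {
  const wantShort = durationSec < 0.25; // staccato threshold (assumption)
  return samples.reduce((best, s) => {
    // Penalize pitch distance, plus a fixed penalty for the wrong articulation
    const cost = Math.abs(s.midi - targetMidi) + (s.short === wantShort ? 0 : 6);
    return !best || cost < best.cost ? { ...s, cost } : best;
  }, null);
}
```

With a scheme like this, a staccato passage would automatically play back through the short recorded sample, as described above.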

Project Length

350 hours

Difficulty

Hard

Coding Mentors
Walter Bender
Assisting Mentors
Devin Ulibarri


Generative AI Instrument Sample Generation for Music Blocks

Prerequisites

  • Experience with JavaScript and Python
  • Experience with Music Blocks
  • Experience with Tone.JS
  • Experience with LLMs/neural-networks

Description
For this project, a contributor would work closely with their mentors to create an API to a generative AI model that generates samples based on a user prompt.

In order to give users (nearly) limitless options for samples, we are adding to the project's scope a gen-AI-enabled sample generator. A user should be able to prompt for a sound font, such as "heavy metal guitar with deep bass" or "soothing clarinet with a crisp attack", and get a result that they can use in their project's code. A contributor will need to extend our sample widget (which currently records audio) to accept a user prompt, create an API to call an LLM/neural-network backend, and test/tweak the gen-AI backend to create an appropriate sample for the user. The results of this part of the project need not be "perfect" by the end of the summer; a solid proof of concept will be sufficient.

In particular, our focus will be on achieving the following objectives:

  • Train an open-source LLM using music-heavy project data to generate sample code.
  • Extend the sample widget to support user prompts for AI-generated sound samples.
  • Develop an LLM-based generative AI backend to produce high-quality, relevant sound samples.
  • Build a high-performance API using FastAPI to streamline interactions between the widget and the LLM.
  • Work on techniques to minimize hallucinations and improve contextual accuracy.
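The widget-to-backend handoff might look like the following sketch, which only builds the request object; the endpoint path and payload fields are assumptions for illustration, not an existing Music Blocks or FastAPI route:

```javascript
// Hypothetical request builder for the extended sample widget. Everything
// here (route, field names, defaults) is an assumption, not a real API.
function buildSampleRequest(prompt, { durationSec = 2, sampleRate = 44100 } = {}) {
  if (!prompt || prompt.trim().length === 0) {
    throw new Error('A text prompt is required');
  }
  return {
    url: '/api/generate-sample', // hypothetical backend route
    body: { prompt: prompt.trim(), durationSec, sampleRate, format: 'wav' },
  };
}
```

In the widget, the returned `url` and `body` would be passed to `fetch`, and the resulting audio loaded the same way a recorded sample is today.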

Project Length

350 hours

Difficulty

Hard

Coding Mentors
Walter Bender
Assisting Mentors
Devin Ulibarri


AI Code generation for lesson plans and model abstraction layer

Prerequisites

  • Experience with Python
  • Experience with Music Blocks
  • Experience with LLMs/Chatbots
  • Experience with Fine tuning methods and RAG.

Description
Develop and train an open source Large Language Model to generate Music Blocks project code, enabling integration of code snippets to the lesson plan generator. By implementing a model abstraction layer, the system will remain flexible and model-agnostic, allowing seamless integration of different AI models while maintaining consistent code generation capabilities. This approach ensures long-term sustainability and adaptability as AI technology evolves, while keeping the core functionality of Music Blocks accessible and extensible.

Specifically, we would be working toward accomplishing the following:

  • Train an open-source LLM to generate code for new Music Blocks projects.
  • Implement a model abstraction layer to make the AI system model-agnostic and robust.
  • Increase the database size by including more lesson plans and project data to get better project-related responses.
  • Implement Approximate Nearest Neighbor (ANN) algorithms for faster retrieval.
  • Develop FastAPI endpoints to deploy the model.
  • Work on techniques to minimize hallucination.
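The model abstraction layer in the second bullet can be as simple as a registry of adapters behind one interface, so that swapping models never touches calling code. A minimal sketch (the adapter names and the synchronous `generate` signature are illustrative; real adapters would be asynchronous):

```javascript
// Minimal sketch of a model abstraction layer: callers depend on one
// interface, and concrete model adapters are registered behind it.
class ModelRegistry {
  constructor() {
    this.adapters = new Map();
  }
  register(name, adapter) {
    this.adapters.set(name, adapter);
  }
  generate(name, prompt) {
    const adapter = this.adapters.get(name);
    if (!adapter) throw new Error(`Unknown model: ${name}`);
    return adapter.generate(prompt); // every adapter exposes generate(prompt)
  }
}
```

Swapping in a different LLM then means registering a new adapter, leaving the lesson-plan generator itself unchanged.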

Project Length

350 hours

Difficulty

Hard

Coding Mentors
Walter Bender
Assisting Mentors
Devin Ulibarri


Music Blocks 4 Program Engine

Difficulty: High (★ ★ ★ ★ ☆)

Project Length: 350 hours

Tech Stack

TypeScript 5, Vitest, Vite

Prerequisites

  • Proficiency in TypeScript and Object-Oriented Programming
  • Experience with writing unit tests using Jest/Vitest
  • Good understanding of the JavaScript Event Loop
  • Understanding of Abstract Syntax Trees (AST)

Description

Music Blocks is a programming platform, and at its core is the execution engine responsible for running Music Blocks programs. This project will focus on building the execution engine and the necessary components to represent and execute Music Blocks programs in-memory.

The project will begin by refining the Object-Oriented program syntax constructs. These constructs will encapsulate the logic for each syntax element and will serve as the foundation for developing a framework to represent Abstract Syntax Trees (ASTs) for Music Blocks programs. Additional utilities will be built to manage instances of these syntax constructs, thus completing the static pieces.

Next, several components will need to be developed to execute the program ASTs, forming the dynamic pieces of the project. Key components include:

  • Parser: Responsible for parsing the nodes of the ASTs in an inorder traversal.
  • State Manager: Manages the program state at any given point during execution.
  • Interpreter: Executes individual expressions and instructions.
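The interplay of these components can be illustrated with a toy tree-walking interpreter: the AST is traversed and each node kind is dispatched to a handler that updates a shared state object. The node kinds below are illustrative, not the real musicblocks-v4-lib constructs:

```javascript
// Toy sketch of the parser/interpreter split: walk a program AST and
// dispatch each node to a handler that updates shared state.
function run(node, state) {
  switch (node.kind) {
    case 'sequence':
      node.children.forEach((child) => run(child, state));
      break;
    case 'repeat':
      for (let i = 0; i < node.times; i++) run(node.body, state);
      break;
    case 'playNote':
      state.notes.push(node.pitch); // a real engine would schedule audio here
      break;
    default:
      throw new Error(`Unknown node kind: ${node.kind}`);
  }
  return state;
}
```

The real engine is considerably harder than this sketch because, as noted below, instructions may execute over a time duration and programs are multi-threaded.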

It’s important to note that Music Blocks programs combine both imperative and declarative constructs. Additionally, some instructions in the programs execute over a time duration, and the programs themselves are multi-threaded. These threads must run concurrently while ensuring proper synchronization.

We currently have a work-in-progress on github.com/sugarlabs/musicblocks-v4-lib, but some design decisions need to be revisited. This project will involve understanding and refining these design choices and completing the remaining components.

The overall objectives are as follows:

  • Collaborate with project maintainers to define all expected functionalities and behaviors, and write a technical specification.

  • Collaborate with project maintainers to develop a concrete execution algorithm, addressing time-based instructions, concurrency, and synchronization.

  • Refine and complete the static components responsible for program representation.

  • Refine and complete the dynamic components responsible for program execution.

  • Write comprehensive unit tests for all components.

  • Focus on optimizing runtime performance.

Mentors
Anindya Kundu

Assisting Mentors
Walter Bender
Devin Ulibarri


Music Blocks 4 Masonry Module

Difficulty: Hard (★ ★ ★ ★ ★)

Tech Stack

TypeScript 5, React 18, Sass, Storybook, Vitest, Vite

Prerequisites

  • Proficiency in TypeScript
  • Proficiency in JavaScript DOM API
  • Experience with React Functional Components and Hooks
  • Familiarity with Storybook and Vitest
  • Familiarity with SVG paths and groups

Description

Music Blocks programs are designed to be built interactively by connecting program constructs, which are visually represented as snap-together, Lego-like graphical bricks. The goal is to develop a module for Music Blocks (v4) that enables the creation of Music Blocks programs.

The project will begin with the development of a framework for generating individual brick components that represent various program syntax constructs. This will be followed by the creation of utilities to represent any program structure through visual connections between the bricks. Next, a component will be built to display all available program bricks, organized into categories, sections, and groups. Finally, a workspace will be developed where users can drag-and-drop, as well as connect and disconnect the program bricks to create their programs.

To draw the bricks, we will use SVG paths, so a solid understanding of SVG path commands is crucial. The development will follow an Object-Oriented Programming approach in TypeScript, with the rendering and management of visual states handled using React Functional Components. A strong understanding of both TypeScript and React is expected.
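To give a feel for the path-generation utilities, here is a hedged sketch of building one brick outline with a top notch out of SVG path commands; the dimensions and notch geometry are arbitrary placeholders, not the actual brick design:

```javascript
// Illustrative SVG path builder for a rectangular brick outline with a
// top notch, in the spirit of snap-together bricks. Dimensions are arbitrary.
function brickPath(width, height, notchWidth) {
  // Trace clockwise from the top-left corner; the notch is a small inward
  // step on the top edge so a brick above can "snap" into it.
  return [
    'M 0 0',
    `h ${(width - notchWidth) / 2}`,
    `v 4 h ${notchWidth} v -4`, // the notch
    `h ${(width - notchWidth) / 2}`,
    `v ${height}`,
    `h ${-width}`,
    'Z',
  ].join(' ');
}
```

In the real module, a configuration object (argument count, connector positions, label size) would drive a generator like this, and the resulting `d` string would be handed to a React-rendered `<path>` element.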

This project began last year, and you will be expected to build upon the progress made and complete the module.

The overall objectives are as follows:

  • Collaborate with project maintainers to create a design document outlining functional requirements, UI considerations, both high-level and low-level designs, and a technical specification.

  • Develop utilities to generate SVG paths for the bricks based on configurations.

  • Build utilities to represent and manipulate Music Blocks programs in-memory.

  • Develop the four individual submodules outlined above.

  • Write Storybook stories to document and showcase UI components.

  • Implement unit tests for functions and classes using Vitest.

  • Focus on optimizing processing performance.

  • Export a minimal API for integration with other parts of the application.

Mentors
Anindya Kundu

Assisting Mentors
Walter Bender
Devin Ulibarri


Add an AI-assistant to the Write Activity

Prerequisites

  • Experience with Python
  • Experience with Sugar activities
  • Experience with LLMs/Chatbots

Description

Sugar pioneered peer editing in its Write activity. However, the Write Activity has never had any serious support for grammar correction (just spell check), nor any of the more recent developments around AI-assisted writing. The goal of this project is to add AI assistance to the writing process: both in the form of providing feedback on what has been written and making suggestions as to what might be written.

The challenge will be both in terms of workflow integration and UX.

Project Length

350 hours

Difficulty

High

Coding Mentors
Walter Bender
Ibiam Chihurumnaya

Assisting Mentors


Refactor the Infoslicer Activity to generate plain-language summaries

Prerequisites

  • Experience with Python
  • Experience with Sugar activities
  • Experience with LLMs/Chatbots

Description

The Infoslicer Activity is designed to help teachers extract content from the Wikipedia in order to create lesson plans. This is currently a manual, extractive process. It is well suited to generative AI. The goal would be to have a teacher type in a theme for a lesson and have the AI create a simple lesson plan, which the teacher can then edit.

The biggest challenge to summarization using generative AI is hallucinations. A work-around for this is to include a validation step that surfaces evidence (or lack of evidence) for each assertion in the lesson plan. This will introduce some workflow and UX challenges.
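As a rough illustration of that validation step, the sketch below flags lesson-plan sentences that have little lexical overlap with the source article; a real implementation would use embeddings or an entailment model rather than word overlap, so treat this purely as a shape for the workflow:

```javascript
// Naive sketch of assertion validation: flag sentences with little lexical
// overlap with the source article as "needs evidence". Word overlap is a
// stand-in for real evidence checking (embeddings, entailment, citations).
function needsEvidence(sentence, sourceText, threshold = 0.5) {
  const words = (s) => new Set(s.toLowerCase().match(/[a-z]+/g) || []);
  const sent = words(sentence);
  const src = words(sourceText);
  let hits = 0;
  for (const w of sent) if (src.has(w)) hits++;
  return sent.size === 0 || hits / sent.size < threshold;
}
```

Flagged sentences would then be surfaced to the teacher for editing, rather than silently included in the lesson plan.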

Project Length

350 hours

Difficulty

High

Coding Mentors
Walter Bender
Ibiam Chihurumnaya

Assisting Mentors


Refactor the chatbot in the Speak Activity to use gen-AI

Prerequisites

  • Experience with Python
  • Experience with Sugar activities
  • Experience with LLMs/Chatbots

Description

The Speak Activity is one of the most popular Sugar activities. It allows someone just beginning to familiarize themselves with reading to interact with synthetic speech. It has both chat and chatbot capabilities, so that learners can share what they type with others, often using invented spelling. It would be a nice improvement if there were a chatbot option that allowed a learner to have a conversation with a more modern, LLM-based chatbot. This would contextualize the learner's experience with writing -- a tool for both self-expression and communication.

The project would entail both enabling the LLM chatbot and doing some tuning in order to accommodate invented spelling. Finally, it will be important to create the proper persona, in this case, an adult explaining to a young child.
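The persona tuning might begin with system-prompt assembly along these lines; the wording and the age parameter are purely illustrative assumptions about how the persona could be expressed:

```javascript
// Hypothetical prompt assembly for the Speak chatbot persona: a patient
// adult speaking with a young child, tolerant of invented spelling.
function buildSystemPrompt(learnerAge) {
  return [
    'You are a friendly adult talking with a young child who is learning to read.',
    'The child may use invented spelling; infer the intended words and never criticize spelling.',
    `Use short sentences appropriate for a ${learnerAge}-year-old.`,
  ].join(' ');
}
```

The assembled prompt would be sent with each conversation to whichever LLM backend the project settles on.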

Project Length

175 hours

Difficulty

Medium

Coding Mentors
Ibiam Chihurumnaya

Assisting Mentors
Walter Bender


GTK4 Exploration

Prerequisites

  • Experience with C
  • Experience with Python
  • Experience with GTK
  • Good understanding of Sugar Core architecture

Project length
350 hours

Difficulty:
High

Description

Sugar 0.120 runs on GTK3 and needs to be ported to GTK4. We need to port Sugar and its core activities to GTK4 before GTK3 reaches its end of life.

Project Task Checklist

  • Migrate minimal sugar-toolkit-gtk3 components to support a Hello World activity, in particular the activity and graphics classes.
  • Migrate the Hello World activity.
  • Document the migration strategy, extending any existing upstream GTK3-to-GTK4 porting documentation.
  • Migrate the remaining toolkit components.
  • Extend Hello World to use the remaining toolkit components, and rename it as a Toolkit Test activity.
  • Migrate Sugar.
  • Migrate the Fructose activity set, as time permits.

Coding Mentors
Ibiam Chihurumnaya


JS internationalization

Prerequisites

  • Experience with JavaScript

Project length
175 hours

Difficulty:
Medium

Description

Our JavaScript activities use a somewhat antiquated mechanism for internationalization, the webL10n.js library. It does not support plurals or any language-specific formatting. i18next looks like a well-maintained and promising alternative.

This project involves: (a) researching the state of the art of language localization for JavaScript, keeping in mind that we are currently maintaining PO files; (b) making a recommendation as to the framework; (c) proposing a path to implementation; and (d) implementing the solution in Music Blocks. (Other JS projects can follow along.)
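To make the plural gap concrete, here is a minimal sketch of plural-aware lookup in the style of i18next's `_one`/`_other` key suffixes; it hard-codes the English plural rule, whereas real libraries select among CLDR plural categories per language:

```javascript
// Minimal illustration of plural-aware lookup, the kind of behavior
// webL10n lacks. English-only rule; real i18n libraries use CLDR
// plural categories (zero/one/two/few/many/other) per language.
const catalog = {
  note_one: '{{count}} note',
  note_other: '{{count}} notes',
};

function t(key, { count }) {
  const suffix = count === 1 ? 'one' : 'other'; // i18next-style key suffixes
  const template = catalog[`${key}_${suffix}`];
  return template.replace('{{count}}', String(count));
}
```

Part of the research step is checking how well a framework like i18next can round-trip through the PO files we already maintain for translators.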

Project Task Checklist

  • research
  • recommendation
  • plan
  • coding

Coding Mentors
Walter Bender


Sugarizer Human Activity pack

Prerequisites

  • Experience with JavaScript/HTML5 in VanillaJS or with Vue.js
  • Experience with three.js 3D framework
  • Knowledge of 3D tools, capacity to create/combine 3D assets

Project length
175 hours

Difficulty: ★ ★ ☆ (medium)

Description

The objective of this project is to:

  • Finalize the 3D Human Body activity
  • Create a new activity named Stickman Animation


3D Human Body activity
The Human Body activity has been started on https://github.com/llaske/sugarizer/tree/feature/humanbody.

Tasks to do:

  • Identify the missing assets for the body layer and the organs layer (only the skeleton layer exists today)
  • Integrate these layers into the activity, along with a way to switch between layers
  • Implement the shared mode for the doctor mode
  • Review the UI for the toolbar and popups
  • Localize the activity
  • Suggest other improvements

Stickman Animation activity
Create a new activity to allow the creation of animated sequences of a stickman.

The idea of the activity is a "keyframe animation" tool that lets you pose and program a stick figure to rotate, twist, turn, tumble, and dance. The new activity can be integrated into many school subject areas such as creative writing, art, drama, geometry, and computer programming. Students can make figures that relate to a subject the class is studying and share them with peers using the collaboration feature. It helps children develop spatial and analytical thinking skills and express ideas that they might not have words for yet.

Features expected:

  • Put the stickman figure in different poses by moving dots
  • Create and order frames with the different poses created
  • Play/pause the whole frame sequence
  • Change the playback speed
  • Share and collaborate
  • Export as a video
  • Access a list of existing fun templates
  • Import a photo of a human body to create a stickman figure in the same pose


Mentor
Lionel Laské


Administrative notes

Above is a list of ideas we've planned for GSoC 2025 projects. If you have any ideas that could be useful to us but are not in the list, we'd love to hear from you. You need not be a potential student or a mentor to suggest ideas.

Criteria for Ideas

  1. Does it fill an empty pedagogical niche in the activity set for Sugar or Sugarizer?
  2. Does it increase the quality of our software products (Sugar, activities, Music Blocks, or Sugarizer)?
  3. Does it avoid project infrastructure, e.g. not another app store, web site, or developer landing page?
  4. Do we have a developer now who would be willing and able to do it if a student were not available, and who can promise to do it if a student is not selected? (These are shown as a coding mentor.)

Coding Mentors

For each idea, we must have offers from one or more coding mentors willing and able to assist students with coding questions.

Requirements for a coding mentor are a demonstrated coding ability in the form of contributions of code to Sugar Labs.

Mentors for a project will be assigned after proposals are received.

Assisting Mentors

For each idea, we may have offers from mentors who do not code, willing to assist students in various other ways, such as gathering requirements, visual design, testing, and deployment; these are shown as an assisting mentor.

The only requirement for an assisting mentor is knowledge of the project.

Mentors for a project will be assigned after proposals are received.

Everyone Else

Everyone else in Sugar Labs may also be involved with these projects, through mailing lists, Wiki, and GitHub.

The difference between a mentor and everyone else is that a mentor is obliged to respond when a student has a question, even if the answer is "I don't know." When a mentor receives a question for which the best forum is everyone else, they should respectfully redirect the student to ask everyone else. See Be flexible and When you are unsure, ask for help in our Code of Conduct.

Suggested Issues

For some ideas, there is a list of 'Suggested issues to work on'. These may help you to get familiar with the project. The more you work on these issues, the more experienced you will be for the project. However, this is not a strict list. You should try and explore other issues as well.