Prerequisites
- Experience with JavaScript
- Experience with Music Blocks
Description
Music Blocks has a feature to detect the color of pixels drawn within the program, but it cannot detect the color of pixels from uploaded images or from a webcam feed. By adding a feature to detect color from both uploaded images and a live webcam stream, users would be able to implement Lego music notation for the blind and similarly interactive programs.
The goal of the project is to extend our existing turtle/mouse glyph movement and limited color-detection tools to sense color from uploaded images, as well as from the real-time feed of a webcam. Upon successful implementation, the turtle/mouse glyph will be able to detect the color of pixels underneath it, regardless of whether those pixels were drawn by the turtle/mouse itself, part of an uploaded image stamped to the canvas, or part of a live webcam feed displayed in Music Blocks. One test of success is to run our Lego music notation for the blind project with a live feed. The result should be able to play back and record the abstract brick notation based on its contrasting colors.
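The core sampling step can be sketched in a few lines. Music Blocks itself is JavaScript and would read pixels via the canvas getImageData API, but the logic is the same in any language; this Python sketch assumes a row-major RGBA buffer laid out the way getImageData returns it (all names here are illustrative, not from the codebase):

```python
def pixel_color(buffer: bytes, width: int, x: int, y: int) -> tuple:
    """Return (r, g, b, a) for the pixel at (x, y) in a row-major RGBA buffer."""
    i = (y * width + x) * 4  # 4 bytes per pixel: R, G, B, A
    return tuple(buffer[i:i + 4])

# A 2x2 test image: red, green on the top row; blue, white below.
img = bytes([255, 0, 0, 255,   0, 255, 0, 255,
             0, 0, 255, 255,   255, 255, 255, 255])
print(pixel_color(img, 2, 1, 0))  # → (0, 255, 0, 255), i.e. green
```

Because the same buffer format can be produced from the drawing canvas, a stamped image, or a webcam frame drawn to an offscreen canvas, one sampling routine covers all three sources.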
Project Length
175 hours
Difficulty
Medium
Coding Mentors
Walter Bender
Assisting Mentors
Devin Ulibarri
Prerequisites
- Experience with Python
- Experience with Music Blocks
- Experience with LLMs/Chatbots
- Experience with AWS
- Experience with FastAPI
Description
The idea is to enhance Music Blocks with a chatbot and project debugger, filling the gap between users' creative ideas and their ability to troubleshoot problems or fully utilize the platform's features.
The chatbot could provide real-time assistance—answering questions, explaining features, and offering creative suggestions—while a project debugger helps users quickly identify and resolve issues in their projects or blocks. This would make the platform more accessible, especially for beginners, while streamlining debugging and experimentation for advanced users.
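One kind of check a project debugger might perform is validating block connections. The sketch below assumes a simplified project format (a list of blocks, each with a "connections" list of block indices); the real Music Blocks JSON schema differs in detail, so this is only an illustration of the idea:

```python
def find_dangling(blocks):
    """Return (block_index, bad_target) pairs for connections that point
    at block indices not present in the project."""
    valid = set(range(len(blocks)))
    problems = []
    for i, block in enumerate(blocks):
        for target in block.get("connections", []):
            if target is not None and target not in valid:
                problems.append((i, target))
    return problems

# Hypothetical project: the "note" block references a nonexistent block 7.
project = [
    {"name": "start", "connections": [1]},
    {"name": "note",  "connections": [0, 7]},
]
print(find_dangling(project))  # → [(1, 7)]
```

Rule-based checks like this could run locally and feed their findings to the chatbot, which then explains the problem in plain language.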
Project Length
350 hours
Difficulty
Hard
Coding Mentors
Walter Bender
Assisting Mentors
Devin Ulibarri
Prerequisites
- Experience with Python
- Experience with Music Blocks
- Experience with LLMs/Chatbots
- Experience with fine-tuning methods and RAG
Description
Develop and train an open-source Large Language Model to generate Music Blocks project code, enabling the integration of generated code snippets into the lesson plan generator. By implementing a model abstraction layer, the system will remain flexible and model-agnostic, allowing seamless integration of different AI models while maintaining consistent code generation capabilities. This approach ensures long-term sustainability and adaptability as AI technology evolves, while keeping the core functionality of Music Blocks accessible and extensible.
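The model abstraction layer could take the shape of a single interface that every backend implements (class and method names below are hypothetical, not from the Music Blocks codebase):

```python
from abc import ABC, abstractmethod

class CodeGenModel(ABC):
    """Common interface for any LLM backend that emits project code."""

    @abstractmethod
    def generate(self, prompt: str) -> str:
        ...

class EchoModel(CodeGenModel):
    """Stand-in backend used here only to show the interface in action."""

    def generate(self, prompt: str) -> str:
        return f"// generated for: {prompt}"

def build_project(model: CodeGenModel, idea: str) -> str:
    # Caller code is model-agnostic: swapping backends needs no changes here.
    return model.generate(idea)

print(build_project(EchoModel(), "C major scale"))
# → // generated for: C major scale
```

Because callers depend only on the abstract interface, a fine-tuned open-source model, a hosted API, or a local test stub can be swapped in without touching the rest of the system.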
Specifically, we would be working toward accomplishing the following:
- Train an open-source LLM to generate code for new Music Blocks projects.
- Implement a model abstraction layer to make the AI system model-agnostic and robust.
- Expand the database with more lesson plans and project data to improve the relevance of project-related responses.
- Implement Approximate Nearest Neighbor (ANN) algorithms for faster retrieval.
- Develop FastAPI endpoints to deploy the model.
- Work on techniques to minimize hallucination.
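As a sketch of the ANN retrieval step, here is approximate nearest-neighbor search via random-hyperplane locality-sensitive hashing, one standard ANN technique (the project could equally adopt a library such as FAISS or Annoy). Only vectors hashing to the same bucket are scored, so lookups avoid a full scan of the database:

```python
import random

random.seed(0)

DIM, NUM_PLANES = 8, 4
planes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(NUM_PLANES)]

def lsh_key(vec):
    # One bit per hyperplane: which side of the plane the vector falls on.
    return tuple(int(sum(p * v for p, v in zip(plane, vec)) > 0)
                 for plane in planes)

index = {}

def add(vec_id, vec):
    index.setdefault(lsh_key(vec), []).append((vec_id, vec))

def query(vec):
    # Score only this bucket's candidates instead of every stored vector.
    candidates = index.get(lsh_key(vec), [])
    def dist(item):
        return sum((a - b) ** 2 for a, b in zip(item[1], vec))
    return min(candidates, key=dist)[0] if candidates else None

add("scale", [1.0] * 8)
add("chord", [-1.0] * 8)
print(query([1.0] * 8))  # → scale
```

In the real system the vectors would be embeddings of lesson plans and projects, and recall can be tuned by using multiple hash tables or more hyperplanes.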
Project Length
350 hours
Difficulty
Hard
Coding Mentors
Walter Bender
Assisting Mentors
Devin Ulibarri