Download video(s) (first iteration) #5
Merged
Conversation
* Restructured files; Added parser placeholder
* More restructuring
* Added basic parser for hydrating template strings
* Improved docs
* More docs
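One of the commits above adds a "basic parser for hydrating template strings", used for building video filepaths from metadata. The actual implementation is Elixir and isn't shown in this thread; purely as an illustration of what template hydration means here, a sketch might look like the following. The `{{ key }}` placeholder syntax and the `hydrate` name are assumptions, not the project's real API:

```python
import re

def hydrate(template: str, values: dict) -> str:
    """Replace {{ key }} placeholders in a template string with values
    from a dict, leaving unknown placeholders untouched."""
    def substitute(match: re.Match) -> str:
        key = match.group(1)
        return str(values.get(key, match.group(0)))

    return re.sub(r"\{\{\s*(\w+)\s*\}\}", substitute, template)
```

For example, `hydrate("/videos/{{ channel }}/{{ title }}.mp4", {"channel": "acme", "title": "ep1"})` yields `/videos/acme/ep1.mp4`.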
kieraneglin changed the title from "Download a video (first iteration)" to "Download video(s) (first iteration)" on Jan 23, 2024
* [WIP] Added basic video download method
* [WIP] Very-WIP first steps at parsing options and downloading
* Made my options safe by default and removed special safe versions
* Ran html generator for mediaprofile model - leaving as-is for now
* Addressed a bunch of TODO comments
* [WIP] Working on fetching channel metadata in yt-dlp backend
* Finished first draft of methods to do with querying channels
* Renamed CommandRunnerMock to have a more descriptive name
* Ran the phx generator for the channel model
* Renamed Downloader namespace to MediaClient
* [WIP] saving before attempting LiveView
* LiveView did not work out but here's a working controller how about
* Ran a MediaItem generator; Reformatted to my liking
* [WIP] added basic index function
* Setup Oban
* Added basic Oban job for indexing
* Added in workers for indexing; hooked them into record creation flow
* Added a task model with a phx generator
* Tied together tasks with jobs and channels
* Clarified documentation
* More comments
* [WIP] hooked up basic video downloading; starting work on metadata
* Added metadata model and parsing. Adding the metadata model made me realize that, in many cases, yt-dlp returns undesired input in stdout, breaking parsing. To get the metadata model working, I had to change how the app interacts with yt-dlp: output is now written to a file on disk, which is immediately re-read and returned.
* Added tests for video download worker
* Hooked up video downloading to the channel indexing pipeline
* Adds tasks for media items
* Updated video metadata parser to extract the title
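The stdout-pollution workaround described in that commit — having the tool write its JSON output to a dedicated file that is then re-read, rather than parsing stdout directly — is a general pattern. The sketch below illustrates it in Python rather than the project's Elixir; `run_and_read` and the convention that the command takes the output path as its last argument are assumptions for illustration:

```python
import json
import subprocess
import tempfile
from pathlib import Path

def run_and_read(command: list[str]) -> dict:
    """Run a command that writes JSON to a known file path, then read
    that file back instead of trusting stdout, which may be polluted
    with warnings or progress noise."""
    with tempfile.TemporaryDirectory() as tmp:
        output_path = Path(tmp) / "output.json"
        # The command receives the file path to write to; stdout is
        # captured but deliberately ignored.
        subprocess.run(command + [str(output_path)], check=True, capture_output=True)
        return json.loads(output_path.read_text())
```

The design tradeoff is an extra disk round-trip per invocation in exchange for output that is guaranteed to contain only the structured payload.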
kieraneglin added the documentation (Improvements or additions to documentation), enhancement (New feature or request), media downloads, and media categorization labels on Jan 31, 2024
kieraneglin added a commit that referenced this pull request on Mar 29, 2024:
* Updated package.json (and made an excuse to make a branch)
* Video filepath parser (#6)
* Initial implementation of media profiles (#7)
* Add "channel" type Media Source (#8)
* Index a channel (#9)
* Download indexed videos (#10)
* Ran linting

(The squashed commit message repeats the per-PR histories listed above.)
What's new?
Adds the ability to add a media profile, attach channels to it, index those channels, and download those videos.
See the individual PRs' histories for a better view of what actually went on, because it's way too much to cover here.
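The description above sketches a data flow: a media profile owns channels, indexing a channel discovers videos and records them as media items, and creating a media item kicks off a download. As a rough illustration only — the names, types, and structure here are hypothetical stand-ins for the app's actual Elixir models and Oban workers:

```python
from dataclasses import dataclass, field

@dataclass
class MediaItem:
    title: str
    downloaded: bool = False

@dataclass
class Channel:
    url: str
    items: list = field(default_factory=list)

@dataclass
class MediaProfile:
    name: str
    channels: list = field(default_factory=list)

def download(item: MediaItem) -> None:
    """Stand-in for the download worker; in the app this is a
    background job driving yt-dlp."""
    item.downloaded = True

def index_channel(channel: Channel, listing: list[str]) -> None:
    """Indexing discovers videos and records them as media items;
    record creation hooks then enqueue a download for each item."""
    for title in listing:
        item = MediaItem(title=title)
        channel.items.append(item)
        download(item)
```

In the real app both steps run asynchronously as Oban jobs; this sketch flattens them into direct calls to show the ownership chain.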