audio-processing
Here are 1,346 public repositories matching this topic...
Guidelines
- I have read the guidelines.
Version/Commit hash
trunk
Describe the bug
Look at the screenshot in #618: I've highlighted the areas that use a font other than SF UI.
Expected behavior
The whole interface should use SF UI.
DALI + Catalyst = 🚀
Thanks for a fascinating library!
Is there some way to insert user-written pure-Python modules (using NumPy, of course) into the signal chain? It would be very desirable to be able to write Pedalboard plugins as Python functions with an interface like this one of yours.
I certainly have a lot of code
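A minimal sketch of what such a callable-based plugin interface could look like. This is hypothetical: Pedalboard does not expose this API, and both `gain_plugin` and `run_chain` are names invented here to illustrate the request, not part of the library.

```python
import numpy as np

# Hypothetical interface: a "plugin" is just a function taking a
# (channels, samples) float array plus the sample rate, and returning
# processed audio of the same shape.
def gain_plugin(audio: np.ndarray, sample_rate: float) -> np.ndarray:
    """A trivial pure-NumPy plugin that halves the signal level."""
    return audio * 0.5

# A stand-in signal chain that applies callables in order, the way
# Pedalboard applies its built-in plugins.
def run_chain(audio: np.ndarray, sample_rate: float, plugins) -> np.ndarray:
    for plugin in plugins:
        audio = plugin(audio, sample_rate)
    return audio

signal = np.ones((2, 4), dtype=np.float32)
out = run_chain(signal, 44100.0, [gain_plugin, gain_plugin])
```

The appeal of this shape is that any existing NumPy DSP code drops in unchanged, since the plugin boundary is just an array in, array out contract.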
Sampler Graphics
We need some good graphics for the main sampler screen. This is where you can do rudimentary editing of the samples that are played in the sequencer.
There are two screens. The main screen has the controls:
- Volume (this one may be changed later)
- Speed (play speed; the sample also pitches up or down)
- Filter (at the center the sound is unchanged; away from center it engages either hi-pass
🚀 The feature
Cache the PitchShift Resample kernel to improve the speed of this transform on second usage.
Motivation, pitch
While porting some augmentation code from librosa to torchaudio, I noticed that PitchShift() and Resample() are slower than librosa on CPU. In the case of transforms.Resample(), this changes on the second run because the kernel is cached, but PitchShift does no such caching.
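The caching being requested can be sketched generically as memoizing the kernel on its construction parameters, so the expensive build runs once and later transforms reuse it. This is not torchaudio's actual code; the cache layout, the `get_resample_kernel` name, and the stand-in kernel computation are all assumptions for illustration.

```python
import numpy as np

_kernel_cache = {}

def get_resample_kernel(orig_freq: int, new_freq: int) -> np.ndarray:
    """Build (or reuse) a resampling kernel keyed by its parameters.

    The costly build step runs once per (orig_freq, new_freq) pair;
    subsequent calls hit the cache — the behaviour Resample() already
    has and the issue asks PitchShift() to adopt."""
    key = (orig_freq, new_freq)
    if key not in _kernel_cache:
        # Stand-in for the real (expensive) windowed-sinc computation.
        width = 64
        t = np.arange(-width, width + 1) * (new_freq / orig_freq)
        _kernel_cache[key] = np.sinc(t)
    return _kernel_cache[key]

k1 = get_resample_kernel(44100, 32000)
k2 = get_resample_kernel(44100, 32000)
assert k1 is k2  # second use reuses the cached kernel
```

In torchaudio's case the same effect can be had today by constructing the transform object once and calling it repeatedly, rather than re-instantiating it per clip.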
What?
Currently, the API manually constructs and throws its own messages and errors. We should move them to werkzeug exceptions.
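A sketch of the proposed change, assuming a werkzeug/Flask-style stack: instead of hand-building error payloads, raise werkzeug's `HTTPException` subclasses and let the framework render consistent responses. The `get_track` function and its arguments are hypothetical examples, not this project's API.

```python
from werkzeug.exceptions import BadRequest, NotFound

# Before: the API hand-builds error payloads, e.g.
#   return {"error": "track not found"}, 404
# After: raise werkzeug's exception classes, which carry the status
# code and render a standard HTTP error response.
def get_track(tracks: dict, track_id: str):
    if not track_id:
        raise BadRequest(description="track_id is required")
    if track_id not in tracks:
        raise NotFound(description=f"no track with id {track_id!r}")
    return tracks[track_id]

try:
    get_track({}, "t1")
except NotFound as e:
    assert e.code == 404
```

This centralizes status codes and message formatting in one place and lets a single error handler serialize every failure uniformly.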
I previously figured out a way to get the (x, y, z) data points for each frame from one hand, but I'm not sure how to do that for the new Holistic model that they released. I am trying to get all the landmark data points for both hands as well as parts of the chest and face. Does anyone know how to extract the Holistic landmark data and print it to a text file, or at least give me some directions as to how to approach it?
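One possible approach, sketched under the assumption of MediaPipe's Python Holistic API (where each result group such as `results.left_hand_landmarks` holds a `.landmark` list of points with `.x`, `.y`, `.z`): flatten each group to text lines and append them per frame. The helper name and the CSV-ish line format are choices made here, not MediaPipe's.

```python
from types import SimpleNamespace

def landmarks_to_lines(name, landmark_list):
    """Flatten one landmark group (e.g. the left hand) into text lines.

    `landmark_list` is anything whose `.landmark` items expose .x/.y/.z,
    as MediaPipe's result groups do; it may be None when nothing was
    detected in the frame, in which case no lines are produced."""
    if landmark_list is None:
        return []
    return [
        f"{name},{i},{lm.x:.6f},{lm.y:.6f},{lm.z:.6f}"
        for i, lm in enumerate(landmark_list.landmark)
    ]

# With MediaPipe itself installed, per-frame use would look roughly like:
#   import cv2, mediapipe as mp
#   with mp.solutions.holistic.Holistic() as holistic:
#       results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
#       lines = (landmarks_to_lines("pose", results.pose_landmarks)
#                + landmarks_to_lines("left_hand", results.left_hand_landmarks)
#                + landmarks_to_lines("right_hand", results.right_hand_landmarks)
#                + landmarks_to_lines("face", results.face_landmarks))
#       with open("landmarks.txt", "a") as f:
#           f.write("\n".join(lines) + "\n")

# Tiny fake result group, standing in for a MediaPipe detection:
fake = SimpleNamespace(landmark=[SimpleNamespace(x=0.1, y=0.2, z=0.3)])
lines = landmarks_to_lines("left_hand", fake)
```

The chest points the question mentions live in the pose group: Holistic's pose landmarks include the shoulder and hip indices, so filtering `results.pose_landmarks` by index covers them.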