Re: [yoshimi-user] Midi learn

  • From: Rob Couto <dbtx11@xxxxxxxxx>
  • To: louis cherel <cherel.louis@xxxxxxxxx>
  • Date: Sat, 11 Jul 2015 01:38:08 -0400

Hello Louis and everyone,

On 7/9/15, louis cherel <cherel.louis@xxxxxxxxx> wrote:

Hi everyone,
In 2011, Alessandro Preziosi (licnep) started working on the MIDI learn
functionality, to ease the use of MIDI controls for every knob on the
interface (right click > MIDI learn), and to permit live modification of
a sound while it is playing, such as the LFO frequency.
Unfortunately, his code was not stable enough to be merged into the main
branch.

I am currently working on a new version of this feature, which I think
has been (for my personal use) sorely lacking in yoshimi/zyn.
For now, I have integrated Alessandro's work on the new version of yoshimi
and I am reworking it to be less of a hack and more of a feature.

I think I will be able to release a working version in late August.

Since Yoshimi is still in active development, I thought it could be
interesting to notify you of this. Also, maybe I have missed some
information where you say you are already working on this, in which
case I wouldn't want to interfere with your work.

I'm glad you asked, even though now a grace period seems to have ended
for me and I have to follow suit :) You didn't miss anything that I
know of, but mainly because until now I haven't announced that I'm
currently working on separating the GUI and SynthEngine to make them
completely agnostic of each other's code. That's just in preparation
for the real fun part-- I intend to work out a new UI or two, to
arrive in a new compile-time option. But since I still need to hook
the UI back to the synth through some kind of wrapper, I realized that
MIDI learning would best be done in the same place at the same time,
so that any UI implementation will work with it and without a lot of
duplicated logic. It also needs controllers that are saved in state or
in parameter files to keep working when they are loaded on the command
line along with --no-gui or when Yoshimi is compiled with
UserInterface=none. ("none" is strictly headless mode, for people who
want or need a build that depends on no toolkit at all, such as an
embedded installation or an external sound engine e.g. for a game.)

At the same time, I believe the thread synchronization can be improved
to banish some more xruns, because the UI and synth won't share
significant data structures or modify each other's; they will only pass
messages about how to update themselves. Andrew's work with asynchronous GUI
messaging seems a good pattern to follow. Even though its operation
currently depends on FLTK, it's not terribly difficult to make it work
both ways and have variations of it that work for all threads-- just a
lot (some hundreds) of small modifications. Ultimately I mean to make
the JACK callback genuinely realtime-safe by replacing the locks.

Now, the reason that is slightly off-topic even though it's the same
*kind* of topic: Is someone else working near these things and would
like *me* to not step on *their* toes? I wanted to get the UI
separation finished and push it somewhere, before I tried to be the
one that would start a conversation, but it couldn't wait any longer.
I've recently pestered Will about this and if I understand correctly,
Andrew already has something similar in mind if not in progress. I
started planning on all that because I have time and I feel like
cooking a whole new UI and this is a good point to pay off or even
cancel some old technical debts. Of course making another UI is
creating more technical debt-- essentially all future user interface
changes would require more work, even if each UI is easier to
work on. Thoughts? Comments? If you want to wait (as I intended to)
until I have anything to show, that's quite all right.

Completely back on topic: Thanks, Louis, for checking in. Here is my
position: my position may be irrelevant :) I've been just a casual
tester & minor guerrilla contributor with random bits of insight on
occasion. I haven't looked deep into Alessandro's code recently but I
actually planned to recycle at least the Midi Controllers window in
order to get the MIDI learn part of the UI wrapper ready to go with
the FLTK GUI first, and using that to make sure the synth-side
controllers were working well. Only then would I start working out
another entire UI, while anyone who wanted to could test-drive the
intermediate result. For what it's worth, I plan to borrow Calf's GUI
engine because IMO it is Pretty Sweet(TM), copy the current general
layout of Yoshimi, and set up a default theme. Afterward I want to
come up with another, something based on ncurses-- which is why I keep
saying UI and not always GUI.

I believe that whether this matters to you largely depends on whether
my plans matter to anyone else, so that's how the topics meet-- I
wasn't really trying to hijack your thread! Anyway I'm fairly sure
that most of what I'm doing is basically incompatible with most of
Alessandro's work. I looked at it before and adapted it to 1.0.0, but I
was one who said long ago that it would need a lot of work to be
finished, and today I'm preparing to wrap up that effort within a
limited redesign. So, I hope to have an alternate MIDI-learning UI to
play with by the end of this year, and any reusable parts of his
MIDI-learning code safely adapted into the original UI before then. Still,
nobody is required to be impressed by this mere summary, and the
Yoshimi team isn't required to pull what I push, and my plans might
not affect you at all. Have to let the seasoned robot fighters speak
up :)

Good luck, have fun...

--
Rob

