Mid-term evaluation
Hi folks! For the midterm, some of you are wondering about the state of the students' projects. Here is mine.
Do you remember this Topaz mockup? Maybe you would prefer to watch something live? It's not sexy, but it works (Murphy's law struck during the cast)! Hence I hope the *proof of concept* goal of the midterm is reached :-).
Right now, the "smart" part is *still* a concept, but it should become more concrete in a few days:
- classes are a kind of queue, with foreground/background state and volume level settings: intraclass conflicts are resolved with these class parameters, unless the user wants to override the "smart orders" given to the stream (here you can have a look at the s-exp .rule draft, but I don't like it enough for the moment, which is why there is no real code yet)
- interclass policy is just a matter of a few calls on the "class" object (push everything to the background, go back to normal with a simple "pop" on the class)... the interface is just not implemented yet. I have to discuss this further with ensonic.
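To make the class/queue idea a bit more concrete, here is a minimal Python sketch. Everything below is my illustration, not the real GSmartMix code (which is C/GStreamer): the class names, the 0.2 ducking factor, and the state stack are all made up.

```python
# Hypothetical sketch of a GSmartMix "sound class" -- not the real API.
# A class behaves like a queue of streams sharing a volume level and a
# foreground/background state that can be pushed and popped.

class Stream:
    def __init__(self, name):
        self.name = name
        self.volume = 1.0

class SoundClass:
    def __init__(self, name, volume=1.0):
        self.name = name
        self.volume = volume            # class-wide volume level
        self.streams = []               # the "queue" of streams
        self._states = ["foreground"]   # state stack: push/pop background

    @property
    def state(self):
        return self._states[-1]

    def add(self, stream):
        self.streams.append(stream)
        self._apply()

    def push_background(self):
        # e.g. the "music" class gets pushed back when a "chat" class starts
        self._states.append("background")
        self._apply()

    def pop(self):
        # go back to normal with a simple pop on the class
        if len(self._states) > 1:
            self._states.pop()
        self._apply()

    def _apply(self):
        # intraclass resolution: every stream follows the class settings
        factor = 0.2 if self.state == "background" else 1.0
        for s in self.streams:
            s.volume = self.volume * factor

music = SoundClass("music", volume=0.8)
music.add(Stream("rhythmbox"))
music.push_background()
print(music.state, round(music.streams[0].volume, 2))  # background 0.16
music.pop()
print(music.state, round(music.streams[0].volume, 2))  # foreground 0.8
```

The point of the stack is that a "pop" restores whatever state was active before, so nested overrides unwind in the right order.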
I have chosen to use something similar to Devil's Pie for overriding class settings and customization, because of its small memory consumption and speed (think about Python, GConf, RETE, XML, ... they are bad candidates for this purpose).
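In the Devil's Pie spirit, the matcher can stay a flat, first-match-wins rule list. The sketch below is hypothetical (the predicates, property names, and rule format are mine, not the actual .rule syntax), but it shows why this stays cheap compared to embedding a full scripting or rule engine:

```python
# Hypothetical sketch of Devil's-Pie-style rules applied to streams.
# A rule is (predicate, overrides): the first matching rule wins and its
# overrides replace the default class settings. No interpreter embedding,
# no GConf round-trips, no RETE network -- just a linear scan.

RULES = [
    (lambda s: s.get("app") == "ekiga",   {"class": "chat",  "volume": 1.0}),
    (lambda s: s.get("role") == "event",  {"class": "event", "volume": 0.6}),
    (lambda s: True,                      {"class": "music", "volume": 0.8}),
]

def classify(stream_props):
    for predicate, overrides in RULES:
        if predicate(stream_props):
            return overrides
    return {}

print(classify({"app": "ekiga", "role": "phone"}))
```

A handful of rules evaluated per stream creation is negligible; the catch-all last rule plays the role of the default class settings.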
Pieces of the BOF talk Lennart and I gave are now available. We are taking care of the interaction between PulseAudio and GSmartMix.
Issues:
- as you can imagine, most GNOME applications don't connect to the "notify::volume" event, so they don't update their "(bacon)volume" sliders. RB, Totem, SoundJuicer, ... are you ready to accept patches that connect to this event?
Another thing would be to catch state changes in the sink element (when it is "paused" from outside).
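For application authors the fix is small: connect to the sink's "notify::volume" signal and refresh the slider from the callback. Here is a pure-Python sketch of that notification pattern (no PyGObject here; `FakeSink` below just mimics GObject property notification, the real call would be `sink.connect("notify::volume", on_notify)`):

```python
# Pure-Python mimic of GObject property notification ("notify::volume").
# In a real app: sink.connect("notify::volume", on_notify) and read the
# sink's volume property in the callback to update the slider.

class FakeSink:
    def __init__(self):
        self._volume = 1.0
        self._handlers = {}

    def connect(self, signal, callback):
        self._handlers.setdefault(signal, []).append(callback)

    @property
    def volume(self):
        return self._volume

    @volume.setter
    def volume(self, value):
        self._volume = value
        # emit "notify::volume" so UI sliders can follow external changes
        for cb in self._handlers.get("notify::volume", []):
            cb(self, "volume")

slider_position = []

def on_notify(sink, prop):
    # what RB/Totem/SoundJuicer would do: move the (bacon)volume slider
    slider_position.append(sink.volume)

sink = FakeSink()
sink.connect("notify::volume", on_notify)
sink.volume = 0.5   # e.g. GSmartMix ducking the stream from outside
print(slider_position)  # [0.5]
```

Without that connection the slider silently drifts out of sync the first time GSmartMix changes the volume behind the application's back.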
- regarding the recent discussions about device settings: Mezcalero, when applications start to use "gconfaudiosink profile=chat" (as proposed in the patch in #329112), GSmartMix will immediately benefit from this description as a sound class description. So it's a good step. The real sink behind it is still an autoaudiosink or a gconfsink (I have to tackle the infinite loop that occurs if gsmartaudiosink is set as the default gconf sink...)
Now, about device selection, latency, and your mail about async / in-process stream creation. Yes, GSmartMix could provide its own solution, with a SetDevice()/AskDevice() pair of methods. You are right, the latency will be there. The only answer I can formulate is that you should not use these methods if you want immediate stream creation. But for "sound events", the only remark I have is that they could be initialized when the application/daemon starts, so that sound playback does not have to wait for the server to give its device settings.
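The "initialize at start" idea, sketched in Python (AskDevice()/SetDevice() are the method names proposed above; the worker thread, the cache, and the fake server round-trip are my own illustration, not GSmartMix code):

```python
# Sketch: ask the server for device settings once at application start,
# so that later stream creation never blocks on the round-trip.
# AskDevice() is the proposed method name; everything else is made up.

import threading
import time

def ask_device_from_server(role):
    # stand-in for the AskDevice() call over the bus, with its latency
    time.sleep(0.05)
    return {"role": role, "device": "hw:0"}

class DeviceCache:
    def __init__(self):
        self._devices = {}
        self._ready = threading.Event()

    def prefetch(self, role):
        # called once at daemon/application startup
        def worker():
            self._devices[role] = ask_device_from_server(role)
            self._ready.set()
        threading.Thread(target=worker, daemon=True).start()

    def get(self, role):
        # by playback time the answer is (normally) already in the cache
        self._ready.wait()
        return self._devices[role]

cache = DeviceCache()
cache.prefetch("event")              # at startup
# ... later, when a sound event must be played immediately:
print(cache.get("event")["device"])  # hw:0
```

The round-trip latency is paid once at startup instead of on the playback path; an immediate stream creation then only reads the cached answer.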
- it would be cool to have a "focus" event on the top-level window, so that the sound can follow the user if this option is enabled in a sound class. Do you have any idea how I can get this event?
- I don't know what the best way is to get the application icons (libmenu, libwnck, a D-Bus event à la libnotify)
- the code is still managed in bazaar-ng, but I have registered on sf.net to switch to svn instead