Features/suggestions for mokey/mocha?

Hello,
I just tried the mocha and mokey demos. I’m not sure if this is the correct place to post suggestions/questions about features (or whether they’re welcome at all, since I’ve only tried the demos and I’m not a customer, at least not yet), but here goes. I was wondering whether the tracking uses some sort of optical flow, and whether there are plans for retiming and z-depth extraction in either mocha or mokey.
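To clarify what I mean by optical flow: the classic brightness-constancy approach. A minimal single-window Lucas-Kanade solve looks something like this (my own illustrative sketch, obviously not your code) — solve the 2x2 normal equations built from spatial gradients and the temporal difference for the flow vector:

```cpp
#include <cstddef>
#include <vector>

// Minimal Lucas-Kanade step for one tracking window: from image
// gradients (Ix, Iy) and the frame-to-frame difference (It), solve
// the 2x2 normal equations for the flow vector (u, v).
struct Flow { double u, v; };

Flow lucasKanadeWindow(const std::vector<double>& Ix,
                       const std::vector<double>& Iy,
                       const std::vector<double>& It) {
    double a = 0, b = 0, c = 0, p = 0, q = 0;
    for (std::size_t i = 0; i < Ix.size(); ++i) {
        a += Ix[i] * Ix[i];   // sum of Ix^2
        b += Ix[i] * Iy[i];   // sum of Ix*Iy
        c += Iy[i] * Iy[i];   // sum of Iy^2
        p -= Ix[i] * It[i];   // -sum of Ix*It
        q -= Iy[i] * It[i];   // -sum of Iy*It
    }
    double det = a * c - b * b;  // singular when the window lacks texture
    if (det == 0) return {0, 0};
    return {(c * p - b * q) / det, (a * q - b * p) / det};
}
```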
Another question: from what I read on Creative COW a while ago, it seems there’s the possibility of adding an FPGA to accelerate some transforms/warping. Any chance of having this done on the GPU instead, to accelerate the workflow (via GLSL shaders or NVIDIA’s CUDA API/library, for instance)? About the tracking itself, if I remember correctly the Gandalf library is used for tracking; is there any way to accelerate that on the GPU too?
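To make the GPU idea concrete: a planar warp maps naturally onto a fragment shader, since each output pixel can be sampled independently. Something along these lines (illustrative GLSL embedded in a C++ string; it assumes the tracker hands you an inverse homography, and is certainly not your actual pipeline):

```cpp
// Hypothetical GLSL fragment shader for GPU-accelerated planar
// warping: each output pixel is mapped back through a 3x3
// homography (the kind of transform a planar tracker solves for)
// and the source frame is sampled at that location.
const char* kHomographyWarpShader = R"glsl(
#version 120
uniform sampler2D sourceFrame; // frame to warp
uniform mat3 invHomography;    // inverse planar transform

void main() {
    // Inverse warping: find where this output pixel came from.
    vec3 p = invHomography * vec3(gl_TexCoord[0].st, 1.0);
    vec2 uv = p.xy / p.z;      // perspective divide
    gl_FragColor = texture2D(sourceFrame, uv);
}
)glsl";
```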
Another question: mokey has lots of tools for matte extraction, Primatte for instance. I was wondering if there are any plans to support external keyers; to be more precise, whether OFX support would be added (it would allow The Foundry’s Keylight keyer, for instance, since they have an OFX version).
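For reference, OFX plugins all expose the same small C entry points, so host support is fairly self-contained. A stripped-down skeleton of what an OFX image-effect plugin exports (hypothetical identifier, behavior simplified from the OFX headers):

```cpp
#include <cstring>
#include "ofxCore.h"
#include "ofxImageEffect.h"

// Skeleton of the entry points an OFX image-effect plugin exports.
// A host enumerates plugins via OfxGetNumberOfPlugins()/OfxGetPlugin()
// and drives them by sending actions to mainEntry.
static OfxHost* gHost = nullptr;

static void setHost(OfxHost* host) { gHost = host; }

static OfxStatus mainEntry(const char* action, const void* handle,
                           OfxPropertySetHandle inArgs,
                           OfxPropertySetHandle outArgs) {
    if (std::strcmp(action, kOfxImageEffectActionRender) == 0) {
        // A real keyer would fetch images through the host's suites
        // and write its matte here.
        return kOfxStatOK;
    }
    return kOfxStatReplyDefault; // let the host apply default behavior
}

static OfxPlugin examplePlugin = {
    kOfxImageEffectPluginApi, 1,   // API name and version
    "com.example.DemoKeyer",       // hypothetical identifier
    1, 0,                          // plugin version
    setHost, mainEntry
};

OfxExport int OfxGetNumberOfPlugins(void) { return 1; }
OfxExport OfxPlugin* OfxGetPlugin(int nth) {
    return nth == 0 ? &examplePlugin : nullptr;
}
```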

That’s it. I’m afraid that with the number of features/suggestions I’ve posted you’d end up with an entirely new product, but I’ve been curious about these for some time now.

Best regards

Sancho Rodriguez

Hey Ross, I have a couple of feature requests: Layer Transparency, and Screen as a blend mode option.
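For reference, the Screen math is just an inverted multiply per channel:

```cpp
// Screen blend mode, per channel, values normalized to [0, 1]:
// brightens like an additive mix but never clips past 1.0.
float screenBlend(float base, float layer) {
    return 1.0f - (1.0f - base) * (1.0f - layer);
}
```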

Regards.

How about:

  • a nudge node tool
  • some kind of RAM-based preview to see your matte/comp at full speed.

I’m just starting with mocha… Maybe I’ll add more later.

This is certainly the place to get your feature requests listened to! Please feel free to keep them coming.

All interesting suggestions - perhaps most are better placed in the monet section, which already has compositing and keying capability with Primatte.

I can’t really comment much on GPU acceleration or OFX other than to say that we are well aware of these issues.

For me, a way to simulate depth of field would be grand -
I’m guessing the z-depth extraction would help reach that goal…
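In case it helps, one way a z-depth map could drive a DOF simulation is to map each pixel’s distance from the focal plane to a blur radius. A crude single-channel sketch (variable box blur as a stand-in for a proper circle-of-confusion model, illustrative only):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Crude depth-of-field sketch: per-pixel variable box blur whose
// radius grows with distance from the focal plane. depth values
// are assumed normalized to [0, 1].
void depthBlur(const std::vector<float>& src, const std::vector<float>& depth,
               std::vector<float>& dst, int w, int h,
               float focalDepth, float maxRadius) {
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            // Blur radius proportional to |depth - focal plane|, clamped.
            float d = std::fabs(depth[y * w + x] - focalDepth);
            int r = static_cast<int>(std::min(d * maxRadius, maxRadius));
            float sum = 0.0f;
            int count = 0;
            for (int dy = -r; dy <= r; ++dy) {
                for (int dx = -r; dx <= r; ++dx) {
                    int sx = std::clamp(x + dx, 0, w - 1);
                    int sy = std::clamp(y + dy, 0, h - 1);
                    sum += src[sy * w + sx];
                    ++count;
                }
            }
            dst[y * w + x] = sum / count;
        }
    }
}
```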