As mentioned in several earlier posts to the linux-media mailing list, the soc-camera framework in its present state, as published in this git branch, has converted all its sensor and TV-decoder drivers to be usable with generic V4L2 bridge drivers, except ov6650 and mt9t031, which require a little more work (thanks again to Hans Verkuil for the V4L2 control framework conversion). Next on our roadmap is support for the Media Controller and pad-level APIs. Below are a couple of ideas on how this could be done; there is no supporting code yet. The purpose of this post is to formalise my ideas a bit and to give you all a chance to point out any flaws in the concept. Since I haven't worked very closely with MC so far, such flaws are quite possible.
At the moment the soc-camera framework is mostly designed around a model in which the video subsystem consists of a video capture interface on the SoC, handled as a single block, and one external capture device, such as a camera sensor or a TV-decoder, connected to that interface and additionally controlled over I2C or by some other means. Some extensions to this model, such as additional video processing units on the SoC or further external modules in the video signal path, are possible, but they are not very well integrated into soc-camera. Examples of such extensions are the CSI-2 controller on sh-mobile, and the I2C lane shifters residing on certain mt9m001 and mt9v022 camera modules, used to switch between 10- and 8-bit operation modes when these modules are used with the PXA270 SoC.
Extending this model to better support multi-entity configurations is also on my TODO list, but that is a separate task. Therefore, in this first step of the MC conversion I will address only this simplistic 2-point scheme, while trying to make design decisions that would keep support for more complex configurations in the future simple enough.
The actual idea for this first step is to add the ability to support client (sensor and decoder) drivers implementing the pad-level API to soc-camera in a native way. This means not wrapping subdev pad operations in standard video and core subdev operations, but building a minimal MC implementation on top of the existing soc-camera SoC (host / bridge) drivers, ideally without having to modify them at all. That way, if a standard subdev driver is attached to your SoC, you get the standard V4L2 user-space interface; if your subdev driver implements pad-level operations, you get a functional MC interface in user space with the same SoC driver.
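As a rough illustration of what "native" support could look like, the sketch below shows the soc-camera core invoking a client's pad-level set_fmt operation directly, falling back to the classic video operations when the subdev does not implement pad ops. Function and structure names follow the mainline v4l2_subdev_call() convention; exact signatures differ between kernel versions, so treat this as an assumption, not a finished implementation.

```c
/* Sketch only: a bridge calling a client's pad-level set_fmt directly,
 * instead of wrapping it in a video operation.  Signatures are taken
 * from the mainline kernel and may not match every kernel version. */
static int soc_camera_client_set_fmt(struct v4l2_subdev *sd,
				     struct v4l2_mbus_framefmt *mf)
{
	struct v4l2_subdev_format fmt = {
		.which	= V4L2_SUBDEV_FORMAT_ACTIVE,
		.pad	= 0,	/* assumed: the client's only source pad */
		.format	= *mf,
	};
	int ret;

	/* v4l2_subdev_call() returns -ENOIOCTLCMD if the subdev does
	 * not implement pad operations; the caller can then fall back
	 * to the classic video operations. */
	ret = v4l2_subdev_call(sd, pad, set_fmt, NULL, &fmt);
	if (!ret)
		*mf = fmt.format;
	return ret;
}
```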
Such a minimal MC implementation would create two entities: one for the actual video device (for the DMA engine) and one for the external video interface. In this simple case they shall be connected by an immutable link. The external pad of the video interface will then be linked to the external video device, at least in the typical simple case where the latter has only one source pad.
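The two-entity model above could be set up roughly as follows. This is only a sketch, and it uses the entity and link helpers from the current mainline media controller API (media_entity_pads_init(), media_create_pad_link()), which postdate this post; the helper name and the pad layout are assumptions for illustration.

```c
/* Sketch only: register the two entities of the minimal MC model and
 * join them with an immutable, always-enabled link.  Pad layout is an
 * assumption: vif pad 0 feeds the DMA engine, vif pad 1 receives data
 * from the external subdevice. */
static int soc_camera_init_entities(struct media_device *mdev,
				    struct video_device *vdev,
				    struct media_pad *vdev_pad,
				    struct media_entity *vif,
				    struct media_pad *vif_pads)
{
	int ret;

	vif_pads[0].flags = MEDIA_PAD_FL_SOURCE;
	vif_pads[1].flags = MEDIA_PAD_FL_SINK;
	ret = media_entity_pads_init(vif, 2, vif_pads);
	if (ret)
		return ret;
	ret = media_device_register_entity(mdev, vif);
	if (ret)
		return ret;

	vdev_pad->flags = MEDIA_PAD_FL_SINK;
	ret = media_entity_pads_init(&vdev->entity, 1, vdev_pad);
	if (ret)
		return ret;

	/* The video interface and the DMA engine live on the same SoC
	 * and are always connected, hence the immutable link. */
	return media_create_pad_link(vif, 0, &vdev->entity, 0,
				     MEDIA_LNK_FL_IMMUTABLE |
				     MEDIA_LNK_FL_ENABLED);
}
```

The external subdev's source pad would then be linked to vif pad 1 once the client driver is attached.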
Additionally, it should be possible for SoC drivers to implement advanced MC support, while still using parts of the soc-camera infrastructure.
In the most generic case, when both the SoC and the client drivers implement their own MC API support, the soc-camera framework must be made aware of the fact that the configuration of all the entities in the system can change at any time, bypassing soc-camera interfaces, which means the soc-camera core cannot rely on any cached values.
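In practice, "no cached values" would mean re-reading the client's active configuration at the points where it matters, e.g. immediately before starting the DMA engine, rather than trusting a copy stored at set_fmt time. A hedged sketch, again using the mainline pad-op convention and assuming a single source pad:

```c
/* Sketch only: re-query the client's active format instead of using a
 * cached copy, since MC-aware user space may have reconfigured the
 * entity behind soc-camera's back. */
static int soc_camera_refresh_fmt(struct v4l2_subdev *sd,
				  struct v4l2_mbus_framefmt *mf)
{
	struct v4l2_subdev_format fmt = {
		.which	= V4L2_SUBDEV_FORMAT_ACTIVE,
		.pad	= 0,	/* assumed single source pad */
	};
	int ret;

	ret = v4l2_subdev_call(sd, pad, get_fmt, NULL, &fmt);
	if (ret)
		return ret;
	*mf = fmt.format;
	return 0;
}
```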