September 6, 2002

DCT Quality for NCNI (Tyler Johnson)

I actually started thinking about this right off the bat, because I really don't like the idea of having to run actual video or even fake data with video characteristics every single time an end-user wants to test their connection. After all, what's the difference between that and just firing up a video conferencing session anyway? And it also requires both end-points to cooperate and maybe install something. I happen to be one of the few computer geeks who places a really high value on end-user convenience and usability. (I also use Mac OS X as my workstation of choice, which I guess reinforces that.)

What I am currently considering as the best option is to run tests on a simulated network we have control over (I understand there are tools for simulating network characteristics on FreeBSD routers/gateways) for some different video stream types (e.g. H.263+ with common annexes, MPEG-2, H.263, specific tools) and using that to generate a model which represents "quality" based on the network characteristics. Either that, or use probability and the known effects of loss, delay, or jitter on specific bits of data to generate a probabilistic model. The latter is more complex though.
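To make the first idea concrete, here is a minimal sketch of what such a pre-computed model could look like once the simulated-network runs are done. Everything here is hypothetical — the codec names, threshold numbers, and rating labels are placeholders, not real measurements:

```python
# Hypothetical pre-computed quality models: for each codec, a list of
# (max_loss_pct, max_jitter_ms, max_delay_ms, rating) rows, ordered from
# strictest to loosest. The numbers are invented placeholders; the real
# values would come from the simulated-network test runs.
QUALITY_MODELS = {
    "h263+": [
        (0.5, 10.0, 150.0, "excellent"),
        (2.0, 30.0, 300.0, "acceptable"),
        (5.0, 60.0, 500.0, "poor"),
    ],
    "mpeg2": [
        (0.1,  5.0, 150.0, "excellent"),
        (1.0, 20.0, 300.0, "acceptable"),
        (3.0, 40.0, 500.0, "poor"),
    ],
}

def predict_quality(codec, loss_pct, jitter_ms, delay_ms):
    """Return the first rating whose thresholds the measurements satisfy."""
    for max_loss, max_jitter, max_delay, rating in QUALITY_MODELS[codec]:
        if loss_pct <= max_loss and jitter_ms <= max_jitter and delay_ms <= max_delay:
            return rating
    return "unusable"
```

A threshold table like this is the simplest form the model could take; the probabilistic approach would replace the table lookup with something more involved.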

Then, given those models, the end-user could simply run some simple ping-type tests with different packet sizes to figure out the current characteristics of their connection and then match that up with the pre-computed model associated with their codec/application.
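As a rough sketch of what the ping-type test could look like: the snippet below sends UDP datagrams of several sizes to an echo server and measures loss, round-trip time, and jitter. This is just an illustration under assumed conditions (a cooperating echo endpoint, UDP instead of real ICMP ping); the packet sizes and counts are arbitrary:

```python
import socket
import statistics
import threading
import time

def udp_echo_server(sock):
    # Echo every datagram back to its sender until the socket is closed.
    while True:
        try:
            data, addr = sock.recvfrom(65535)
        except OSError:
            return
        sock.sendto(data, addr)

def probe(addr, sizes=(64, 512, 1400), count=5, timeout=0.5):
    """Send padded datagrams of several sizes to an echo endpoint.

    Returns (loss_pct, mean_rtt_ms, jitter_ms); mean RTT is None if
    every probe was lost.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    rtts = []
    sent = 0
    for size in sizes:
        payload = b"x" * size
        for _ in range(count):
            sent += 1
            start = time.monotonic()
            sock.sendto(payload, addr)
            try:
                sock.recvfrom(65535)
            except socket.timeout:
                continue  # count as a lost packet
            rtts.append((time.monotonic() - start) * 1000.0)
    sock.close()
    loss_pct = 100.0 * (sent - len(rtts)) / sent
    mean_rtt = statistics.mean(rtts) if rtts else None
    jitter = statistics.stdev(rtts) if len(rtts) > 1 else 0.0
    return loss_pct, mean_rtt, jitter
```

The measured (loss, RTT, jitter) triple is exactly what gets matched against the pre-computed model for the end-user's codec.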

Ketan suggested an interesting approach to this where an end-user could point their web browser to a special server that would be able to run these special ping-type tests. This would not require an end-user to even know what pinging means. Of course, this only observes the connection between the end-user and the special server, not between the two end-users who will be communicating. But it's better than nothing. It would also be possible to conduct this test several times each day over a few weeks to generate aggregate characteristics for a specific end-user and let them know what they can expect, average-case (maybe also best- and worst-case).
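The aggregation part of that is simple enough to sketch directly — given repeated (loss, RTT) samples collected over days or weeks, report the best-, average-, and worst-case figures (a toy illustration; a real report would probably track jitter and time-of-day too):

```python
import statistics

def summarize(samples):
    """samples: list of (loss_pct, rtt_ms) tuples collected over time.

    Returns best-, average-, and worst-case figures per metric.
    """
    losses = [loss for loss, _ in samples]
    rtts = [rtt for _, rtt in samples]
    return {
        "loss_pct": {"best": min(losses),
                     "avg": statistics.mean(losses),
                     "worst": max(losses)},
        "rtt_ms": {"best": min(rtts),
                   "avg": statistics.mean(rtts),
                   "worst": max(rtts)},
    }
```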

On Friday, September 6, 2002, at 07:14 AM, Tyler Johnson wrote:

Wesley, I did a cursory review of the information you sent out regarding evaluating quality for compressed video. I suspect we are going to face an architectural decision and I would like to get you thinking about it now.

I see two possible approaches to predicting whether a particular network link will perform well for compressed video. The first is to relate network performance to perceptual quality. So we would say, for example, 5% packet loss causes sufficient tiling at target data rates/codecs that most people would regard it as unusable. This would be an extremely convenient thing to do. Perhaps there would be a plug-in architecture that lets you insert your own rating scales. The downside is of course that every codec implementation is different and deals with packet loss, jitter, and buffering in ways that could render those correlations meaningless.

The other approach would be to actually send sample video and analyze it on the other end. On the surface, this would seem to ensure dead accurate results. The downside is that the tool would be bigger, with more components. I also wonder whether, if we generate sample video, we have the same problem as approach #1 in terms of different codecs behaving differently than the sample.

I would like for you to think about these issues and what the tradeoffs might be to going down either path. Perhaps there is some other approach?

Posted by josuah at September 6, 2002 8:06 PM UTC+00:00
