November 27, 2002

The Open Video Project

Well, I found the web site Ketan was talking about: The Open Video Project. Unfortunately, the majority of its files are in one of MPEG-1, MPEG-2, or MPEG-4, and there is no raw data available, probably in large part due to the storage capacity that would be required to hold the raw frames in full detail. It did surprise me that the MPEG-4 versions of the same clip were approximately 10% of the MPEG-2 size. Regardless, this is a dead end.

Posted by josuah at 3:21 PM UTC+00:00 | Comments (0) | TrackBack

Larger TiVo Simulation

I added a bunch more policies to the existing TiVo Nation simulator and am running it right now. I'm guessing this will generate about 25,000 to 30,000 data points. Somehow I have to get all those data points onto graphs. Unfortunately, about 7 independent variables combine to generate those tens of thousands of data points (and that's after reducing the raw output to just the average, variance, and standard deviation). That means either a whole lot of graphs or a single 7-dimensional graph. The latter would be a lot nicer to work with, but it seems impossible.

Posted by josuah at 3:14 PM UTC+00:00 | Comments (0) | TrackBack

November 26, 2002

NCNI Meeting

Ketan and I attended another NCNI meeting today. Most of the time was spent discussing the path characteristics program a different student is working on. The information gathered by that program would eventually be used by our oracle to model the transmission link.

As for my project, we told people where we are right now. Tyler said he would get us some real video footage in digitized format (hopefully), which we can use to generate our actual test library. Ketan has seen mention of some sort of open video test library, which I will look for. That might provide us with some good test footage, assuming it has the video in raw form and not after it has been encoded; otherwise we would be degrading an already degraded stream. I also need to test out the Win32 version of the VQM Software. This will need to be automated somehow, and that might prove very difficult if the software is entirely GUI-based.

Regarding my other project, the TiVo Nation simulator, I need to get a few more graphs, including one where the TiVo-Human cache policy becomes increasingly aware of the total object space. Looks like I'll be generating another several thousand data points. By the time Ketan gets back from his trip/vacation, I should have all those plots finished up so we can just put together the paper for submission.

Posted by josuah at 2:25 AM UTC+00:00 | Comments (0) | TrackBack

November 24, 2002

TiVo Simulator Data

The TiVo Nation simulator finished all the tests in about 10-15 minutes. It generated about 9MB of data. I processed the raw output utility values, dropping the first 24 hours (48 iterations) of the data to allow the cache to somewhat stabilize, and then calculated the average, variance, and standard deviation. At first glance, it seems that the average utility is very close between the two cache policies tested (TiVo-human and fully-aware approximating), but the variance and standard deviation are much lower with the fully-aware approximating policy. I need to plot some comparative graphs, and also try a third policy: fully-aware approximating with compression. That policy should show some real gains over the TiVo-human policy.
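
For reference, the post-processing is simple enough to sketch in a few lines of Perl (the one-value-per-line output format here is an assumption, not necessarily the simulator's actual format):

#!/usr/bin/perl
use strict;
use warnings;

# Read utility values, one per line, from the simulator output.
my @utility = map { chomp; $_ } <>;

# Drop the first 24 hours (48 half-hour iterations) of warm-up.
splice(@utility, 0, 48);

my $n = @utility;
my $sum = 0;
$sum += $_ for @utility;
my $avg = $sum / $n;

my $sumsq = 0;
$sumsq += ($_ - $avg) ** 2 for @utility;
my $var = $sumsq / $n;

printf "avg=%.4f var=%.4f stddev=%.4f\n", $avg, $var, sqrt($var);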

Posted by josuah at 10:12 PM UTC+00:00 | Comments (0) | TrackBack

November 22, 2002

TiVo Simulator Plugged

I found the memory leak in the C++ version of the TiVo Nation simulator. In Perl I didn't have to worry about objects that weren't added to the cache, since they would no longer be referenced and would be automatically garbage collected. Not so, of course, under C++. I had forgotten to check whether an object was actually added to the cache and, if not, to explicitly delete it. No memory leaks anymore. The simulator is running right now and seems to be using a fixed 672 bytes of memory. In just under 6 minutes, I've gotten over 3MB of data. It would have taken the Perl script an hour or more to generate the same amount.

Posted by josuah at 10:10 PM UTC+00:00 | Comments (0) | TrackBack

C++ TiVo Simulator Leak

I completed the conversion of the TiVo Nation simulator from Perl to C++. But there's a massive memory leak which I'm trying to track down, and I also need to verify that the logic is correct. That's the problem with a low-level language like C++; in a high-level language like Perl, things just work and it's a lot easier to express ideas. On the up side, the C++ version is running at least an order of magnitude faster. (As would be expected after losing all of those eval statements in Perl.)

Posted by josuah at 8:01 AM UTC+00:00 | Comments (0) | TrackBack

November 20, 2002

Mac OS X Open Mash Issues

I've been corresponding with Claudio Allocchio, an Open Mash on Mac OS X user, for the past couple of weeks. He's run into a few problems with vic and vat.

When transmitting audio using vat from his iBook the loopback is delayed 0.5 to 0.75 seconds, and a receiving vat running under Linux is delayed up to 3 seconds. That's much larger than the 16 * 20ms ring buffer used, so I'm not sure where that's coming from. I've asked him to print out some ring buffer indices while testing to see if that shows anything.

He's also had a problem with vic crashing with the following error messages when he tries to use the "Mac OS X" source option. I'm guessing this occurs when he doesn't actually have a device attached; I need to add the appropriate checks to the code to ensure choosing "Mac OS X" does not do anything when there is no capture device.

QuickTime error [-32767]: VDCompressOneFrameAsync
QuickTime error [0]: OpenDefaultComponent
Segmentation Fault
(crash)

Posted by josuah at 6:07 AM UTC+00:00 | Comments (0) | TrackBack

Running TiVo Nation Simulator

I just finished making my latest revisions to the TiVo Nation simulator and am now running a test for two weeks (672 half-hour iterations) on a box that can hold up to 60 full-size objects. It's dumping the results to what will be a very large text file.
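
For the record, 672 comes from 14 days times 48 half-hour slots per day. A minimal sketch of the run parameters (the variable and file names here are mine, not the script's):

my $iterations = 14 * 48;  # two weeks of half-hour slots = 672
my $capacity   = 60;       # cache capacity, in full-size objects

open(my $out, '>', 'results.txt') or die "can't open results: $!";
for my $i (1 .. $iterations) {
    # step_simulation($i, $capacity);  # hypothetical per-iteration step
    # print $out ...;                  # one line of results per iteration
}
close($out);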

Posted by josuah at 5:53 AM UTC+00:00 | Comments (0) | TrackBack

November 19, 2002

Some TiVo Simulator Curves

I made some modifications to the TiVo Nation simulator code as described in my previous journal entry. I now have three curves each for the space and utility parameters, but a large number of margin-of-error curves, because what we want to find out is at what point the behaviors of the different policies cross. I still need to implement the new policies described there and fix the cache utility versus actual utility calculations, which I will do tomorrow, and then run the simulator to get some utility plots.

Posted by josuah at 3:28 AM UTC+00:00 | Comments (0) | TrackBack

November 14, 2002

TiVo Nation Discussion

At our meeting today, Ketan and I talked a bit more about what tests to run on the TiVo Nation simulator. To keep the total number of tests to a minimum, we decided to use one low, one average, and one high curve for each input parameter. This corresponds to standard, broad, and narrow distributions and to linear, fast-growth, and slow-growth curves. Margins of error will be zero, moderate, and large. He also pointed out that the quality versus space curves should not have any margin of error applied, since the size that results from a particular codec at a given quality value is fairly predictable. I also need to fix the simulator so that while the cache policy sees utility with the margin of error applied, the utility values actually output do not include it (since the viewer knows exactly what a particular object's utility is).
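
To keep those combinations straight, the cross-product of settings can be enumerated mechanically; here's a rough Perl sketch (the parameter names and the driver call are my own shorthand, not the simulator's actual interface):

use strict;
use warnings;

# One low, average, and high setting per input parameter.
my @distributions = qw(standard broad narrow);
my @growths       = qw(linear fast slow);
my @errors        = qw(zero moderate large);

foreach my $dist (@distributions) {
    foreach my $growth (@growths) {
        foreach my $error (@errors) {
            # Each combination is one simulator run.
            print "run: dist=$dist growth=$growth error=$error\n";
            # run_simulation($dist, $growth, $error);  # hypothetical
        }
    }
}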

We also discussed some cache policies. To model an existing TiVo user, we came up with a scheme where the user is only aware of a certain percentage of the object space. Anything outside this portion of the space is ignored. Anything inside it is placed into the cache, evicting objects of lesser utility. This models the current TiVo replacement policy, which requires the user, who only knows about a certain number of shows, to delete cached shows to make room for a show the user would like to watch. In contrast, a simple automatic cache policy would know about the entire object space and remove objects of lesser utility to make room in the cache, but would have some margin of error when determining object utility. Comparing the average utility curve as the margin of error ranges against the average utility curve as viewer awareness ranges will provide an interesting view of when the automatic cache policy becomes useful. A natural extension of the automatic cache policy is one where objects are compressed to try to maximize the number of objects and the overall utility.
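
A minimal Perl sketch of the TiVo-human policy as described (the data structures, and the treatment of awareness as a random fraction of arriving objects, are my assumptions):

use strict;
use warnings;

# $object and the elements of @$cache are hashrefs with {utility, size};
# $capacity is in size units; $awareness is the fraction of the object
# space the user knows about (0..1).
sub tivo_human_insert {
    my ($object, $cache, $capacity, $awareness) = @_;

    # Objects outside the user's awareness window are ignored entirely.
    return if rand() >= $awareness;

    # Evict cached objects of lesser utility until the new object fits.
    my $used = 0;
    $used += $_->{size} for @$cache;
    @$cache = sort { $a->{utility} <=> $b->{utility} } @$cache;
    while (@$cache && $used + $object->{size} > $capacity
           && $cache->[0]{utility} < $object->{utility}) {
        my $victim = shift @$cache;
        $used -= $victim->{size};
    }
    push @$cache, $object if $used + $object->{size} <= $capacity;
}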

Unfortunately, neither of us has any idea of how to even start looking at what sort of distribution, consumption, or utility curves would accurately model real television viewing behavior. The object arrival rate is relatively constant, since a set number of channels are broadcasting at any given time. We can guess at what sort of curves are appropriate for the other parameters, but there's nothing we can point to in support of our choices. Mark Lindsey had looked into this a little bit, so Ketan suggested I email him to see if he has any ideas.

Posted by josuah at 12:02 AM UTC+00:00 | Comments (0) | TrackBack

November 12, 2002

TiVo Nation Simulation Framework Complete

I just fixed the problem I was having with the anonymous subroutines from an external file. So the framework is complete and now all that's left is to actually run some simulation tests.

Posted by josuah at 6:12 AM UTC+00:00 | Comments (0) | TrackBack

TiVo Nation Framework

I just finished up the basic framework for the TiVo Nation simulation script. I figured out some good mathematical equations for representing the exponential growth curves and found some for the distribution curves, and verified them using Graphing Calculator by Pacific Tech. By using Perl's eval function, it's very easy to write these equations out in a separate file and then include them in the simulation.

Unfortunately, I've run into a little problem trying to incorporate anonymous subroutines from an external file into the program: I can't seem to assign an array of anonymous subroutines. I'll have to figure out what's going on there; otherwise it'll be harder to include arbitrary policies in the tests.
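
For reference, here is a sketch of one arrangement that does work (hypothetical file and policy contents, and not necessarily the exact fix applied in the later entry): make the external file's last expression a reference to an array of anonymous subroutines, and load it with do, which evaluates the file and returns that last expression.

# policies.pl -- the file's last expression is an array reference
# holding the anonymous subroutines.
[
    sub { my ($utility) = @_; $utility },        # keep as-is
    sub { my ($utility) = @_; $utility * 0.5 },  # degraded copy
];

# In the main script:
my $policies = do './policies.pl'
    or die "can't load policies: " . ($@ || $!);
print $policies->[1]->(10), "\n";  # prints 5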

Posted by josuah at 3:14 AM UTC+00:00 | Comments (0) | TrackBack

November 6, 2002

TiVo Nation

I just got done with my weekly meeting with Ketan. Tyler Johnson did reply to my email but his only response was to schedule another meeting for November 25th at MCNC. So I don't have anything to do between now and then unless the VQM Software is made available for Win32.

So we started talking about what else I might be able to do between now and then. Ketan's had a noticeable obsession with TiVo and a new cache and distribution scheme he calls TiVo Nation. The basic idea is to use a combination of different caching policies based on local degradation with possibilities of reconstruction or P2P caching. In other words, store more things in your cache at lower quality, with the possibility of grabbing stuff out of your neighbors' caches and maybe restoring things to the original quality by combining what you get from other people with what you have.

Anyway, ICME 2003 has put out a call for papers with a December 15 deadline. The paper is only supposed to be four pages. So Ketan's idea was to do some cache policy versus user utility comparisons, given a particular quality/space tradeoff, in an effort to figure out at what point it makes sense to use different caching policies. This is regardless of the P2P or reconstruction aspects described above. This could be simulated with fairly simple mathematical models in a relatively quick manner, and produce enough content for a four page paper.

There are a few different input parameters to consider when putting this model together. There is a quality versus space curve that represents the compression capabilities of an arbitrary codec. There is a quality versus utility curve that represents how much an object is worth to the user at a given quality. And there is a content utility distribution (think bell curve) that represents how much an object is worth to the user based on its content. Two additional parameters are needed: the object arrival rate and the object consumption rate, representing the broadcast rate and the user's deletion rate. Some additional variance is added by giving the content and/or quality versus utility curves some margin of error, and by using non-uniform consumption rates.
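
As a purely illustrative Perl sketch of how these pieces fit together (the curve shapes and names below are invented for illustration; what the real curves should look like is exactly the open question):

use strict;
use warnings;

# Quality versus space: fraction of full size a codec needs to store
# an object at quality $q in (0..1). Invented shape.
my $space_of_quality = sub { my $q = shift; $q ** 2 };

# Quality versus utility: diminishing returns. Invented shape.
my $utility_of_quality = sub { my $q = shift; sqrt($q) };

# Content utility distribution: bell-like curve over content value.
my $content_utility = sub {
    my $x = shift;
    exp(-(($x - 0.5) ** 2) / 0.02);
};

# An object's perceived utility combines content worth and stored
# quality; the margin of error perturbs the policy's estimate.
sub estimated_utility {
    my ($content, $quality, $margin) = @_;
    my $true = $content_utility->($content) * $utility_of_quality->($quality);
    return $true * (1 + $margin * (2 * rand() - 1));
}

printf "size=%.2f utility=%.3f\n",
    $space_of_quality->(0.8), estimated_utility(0.5, 0.8, 0.1);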

Anyway, I like Perl and that's what I figure I'll use to put this together. If that proves too slow, then I'll use embedded Perl to compile a C executable that will still let me parse those input parameters extremely easily from plain text files. At least, that's the plan for now.

Posted by josuah at 9:17 PM UTC+00:00 | Comments (0) | TrackBack

November 1, 2002

No Chord Test Collaborator

Well, I asked all the Duke students who attended today's TONIC meeting whether any of them would be interested in running some Chord tests on the Modelnet running at Duke. Not too surprisingly, none of them were.

Posted by josuah at 8:20 PM UTC+00:00 | Comments (0) | TrackBack
