December 30, 2002

PVR Paper Submitted

I just submitted "Evaluating the Effectiveness of Automatic PVR Management" to ICME 2003 and faxed the copyright form to them. It turns out our paper number was 1442, not 1448 as I thought. So everything should be all set. Now I'm going to put the paper up here as well.

Posted by josuah at 6:08 PM UTC+00:00 | Comments (0) | TrackBack

December 28, 2002

Evaluating the Effectiveness of Automatic PVR Management

That's the long title of the paper Ketan and I are submitting to ICME 2003. Ketan emailed me earlier today with his revisions and formatting changes, and I did some spellchecking and minor clean-up. Some parts of the paper look kind of squished, but it fits in 4 pages now.

Unfortunately, I'm having trouble submitting it to ICME 2003; it looks like our submission was deleted since we hadn't uploaded a paper yet. I'm emailing Ketan about this now because I don't know which review category he wanted to submit this to.

Posted by josuah at 7:47 PM UTC+00:00 | Comments (0) | TrackBack

December 24, 2002

ICME 2003 Submission

I just finished the figures and wrote up the model and results sections for the TiVo Nation paper Ketan and I have been working on for the past couple of months. We are submitting it to ICME 2003. The deadline is December 31, so we still have some time to polish it up. Ketan is out of town right now, but he's going to look it over and let me know if there's anything else I need to add or change. If not, we're pretty much all set to go. It's five pages right now and the limit is four, so Ketan is going to have to cut some of the fluff out of his introduction. There's not much I can do to cut down the parts I wrote. I'll add a link to the paper on my home page and list it on my resume once everything is finalized.

The basic conclusion is that if you are picky enough or don't know about enough of the shows, then an automatic PVR can do much better than you could on your own.

Posted by josuah at 10:42 PM UTC+00:00 | Comments (0) | TrackBack

December 22, 2002

XFIG Plots

With the correct simulation data, I used Gnuplot to create a 3-D graph of content utility distribution (CUD) 10, decay 0.975, and consumption 8, along with the corresponding human-automatic intersection, and dumped it out to XFIG format. I then made graphs of the CUDs at decay 0.975 and consumption 8, the decays at CUD 10 and consumption 8, and the consumption rates (2, 4, 6, 8, 12) at CUD 10 and decay 0.975. I'm going to email these off to Ketan to see what his thoughts are.
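For my own reference, the Gnuplot commands looked roughly like this (the data file name and column layout here are made up for illustration, but the fig terminal is what produces the XFIG output):

    # Sketch only: hypothetical data file with columns awareness, error, utility.
    set terminal fig                      # write XFIG output
    set output "cud10-d0.975-c8.fig"
    set xlabel "awareness"
    set ylabel "error"
    set zlabel "utility"
    splot "cud10-d0.975-c8.dat" using 1:2:3 with lines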

Posted by josuah at 10:01 PM UTC+00:00 | Comments (0) | TrackBack

Another Error Calculation Error

I found another big problem in the automatic policy utility error calculations. Instead of returning an error value between -1.0 and +1.0 (e.g. +/- 0.05 for 5% error), I was returning error values one hundred times greater: I had coded the error margins as integers instead of the fractions they should be. So my +/- 50% error was adding or subtracting up to 50 from the utility value, which is only supposed to range from 0.0 to 1.0 (or -0.5 to +1.5 with a 50% error range). So once again, I'm rerunning the simulation.
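For future reference, here's a minimal sketch of the corrected calculation (the names are made up; the real simulator code is structured differently). The key points are that the margin is a fraction and that it gets applied additively:

    #include <cstdlib>

    // Sketch only: perturb a utility in [0.0, 1.0] by a +/- margin.
    // margin must be a fraction (0.05 for 5% error), not an integer percentage.
    double perceived_utility(double utility, double margin)
    {
        // uniform random error in [-margin, +margin]
        double err = margin * (2.0 * std::rand() / (double)RAND_MAX - 1.0);
        return utility + err;  // with margin 0.5 this spans -0.5 to +1.5
    }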

Posted by josuah at 12:05 AM UTC+00:00 | Comments (0) | TrackBack

December 20, 2002

Rerunning Simulation

It turns out I accidentally had the error utility calculation computing "utility * error" instead of "utility + error". That might explain why the automatic policy always had the same average real utility regardless of the error range. So I'm rerunning the simulation with that fix. Once that's done, I'll pass things through my extract and split programs, and then the intersect program, to find the nicest CUD x decay pair for the 3-D graph and the consumption rates.

Posted by josuah at 3:57 AM UTC+00:00 | Comments (0) | TrackBack

December 19, 2002

TiVo Intersect Tool

I just finished writing a quick and dirty Perl script to spit out the (awareness x error) points from a set of human and automatic policy utility data points. Now I'm running a larger set of simulations with +/- 5%-50% error in 5% increments and 1%-15% awareness in 1% increments, using the existing content utility distributions (CUDs) and decay rates at a consumption rate of 8. Once that's done, I'll pick a good pair of CUD and decay to run with other consumption rates. I haven't decided which other four consumption rates to use, but I think 2, 4, 12, and 16 might be good. At half an hour per show, those correspond to a person watching 1, 2, 6, and 8 hours of television a day.

Posted by josuah at 7:30 PM UTC+00:00 | Comments (0) | TrackBack

December 18, 2002

Mac OS X Open Mash Testing

I finally got around to making some possible bug fixes for the performance and driver issues a few people have reported with Open Mash on Mac OS X.

I made a one-line change to audio-macosx.cc to increment the available-bytes counter as each input byte is stored in the ring buffer, instead of updating it all at once after all the bytes have been dumped in. This may or may not improve performance under Mac OS 10.1, but I hope it does. Perhaps the input buffer was really large, so all this playing with memory was taking a while. But if it's the resampling and filtering that is causing the slowdown, then this isn't going to make much difference. In that case, I'll have to break down the resampling and ring buffer population into two or more iterations. That could be the solution. I've sent my changed file off to Claudio Allocchio and Denis DeLaRoca to see if they notice any improvement under Mac OS 10.1.5.
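The gist of the change, sketched from memory rather than copied from audio-macosx.cc (the variable names are illustrative):

    // Before: copy all the bytes, then bump the counter once at the end:
    //     for (int i = 0; i < n; i++) { ring[head] = in[i]; head = (head + 1) % size; }
    //     available += n;
    // After: make each byte visible to the consumer as soon as it lands,
    // so playback can start draining before the whole input is copied.
    for (int i = 0; i < n; i++) {
        ring[head] = in[i];
        head = (head + 1) % size;
        available++;  // incremented per byte instead of once at the end
    }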

I also added a VDSetCompression call to video-macosx.cc to hopefully address the change in the IOXperts FireWire webcam driver. I've sent this to Claudio to try out, and also to Paolo Barbato because it might make IOXperts' USB webcam driver work as well. Apple's API and abstraction are very nice, in that I don't even have to care what's hooked up, because everything goes through a Video Digitizer; I just have to make sure my API calls are correct. If this change doesn't fix things, I'll have to ask on the IOXperts beta discussion lists.
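For my own notes, the call looks roughly like this; I'm writing this from memory of the QuickTime headers, and the parameter values below are placeholders rather than the ones actually in video-macosx.cc:

    #include <QuickTime/QuickTimeComponents.h>

    // Sketch only: ask a video digitizer for a particular compression setup.
    // All parameter values here are illustrative placeholders.
    VideoDigitizerError set_compression(VideoDigitizerComponent vd)
    {
        Rect bounds = { 0, 0, 240, 320 };  // top, left, bottom, right
        return VDSetCompression(
            vd,                  // the digitizer behind whatever camera is attached
            0,                   // compressType: 0 lets the digitizer pick
            0,                   // depth: 0 for the default
            &bounds,             // desired frame bounds
            codecNormalQuality,  // spatial quality
            codecNormalQuality,  // temporal quality
            0);                  // key frame rate: 0 for the default
    }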

Posted by josuah at 11:33 PM UTC+00:00 | Comments (0) | TrackBack

TiVo Utility Graphs

I just got back from a meeting with Ketan where we talked about where to go from here, now that we have some good results. The basic idea is to plot utility (Z-axis) as a function of human awareness (X-axis) and automatic error (Y-axis). The intersection of the human and automatic curves should create a line that indicates the point where a human does better than a heuristic policy (and vice versa). The idea now is to make three 2-D graphs of human versus automatic: one varying five content utility distribution (CUD) functions, one varying five time decay values, and one varying five consumption rates. And also a 3-D graph providing an example of the X-Y-Z surface I described above.

Ketan's already made quite a bit of progress on the write-up. My job now is to create a program that, given the two data sets, computes the intersection to create the 2-D graphs. The five existing CUDs are good, but we want more data points under 10% human awareness, since that appears to be where the human policy always does better. So maybe 1%-15% will be our awareness range. We'll also push error up to as much as +/-50%, since +/-25% doesn't always seem to show much difference. I also need to find a good fixed point of CUD and decay for the 3-D graph, which will also be the fixed point used for the five consumption rates.
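Here's a sketch of what that program might look like (the data layout is invented for illustration; the real files will need parsing): for each automatic error level, scan up the awareness axis until the human curve first matches or beats the automatic utility, and emit that (awareness, error) point.

    #include <cstdio>
    #include <map>

    // Sketch only: human maps awareness -> avg utility,
    // automatic maps error -> avg utility.
    void intersect(const std::map<double, double>& human,
                   const std::map<double, double>& automatic)
    {
        std::map<double, double>::const_iterator e, a;
        for (e = automatic.begin(); e != automatic.end(); ++e) {
            for (a = human.begin(); a != human.end(); ++a) {
                if (a->second >= e->second) {
                    // smallest awareness where the human overtakes automatic
                    std::printf("%g %g\n", a->first, e->first);
                    break;
                }
            }
        }
    }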

Posted by josuah at 7:56 PM UTC+00:00 | Comments (0) | TrackBack

December 14, 2002

Power Restored

It's been a long time since I last put in an entry because I only just got power restored. An ice storm hit Wednesday night last week, and I was without power until now. The short of it is that I haven't worked on the NCNI project since my last entry, but Ketan and I have done a lot of rethinking and remodeling of the TiVo Nation simulator.

Our first and second attempts ran into problems because we didn't accurately consider several things, and we used parameters that really don't matter at all (although they seemed somewhat important before). Right now, I'm changing the model so that we don't care about compression, and are instead only interested in the error level at which a fully aware system performs better than a semi-aware human. We also did not include a utility decay factor in my previous simulations, although I remember we talked about that a long time ago; that's why I have always had a timestamp attached to objects. Without this decay making old shows less useful, the cache simply becomes saturated at some point, since a user can never watch as many shows as could be recorded.
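Concretely, the decay I'm adding looks something like this (a sketch; the exact form in the simulator may end up different), with the timestamps giving each show an age:

    #include <cmath>

    // Sketch only: geometric decay of a show's utility with age,
    // e.g. decay = 0.975 per time step, age = now - timestamp.
    double decayed_utility(double base_utility, double decay, int age)
    {
        return base_utility * std::pow(decay, age);
    }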

So, I'm going to do some runs this weekend and meet with Ketan on Monday to discuss things. I think we'll get some good results this time. The submission date has been pushed back to December 31, so we have some time left to do this right.

Posted by josuah at 7:37 AM UTC+00:00 | Comments (0) | TrackBack
