May 24, 2012

The Importance of Development Documentation

Overview

Lately I've found myself harping on the importance of documenting code, program execution, and SCM items (e.g. JIRA issues and Perforce changelists). Documentation can be a controversial topic, particularly when mixing people from opposite camps on the subject. It has even been referred to as a philosophical difference.

Typical arguments against producing documentation for internal consumption tend to fall into the following two categories:

  • The documentation is superfluous with respect to the source code.
  • The resources spent producing documentation are better spent elsewhere.

While I could continue to espouse the benefits of good documentation, in many ways the discussion reduces to a he-said/she-said disagreement. So instead of proselytizing I will provide scientific evidence in support of documentation. It is not a difference of philosophy.

The majority of evidence presented here applies to software developers, but analogous benefits apply to anyone involved in the development process, including QA, technical writers, and anyone else who may need to synthesize information about the product. Only evidence indicated as statistically significant is included.

This essay will not cover the benefits of clean code although those benefits may be discussed in the referenced papers. For more on clean code, see Clean Code: A Handbook of Agile Software Craftsmanship [Google Books] or Writing clean code [IBM developerWorks].

The Importance of Comprehension

To identify the value of documentation experimentally, a metric must be defined. For documentation that metric is comprehension, along with the benefits that result from improved comprehension.

Debugging Efficiency

Leo Gugerty and Gary M. Olson. 1986. Comprehension Differences in Debugging by Skilled and Novice Programmers. In Papers presented at the first workshop on empirical studies of programmers on Empirical studies of programmers, Elliot Soloway and Sitharama Iyengar (Eds.). Ablex Publishing Corp., Norwood, NJ, USA, 13-27.

Gugerty and Olson conducted an experiment to determine differences in debugging skill between novice and expert programmers. Experts were able to identify and fix the bugs in less than half the time (18.2m/17.3m for novices, 7.0m/9.3m for experts), with fewer attempts (4.5/2.2 for novices, 1.9/1.1 for experts), and with a lower probability of introducing new bugs (23%/30% for novices, 17%/0% for experts). The results indicated this was largely because experts generated high-quality hypotheses with less study of the code, which in turn stemmed from their superior ability to comprehend the program.

Murthi Nanja and Curtis R. Cook. 1987. An analysis of the on-line debugging process. In Empirical studies of programmers: second workshop, Gary M. Olson, Sylvia Sheppard, and Elliot Soloway (Eds.). Ablex Publishing Corp., Norwood, NJ, USA, 172-184.

Nanja and Cook studied differences in the debugging process of expert, intermediate, and novice programmers and measured their performance when debugging. Their results support the conclusions of Gugerty and Olson's study: experts relied on superior program comprehension to fix bugs faster (19.8m for experts, 36.55m/56.0m for intermediates and novices), with fewer code changes (8.83 LOC for experts, 10.33/23.16 LOC for intermediates and novices), and without introducing as many new bugs (1 for experts, 2.33/4.83 for intermediates and novices).

Robert W. Holt, Deborah A. Boehm-Davis, and Alan C. Shultz. 1987. Mental representations of programs for student and professional programmers. In Empirical studies of programmers: second workshop, Gary M. Olson, Sylvia Sheppard, and Elliot Soloway (Eds.). Ablex Publishing Corp., Norwood, NJ, USA, 33-46.

Holt et al. examined the correlation between a programmer's perceived difficulty and complexity of code and that programmer's debugging performance. They found a small but significant correlation between debugging time/attempts and the difficulty of finding information (0.235/0.184/0.237) and the difficulty of recognizing program units (0.291/0.177/0.205). A somewhat less significant correlation was found between difficulty in working with the code and time to debug (0.210), and between program formatting being too condensed and the number of debugging transactions (0.197).

Poor comprehension increased the time to fix bugs and correlated with the introduction of new bugs or incorrect fixes.

Systematic Understanding

David C. Littman, Jeannine Pinto, Stanley Letovsky, and Elliot Soloway. 1987. Mental models and software maintenance. Journal of Systems and Software 7, 4 (December 1987), 341-355. DOI=10.1016/0164-1212(87)90033-1 http://dx.doi.org/10.1016/0164-1212(87)90033-1

Littman et al. analyzed the development process of experienced programmers tasked with modifying a program and identified two strategies for understanding programs.

  1. Systematic developers trace data and control flow to understand global program behavior. The programmer detects causal interactions between program components and designs a modification taking these interactions into account.
  2. As-needed developers limit the scope of their understanding to the code that must be modified to implement the change. Data flow, control flow, and interactions affected by the modification are unlikely to be discovered.

In their experiment all five developers who used the systematic strategy successfully modified the program while all five developers who used the as-needed strategy failed to modify the program correctly.

Failure to understand global program behavior and interactions between components resulted in incorrect implementation every time.

Code Reuse

Hoadley, C.M., Mann, L.M., Linn, M.C., & Clancy, M.J. (1996). When, Why and How do Novice Programmers Reuse Code? In W. Gray & D. Boehm-Davis (Eds.), Empirical Studies of Programmers, Sixth Workshop (pp. 109-130). Norwood, NJ: Ablex.

Among developers who are predisposed towards code reuse, comprehension influenced both the frequency and the form of reuse. Two mechanisms of reuse were examined:

  1. Direct reuse is calling an existing function from new code.
  2. Cloning is copying code out of an existing function into new code.

An abstract understanding of functions resulted in 65% reuse (both direct and cloned) while only an algorithmic understanding resulted in 12% reuse. Misunderstood functions had low direct reuse of 5% but were reused by cloning 40%.

Code that is not well understood is less likely to be reused. Code that is misunderstood is likely to result in incorrect code.

Improving Comprehension

Beacons

Beacons are key features in code that indicate the presence of a structure or operation and strengthen the reader's hypothesis of functional behavior. They serve as shortcuts towards comprehension; failing to recognize a beacon requires a developer to spend additional time on comprehension.

Susan Wiedenbeck. 1986. Processes in Computer Program Comprehension. In Papers presented at the first workshop on empirical studies of programmers on Empirical studies of programmers, Elliot Soloway and Sitharama Iyengar (Eds.). Ablex Publishing Corp., Norwood, NJ, USA, 48-57.

Wiedenbeck's experiments found that experienced programmers were able to recall 77.75% of the beacons versus 47.50% of the non-beacons in the code while novices only recalled 13.83% of the beacons and 30.42% of the non-beacons.

Martha E. Crosby, Jean Scholtz, and Susan Wiedenbeck. 2002. The Roles Beacons Play in Comprehension for Novice and Expert Programmers. In 14th Workshop of the Psychology of Programming Interest Group, Brunel University, 18-21.

Comment beacons indicative of functionality are quickly processed by experienced programmers. Pure code beacons (i.e. important lines of code) require more time to process and might benefit from comprehension aids.

Edward M. Gellenbeck and Curtis R. Cook. 1991. An Investigation of Procedure and Variable Names as Beacons During Program Comprehension. Technical Report. Oregon State University, Corvallis, OR, USA.

Gellenbeck and Cook found that meaningful procedure and variable names resulted in higher rates (52% and 74%) of correct behavior identification compared to combinations with neutral procedure and variable names. However, this still leaves a large percentage of incorrect identification (48% and 26%) for undocumented source code.

Add documentation beacons (comments, mnemonic hints, or whitespace and formatting) to highlight important operations and logical concepts; this speeds up comprehension and helps ensure it is correct.
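
As an illustration, here is a minimal C++ sketch (the names and the sorting routine are made up for this example, not taken from any of the studies above) where a comment and meaningful identifiers act as beacons for a familiar operation:

#include <cstddef>
#include <vector>

struct Account { double balance; };

// Sort accounts by balance, largest first. The exchange below is the
// classic swap idiom; the comment and the descriptive names act as
// beacons so a reader recognizes the operation without tracing the
// temporaries line by line.
void sortByBalanceDescending(std::vector<Account>& accounts)
{
    for (std::size_t i = 0; i + 1 < accounts.size(); ++i) {
        for (std::size_t j = i + 1; j < accounts.size(); ++j) {
            if (accounts[j].balance > accounts[i].balance) {
                Account temp = accounts[i];   // swap accounts[i] and accounts[j]
                accounts[i] = accounts[j];
                accounts[j] = temp;
            }
        }
    }
}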

Plausible Slot Filling

Stanley Letovsky. 1986. Cognitive Processes in Program Comprehension. In Papers presented at the first workshop on empirical studies of programmers on Empirical studies of programmers, Elliot Soloway and Sitharama Iyengar (Eds.). Ablex Publishing Corp., Norwood, NJ, USA, 58-79.

Plausible slot filling is an attempt to explain an unknown based on existing incomplete knowledge. It is a result of abductive inference (http://en.wikipedia.org/wiki/Abductive_inference), i.e. guessing, where one tries to explain something through reversed logical deduction. In other words:

if "Q" and "P implies Q" then "maybe P"

The deduction may be incorrect. In Letovsky's experiment a developer incorrectly guessed that a memory allocation within a database function was for a database record. In another example the developer did not immediately understand why only six elements were displayed when the record array contained seven elements.

Document background information and the purpose of code to prevent incorrect conclusions, even when the issue appears isolated or minor.
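
A small C++ sketch of this recommendation, loosely modeled on Letovsky's seven-element example. The sentinel rationale here is invented purely for illustration; the point is that the comment removes the need to guess:

#include <array>
#include <cstdio>

// The record array holds seven slots, but the seventh is a sentinel used
// internally for bounds checking and is intentionally never displayed.
// Without this note a reader is left to "plausibly" guess why only six
// elements appear on screen.
constexpr int kRecordSlots = 7;
constexpr int kDisplayedRecords = kRecordSlots - 1;

void displayRecords(const std::array<int, kRecordSlots>& records)
{
    for (int i = 0; i < kDisplayedRecords; ++i) {
        std::printf("record %d: %d\n", i, records[i]);
    }
}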

Program-Dependent Items

Mark Thomas and Stuart Zweben. 1986. The Effects of Program-Dependent and Program-Independent Deletions on Software Cloze Tests. In Papers presented at the first workshop on empirical studies of programmers on Empirical studies of programmers, Elliot Soloway and Sitharama Iyengar (Eds.). Ablex Publishing Corp., Norwood, NJ, USA, 138-152.

A cloze test is a comprehension and vocabulary test where words are removed from a larger body of text. Removed items fall into one of two categories:

  1. Program-independent items can be resolved correctly without understanding the functionality (e.g. by process of elimination or to meet compilation requirements).
  2. Program-dependent items require functional understanding for correct resolution.

In the tests conducted by Thomas and Zweben, error rates for program-dependent items were 41.14%/32.11%, compared with only 12.41%/5.75% for program-independent items. Stated differently, participants had a much harder time deciphering the correct meaning of the code when program-dependent information was missing.

Document considerations (global, external, state) to reduce the chance of incorrect conclusions due to missing context.
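
For example, a short comment can capture global or state context that a reader cannot recover from the function alone. This is a hypothetical C++ sketch; the names and the configuration scheme are assumptions made for illustration:

#include <string>

// Program-dependent context a reader cannot infer from cachePathFor()
// alone: gCacheDir is set exactly once at startup by a hypothetical
// loadConfig() and is guaranteed non-empty afterwards, which is why no
// fallback path is needed below.
std::string gCacheDir = "/var/cache/example";

std::string cachePathFor(const std::string& fileName)
{
    // Safe: gCacheDir is never empty after startup (see note above).
    return gCacheDir + "/" + fileName;
}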

Abstract Comprehension

Hoadley, C.M., Mann, L.M., Linn, M.C., & Clancy, M.J. (1996). When, Why and How do Novice Programmers Reuse Code? In W. Gray & D. Boehm-Davis (Eds.), Empirical Studies of Programmers, Sixth Workshop (pp. 109-130). Norwood, NJ: Ablex.

Experiments found that students having difficulty summarizing code were less likely to reuse code. Additionally, abstract comprehension resulted in 65% function reuse versus 12% function reuse with only algorithmic comprehension. Code that was not understood either abstractly or algorithmically was cloned 40% of the time which likely resulted in incorrect code.

Documentation should be written towards both abstract and algorithmic comprehension to increase code reuse and prevent incorrect code cloning.
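
One way to do this is to give a routine both an abstract summary (what it is for) and algorithmic notes (how it works). A minimal C++ sketch with hypothetical names:

#include <algorithm>
#include <vector>

// Abstract: returns the median of the samples; callers use it to pick a
// robust threshold that ignores outliers.
// Algorithm: partially sorts a copy around the middle element with
// std::nth_element (average O(n)) and returns that element. For an even
// count this returns the upper middle element.
// Precondition: samples is non-empty.
double medianOf(std::vector<double> samples)
{
    std::nth_element(samples.begin(),
                     samples.begin() + samples.size() / 2,
                     samples.end());
    return samples[samples.size() / 2];
}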

Encouraging Documentation

While the benefits and mechanisms of improved development documentation may be clear, it is also important to take actions that actually result in this documentation being produced.

Herb Krasner, Bill Curtis, and Neil Iscoe. 1987. Communication breakdowns and boundary spanning activities on large programming projects. In Empirical studies of programmers: second workshop, Gary M. Olson, Sylvia Sheppard, and Elliot Soloway (Eds.). Ablex Publishing Corp., Norwood, NJ, USA, 47-64.

Krasner et al. conducted an informal analysis of the communication issues affecting large programming projects and identified areas in which the culture and environment discouraged effective communication. These areas include communication skills, incentive systems, representational formats, rapid change, jargon, information overload, scheduling pressure, and peer/management expectations.

Encouraging the production of documentation and effective communication must be accomplished through a combination of peer pressure and management behavior.

  • Hire or foster developers with high communication and technical competence who exhibit an attitude of egoless programming.
  • Reward documentation, communication, and long-term goals instead of short-term performance.
  • Use similar/standard documentation formats and minimize the use of jargon.

Posted by josuah at 12:55 AM UTC+00:00 | Comments (0) | TrackBack

April 11, 2012

Software Development Best Practices

Here are some development process best practices and their benefits. Of course, best practices are generally good ideas but are not rules.

Document While Coding

Comments explaining the purpose and operation of code should be written while coding. Special cases and unexpected workarounds should be fully explained. Functions should have their behavior, input and output parameters, return values, and any special considerations documented when declared.

  • Helps ensure design and implementation are fleshed out.
  • Encourages developers to identify edge and error cases up front.
  • Documents intention and purpose versus implementation, which prevents bugs when coding or refactoring.
  • Educates other developers about why something was implemented a specific way, which they can apply to their own code and future development.
  • Simplifies maintenance and refactoring by ensuring behaviors and functional contracts are maintained.
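
As a sketch of what this looks like in practice, here is a small hypothetical C++ function documented at declaration time with its behavior, inputs, outputs, errors, and special considerations:

#include <cstddef>
#include <stdexcept>
#include <string>

// Parses a TCP/UDP port number from text.
//
// Input:  text - a decimal string such as "8080".
// Output: returns the port, always in the range 1-65535.
// Errors: throws std::invalid_argument or std::out_of_range if text is
//         not a number; throws std::invalid_argument if the value is
//         outside the valid port range.
// Note:   port 0 is rejected on purpose; callers treat 0 as "unset".
int parsePort(const std::string& text)
{
    std::size_t consumed = 0;
    const int value = std::stoi(text, &consumed);
    if (consumed != text.size() || value < 1 || value > 65535) {
        throw std::invalid_argument("invalid port: " + text);
    }
    return value;
}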

Refactor Now Instead of Later

When adding new code or making changes to existing code, spend the time and effort to refactor now rather than deferring it in exchange for finishing sooner. This may require you to touch more code than you would like, or to make minor changes to code unrelated to your specific change or feature addition. Do not copy/paste code to avoid refactoring.

  • Prevents duplicate code or logic; otherwise every duplicate must be found and updated in the event of a bug or feature addition.
  • Making minor changes to reuse existing code decreases the amount of new untested code.
  • Avoids duplication of effort when refactoring occurs later.
  • Leverages tests of existing code and prevents duplicate test effort for duplicate logic.
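
A tiny hypothetical C++ example of the idea: rather than cloning validation logic into a second call site, extract a shared helper so both paths reuse (and test) the same code:

#include <string>

// Shared helper extracted during refactoring. Both the signup path and
// the profile-edit path call it, so a validation bug is fixed in exactly
// one place instead of in two cloned copies.
bool isValidEmail(const std::string& address)
{
    const auto at = address.find('@');
    return at != std::string::npos && at > 0
        && address.find('.', at) != std::string::npos;
}

bool canCreateAccount(const std::string& email) { return isValidEmail(email); }
bool canUpdateProfile(const std::string& email) { return isValidEmail(email); }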

Unit Test Before Committing

A bug fix or new feature should not be considered complete until unit tests for that code are also complete. Unit tests should exercise both correct and erroneous code paths and incorporate stress tests if possible.

  • Regression tests already exist in the event of future code changes.
  • Ensures code is functionally and logically correct before it is made available to other developers.
  • Encourages developers to identify edge and error cases up front.
  • Encourages developers to consider the use cases and usability of new code up front.
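
A minimal sketch of such a test in C++, using plain assert rather than any particular test framework, and exercising both the correct and the erroneous paths of the hypothetical parsePort() from the earlier sketch:

#include <cassert>
#include <stdexcept>
#include <string>

int parsePort(const std::string& text);  // function under test (see earlier sketch)

int main()
{
    // Correct path: a valid port parses to the expected value.
    assert(parsePort("8080") == 8080);

    // Erroneous path: an out-of-range value must throw.
    bool threw = false;
    try { parsePort("70000"); } catch (const std::exception&) { threw = true; }
    assert(threw);

    // Erroneous path: non-numeric input must throw.
    threw = false;
    try { parsePort("abc"); } catch (const std::exception&) { threw = true; }
    assert(threw);

    return 0;
}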

Write Functional Code

Favor functional code over procedural code. Functional code is code where the return value and any output parameter values depend only upon the input parameter values. Procedural code depends on information other than the input parameters or has side effects that affect other code.

  • Ensures changes to the implementation of a function do not violate the behavior expected by callers.
  • Function behavior changes require the parameters to change, forcing all callers to update their use of the function and therefore be aware of the behavior change.
  • Prevents multiple threads from accidentally interfering with each other due to shared state.
  • Functional code is by definition thread-safe.
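
A short C++ illustration of the difference (the names are hypothetical):

#include <vector>

// Procedural style: depends on and mutates shared state, so the result
// varies with call order and is unsafe across threads.
static double gRunningTotal = 0.0;
void addToTotal(double amount) { gRunningTotal += amount; }

// Functional style: the result depends only on the inputs and there are
// no side effects, so the behavior is fully described by the signature.
double sum(const std::vector<double>& amounts)
{
    double total = 0.0;
    for (double amount : amounts) {
        total += amount;
    }
    return total;
}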

Write Self-Contained Code

Do not require users of a library, module, or class to understand its inner workings or state. This means all state must be private, memory management must be handled internally, and all synchronization must be implemented internally.

  • Prevents external code dependencies or logic from requiring changes in the event of a bug or feature addition.
  • Prevents memory leaks due to unclear ownership of allocated memory (smart pointers can be used to transfer ownership).
  • Ensures state cannot be tampered with or made accidentally inconsistent.
  • Ensures synchronization logic is properly scoped and balanced.
  • Allows modules, libraries, or classes to be declared thread-safe.
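
A small hypothetical C++ sketch of a self-contained class: all state is private and synchronization is handled internally, so callers need no knowledge of either:

#include <mutex>
#include <string>
#include <unordered_map>

// A self-contained cache: callers never see the map, the mutex, or any
// ownership details; all state and synchronization stay inside the class.
class NameCache {
public:
    void put(int id, const std::string& name) {
        std::lock_guard<std::mutex> lock(mutex_);
        names_[id] = name;
    }

    // Returns an empty string when the id is unknown.
    std::string get(int id) const {
        std::lock_guard<std::mutex> lock(mutex_);
        const auto it = names_.find(id);
        return it == names_.end() ? std::string() : it->second;
    }

private:
    mutable std::mutex mutex_;
    std::unordered_map<int, std::string> names_;
};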

Create Immutable Objects

Immutable objects have all state information set upon construction and this information cannot be changed. Whenever possible, objects or state containers should be made immutable. Changes to state information require the creation of a new instance.

  • Guarantees internal state cannot become inconsistent.
  • Immutable objects are thread-safe.
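
A minimal hypothetical C++ example: all state is fixed at construction, and a "change" produces a new instance instead of mutating the existing one:

#include <string>
#include <utility>

class UserName {
public:
    UserName(std::string first, std::string last)
        : first_(std::move(first)), last_(std::move(last)) {}

    const std::string& first() const { return first_; }
    const std::string& last() const { return last_; }

    // "Changing" the last name returns a new immutable instance.
    UserName withLast(std::string newLast) const {
        return UserName(first_, std::move(newLast));
    }

private:
    const std::string first_;
    const std::string last_;
};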

Resource Acquisition Is Initialization

Use resource acquisition is initialization, or RAII. Acquire resources and initialize instances in constructors; release resources and clean up in destructors.

  • Promotes exception-safe and thread-safe code.
  • Ensures initialized state before use and automatic cleanup when instances are destroyed.
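
A short C++ sketch of RAII around a C-style file handle (a hypothetical class using only standard C I/O calls):

#include <cstdio>
#include <stdexcept>
#include <string>

// RAII wrapper: the file is acquired in the constructor and released in
// the destructor, so it is closed on every path, including exceptions.
class LogFile {
public:
    explicit LogFile(const std::string& path)
        : file_(std::fopen(path.c_str(), "a")) {
        if (file_ == nullptr) {
            throw std::runtime_error("cannot open " + path);
        }
    }

    ~LogFile() { std::fclose(file_); }

    LogFile(const LogFile&) = delete;             // single owner only
    LogFile& operator=(const LogFile&) = delete;

    void write(const std::string& line) {
        std::fputs((line + "\n").c_str(), file_);
    }

private:
    std::FILE* file_;
};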

Use Detailed Changelist Descriptions

When committing a code change, a detailed description of the change should be included. This description should cover how the change was implemented, what problem is fixed or what feature is added, and, in the case of a fix, why the previous code was erroneous.

  • People can understand why the code change was done the way it was done.
  • A changelog can be produced easily from changelist descriptions.
  • Repository watchers will receive a notification email that contains all the necessary information.
  • Helps inform testing effort for the code change.
  • Educates other developers about the specific issue, which they can apply to their own code and future development.

Explain Bugs

In some cases the reporter can fully describe the root cause of a bug, but in many cases only the resulting erroneous behavior is known. When the root cause of a bug is uncovered, it should be fully documented in the bug tracker. (When appropriate, this may be handled by copying the source control changelist description into the bug tracker issue.)

  • People (including the reporter) can understand what went wrong.
  • Helps inform testing effort for the specific bug or related areas.
  • Educates other developers about the specific issue, which they can apply to their own code and future development.

Explain Non-Issues and Won't Fixes

When a bug or feature request is marked non-issue or won't fix, a detailed explanation of why it is not a problem or won't be fixed should be included, along with any business or technical reasons.

  • Educates developers on intended behavior or technical limitations which they can apply to their own code and future development.
  • Educates testers on intended behavior or technical limitations that can inform test efforts.
  • Prevents others from re-opening the bug or request, or filing duplicates.

Posted by josuah at 8:22 PM UTC+00:00 | Comments (0) | TrackBack

November 12, 2011

Intuit GoPayment's Invasive Signup Procedure

I started looking into Intuit GoPayment which has a service offering very similar to that of SquareUp.

There are some minor differences in the general fee schedule, with GoPayment offering slightly better rates for those who pay a recurring monthly fee but slightly worse rates than SquareUp for those who do not. GoPayment's American Express fee is also higher than SquareUp's. In terms of fees, I think businesses with more volume that primarily use this as their payment method would come out ahead with GoPayment.

However I strongly advise against any business actually signing up with GoPayment.

The GoPayment web site has a signup flow but it only works for individuals. It will ask for your personal social security number. I wanted to open a business account with them using my Federal EIN and business banking accounts. That's when things got ugly.

In order to sign up my business with my EIN there were two primary requirements which were that I own at least 50% of the business and that I am over 18 years of age. I'm not entirely sure how Intuit will handle some businesses where there are multiple owners. Maybe it won't be a problem as long as a majority stake signs some paperwork and they use the business' EIN. However that turned out to only be the tip of the iceberg.

First, even though I was opening a business account, they wanted my personal SSN. To do a credit check. Sorry, that's not okay. I told them I wanted to use my EIN and not my SSN for tax purposes. After the customer rep spoke to someone else, he came back and said okay, but instead they would need additional documentation. That additional documentation turned out to be my profit and loss statements and tax returns for the past two years (or for however long the business had been operational, whichever is shorter). Sorry, that's even more not okay. I am not handing over my private company's P&L statements or tax returns to a merchant processing company.

I should close by saying I am so far very happy with SquareUp and it was extremely easy to sign up with them. I was able to do it from their web site, and I did not have to provide sensitive personal or business financial information to do so. And I have never had to provide that sort of information to any of the other merchant processing companies I have used in the past or for Google Checkout, PayPal, or Amazon Payments.

Posted by josuah at 2:55 AM UTC+00:00 | Comments (0) | TrackBack

June 28, 2010

Asian Tour

[Photo: Cable Car Fog]
I recently went on something of a whirlwind business trip through three countries as part of a project we've been working on at Netflix for a short time now. My trip started off in Hong Kong, then Shenzhen, China, followed by Seoul, Korea and finally Osaka and Tokyo in Japan. It had been almost ten years since I was last in Hong Kong, and it was my first time visiting Japan. I was in Korea last year for business but not in Seoul that time.

Things were pretty hectic in the beginning. We had one day in Hong Kong to acclimate to the time change, but Shenzhen and Seoul were completely filled each day with meetings and travel so there wasn't any free time at all. Mitch and I extended our stay in Tokyo, Japan a little extra though, so we could do some things that we wanted to. I was especially excited about Tokyo because I've wanted to visit Akihabara and Shibuya for a very long time.

In Hong Kong, we went to Lantau Island via the Ngong Ping Cable Car to see the Tian Tan Buddha tourist attraction. I say tourist attraction because when I was there ten years ago the site wasn't so commercialized. The clouds were very low that day, which meant our cable car went right through some dense fog, and walking around at the peak meant walking around through clouds.

[Photo: Interactive Subway Map]
Crossing from Hong Kong into Shenzhen meant going through the China border inspections. It wasn't a big deal, but it is like crossing between countries. (Returning into Hong Kong took much longer.) Shenzhen is pretty much what I expected with small towns, usually containing an obvious main street, based around industrial areas. The factories are what brings workers into Shenzhen and keeps money flowing into that area.

Both Hong Kong and Shenzhen were very hot and humid. My body is not at all accustomed to that sort of environment so I was constantly sweating. I think one day the humidity was listed as 90%, and the temperature was always above 30°C.

After China we flew into Seoul, Korea. I like visiting Korea because I have a friend there that works at Samsung. His English is quite good and we get along well. It happened to be his daughter's 100-day celebration when we were there, and he gave me a cute little rice cake treat. I was also hoping to meet up with someone in Seoul whom I just recently met at Can Jam 2010 when I was exhibiting, but a schedule conflict prevented us from doing so.

[Photo: Interactive Mall Map]
One thing that I really liked in Seoul was the interactive maps. Both the subway and the shopping mall had an interactive map. Using the touchscreen, you could select or search for where you wanted to go, and it would provide detailed animated directions on the map itself for how to get there. This is so much better than the static maps used here in the United States, although I suspect there would be some hesitation about installing expensive maps in U.S. subway systems out of fear of graffiti or vandalism. People, and police officers, appear to be so much nicer, more polite, and more courteous in Korea than in the U.S. (Obviously this is even more true in Japan, where manners are extremely important.)

[Photo: Osaka Police Box]
After Korea, we flew into Osaka, Japan for our last business engagement. This is where it first hit me how expensive things are in Japan. I'd heard and read about things being expensive there, but a fruit plate in the hotel restaurant was more than USD $40, and I found out the waitresses at that restaurant were probably only making about USD $10/hr. I thought food would cost about the same as in big U.S. cities if the pay scale is about the same, but since it is more expensive, and going out for dinners and drinks is such a big part of Japanese culture, people must spend a significant portion of their income on food. The pre-packaged meals at 7-11 are priced around what I usually spend if I'm eating out to lunch at home.

[Photos: Kitties for Sale in Osaka; A Puppy for Sale in Osaka]
Also really expensive are pets. We stopped in a pet store in Osaka, and kittens and puppies are regularly priced over USD $1000 and often close to USD $1500. Some of them were even around USD $3000-$4000. The pet stores were pretty small, and probably had about a dozen or so kittens and puppies each. There was one store that also had some monkeys. No prices were listed on the monkeys; I imagine they might be considered a luxury where if you have to ask, you can't afford it. One thing I noticed though was that all the kittens and puppies were very young. It's a lot easier to sell cute kittens and puppies, and I saw a bunch of girls watching and saying kawaii a lot, but it also makes me wonder what happens to the ones not adopted. If they only keep young ones in the store, the others might be discarded. T_T

After Osaka we went to Tokyo. For a few hours one day Mitch and I took the train to Hakone and went to the Kappa Tengoku onsen. It took about two hours each way by train, and we spent about two hours at the onsen itself. The soaking pool water was very hot. So hot that I immediately started sweating like crazy and my body began tingling all over. I had to get out and shower in cold water once, and also sit mostly out of the pool, in order to cool down. I also got over a dozen bug bites right away. Most of them got bigger and only just started disappearing a couple days ago.

[Photo: Akihabara Maids]
But by far I spent the most time in Akihabara and Shibuya. Akihabara was very exciting for me because of all the shops and the culture. Maid cafés have gotten very popular and there were dozens of maids on the streets handing out flyers and trying to convince customers to enter their shops. We didn't end up going into a maid café though, which was fine by me since I was spending all my time shopping anyway. Although I would have liked to go to one, as well as check out some of the other crazy theme restaurants; I'm not sure where those are though since they're not in Akihabara. I didn't get a chance to check out a love hotel or capsule hotel either.

[Photos: White Album Poster; Angel Beats Poster]
There are a bunch of otaku stores in Akihabara, unsurprisingly. The stores tend to be thin and tall. Only the stores that sell electronics or are like department stores have enough floor space that things don't seem cramped. There were tons of manga, anime, movies and TV shows, figures, video games, and pink stuff. Although when it came to figures and trinkets only the most recent stuff was getting shelf space. I can't read Japanese so manga and anime were pretty much out. Plus, music and videos are super expensive over there. A new release movie on DVD or Blu-ray might be over USD $50. PC and console games are only slightly more expensive than in the U.S. And there is a ton more selection. I picked up a few video games that are only available in Japan including Atelier Rorona, Record of Agarest War, and Agarest Senki Zero; I need to learn how to read Japanese before I can play them though. I would have also gotten Atelier Totori but it was releasing a couple of days after our return flight. I only picked up a couple of music CDs, because at those prices I couldn't just grab stuff that might be good. I did find a Final Fantasy XIII collectors music set though, which I immediately purchased. (Have yet to buy the game though.) Mostly I bought figures to add to my collection: I got some Mari Makinami figures from the new Evangelion 2.0 rebuild; Nagi and Tsugumi from Crazy Shrine Maidens; Ein from Phantom; a couple of Vocaloid Hatsune Miku wind-up music toys; a distorted Rei; and Chocobo and Moogle plushies.

The other thing I spent a lot of money on is clothing. I really like Japanese casual street fashion. The sort of interesting stuff you can't find in the U.S. and gets featured in some video games. Most recently in The World Ends With You, a Nintendo DS game that deals heavily with fashion and takes place in Shibuya, although the store names were changed. (The game itself gets a bit repetitive and collecting all the items would take several play-throughs.) To find the better stuff, I ended up shopping mostly at Jeans Mate in Akihabara and Parco in Shibuya. Individual stores in Parco are relatively small and devoted to a single brand, the clothing selection is limited, and there is usually only a handful of specific styles per brand. Prices at Jeans Mate and some of the stores at Parco tended to start at around USD $30 for a T-shirt. But some of the really high-end stores in Parco sold a single T-shirt for USD $300. Some of the stores had more complex clothing, like jackets, that sold for USD $1000. This despite being something that could be made for a few dollars in material and labor. I limited myself to things that were priced at the lower end, but even then I think I spent more on clothing this one time than I've spent on clothing my entire life so far.

[Photo: WiMAX Kitties]
There were two things that made it more difficult to buy clothing in Japan. First was the extreme leaning towards girls' clothing. There are entire mall buildings that only contain girls' clothes. I would say only about 10% of the stores sold boys' clothing. The two types of stores were also physically segregated in many cases. Only the larger non-boutique stores carried both male and female clothing.

Secondly, the clothes in Japan aren't sized for me. I had to purchase size XL / LL or size 4 (for shirts) and even then it is a tight fit. My feet are 2cm larger than the largest size they stock in shoes and socks. On many occasions I simply couldn't buy the clothes because they didn't sell them in my size. I guess there are a couple of stores that do sell larger clothing, but you have to go find them specially.

I think I could have spent a whole lot more money in Tokyo, both on toys and clothes. And there are still a lot of other things to do and see just in Tokyo itself, never mind the rest of Japan. I'm not much into sight-seeing, but I can imagine myself spending weeks more exploring just Tokyo.

Posted by josuah at 3:37 AM UTC+00:00 | Comments (0) | TrackBack

February 11, 2010

Internet Explorer 8 Script Load Order

I discovered some strange IE8 behavior when loading external scripts. Typically, I list a sequence of scripts in their dependency order. For example, if b.js depends upon a.js, I would list them as follows:

<script src="a.js" type="text/javascript" language="JavaScript" defer="defer"></script>
<script src="b.js" type="text/javascript" language="JavaScript" defer="defer"></script>

In Firefox, Safari, and IE7 this works just fine. However, in IE8 it seems as though b.js gets loaded before a.js. This left me in a bit of a pickle, because I wasn't sure that listing b.js before a.js wouldn't break non-IE8 browsers. The solution I finally decided upon was to merge a.js and b.js into a single JavaScript file. Not the ideal solution, but it works.

Posted by josuah at 2:28 AM UTC+00:00 | Comments (0) | TrackBack

February 7, 2010

HTML5 is a Contradiction

I attended a presentation by Molly.com recently where she gave an overview of what HTML5 is trying to do and some of what it will bring to the "browser as a platform" over the next decade. A lot of what was presented is a welcome improvement over the current situation faced by web developers. But I did have two criticisms, which I voiced and which HTML5 does not currently address.

The decision to no longer version the state of the technology is a mistake. As I understood from the talk, one goal of HTML5 is to make it backwards compatible with everything currently out there and to carry that backwards compatibility forward into future versions. The former is an admirable goal, but the latter is a misguided attempt at making HTML5 future compatible, and technology doesn't work that way.

I brought up the example of the required attribute on form elements. If the required attribute were not part of HTML5 right now, and was later added, then making use of the attribute in future web applications would mean your application is broken in older HTML5 implementations. Since there is no version, you cannot state that your application requires a browser to support a specific version, and you are left with the sorry state of having to ask your runtime environment whether a feature exists or coding for the least common denominator. I argue that the latter is going to maximize compatibility with minimal headache.

Can you imagine if Java or .NET did things this way? Your current Java 1.6 source code would be limited to API provided in Java 1.0 and everything else would be bundled libraries or your own code if you didn't want to maintain crazy special-case code for every new Java API that's come out in the past 10 years.

And this is exactly the problem Android is facing. iApp developers can write something once and know that it will run fine on all iPods, iPhones, and now iPads. This dramatically decreases code complexity and allows developers to focus on the application rather than the platform. Android developers are not able to simply require an OS version. Different phones support a different set of APIs and application developers have to insert special-case code for different hardware. It's a nightmare.

Of course this is the way things are in the world right now. Which web developers are used to and maybe don't think is a big deal. But if you're going to spend the time to change things for the better, you have to do it right.

This leads to my second criticism. Due to the annoying nature of web development right now, where a site like QuirksMode is necessary and you have to test every change in a dozen browser/platform combinations, HTML5 is admirably stepping up and requiring all browsers to implement features exactly the same. The specification is very detailed and the goal is not to leave anything up to chance. Cover all the cases.

But some of the features covered by HTML5 are currently, or may be in the future, features that are provided at a level lower than the application. Mac OS X's widget implementation is a great example of this. You no longer code your own text input widgets, or your own spell check. But HTML5 includes a spell check attribute which, if you want to avoid ambiguity, would require the same implementation in all browsers. As a result, my experience using a web browser on Mac OS X may be wrong. Wrong in the sense that it does not provide the same experience consistently provided by all other user interfaces on my platform.

So you see, HTML5 is a contradiction because it says everything must be strictly defined so all web applications look and behave the same in all browsers and on all platforms, while at the same time saying the feature set is undefined. The former because the engineers are tired of the horrible state of web development, which cannot scale and is a tremendous drain, and the latter because the thinkers don't want to limit themselves or have to come back to the 10-year drawing board again in two years when a new feature is added to deal with advances in technology.

You can't have your cake and eat it too.

Posted by josuah at 10:34 AM UTC+00:00 | Comments (0) | TrackBack

June 30, 2008

LAFF 2008 + Electric Daisy

This year Netflix's film festival-related party was in Los Angeles as part of the LA Film Festival. Luna and I drove down and stayed in a hotel near the beach. We spent some time exploring there, and ate at a theme seafood restaurant (not super great). But probably the most memorable part of exploring was this pet store we found near the hotel that had some kittens and cats for adoption. Of course Luna wanted to bring them home, but we can't take care of any more cats than we already have.

Netflix's party was co-hosted by FOX again, in some expensive house up in the hills. I guess someone actually lives there, but it was available for rent. It has a really great view of Los Angeles, and there was a swimming pool and it was fairly large in comparison to the types of houses that you might find in the area. Luna mostly ate some food, and met Reed for the first time. I danced a little bit but not much. We didn't stay too late.

Since we had gone down to Los Angeles before, this time we went to Universal Studios instead of Disneyland. The park was much more movie-oriented of course, with more shows than rides, so I didn't find it as much fun, but there were certainly a lot of interesting things to see. We did the ride that goes through the park and stage sets. There was a Mummy ride promoting the new Mummy movie. We had a good time for the most part.

At night I went with Greg Orzell to Electric Daisy. That was definitely the most exciting part of my trip. Luna isn't into that sort of music or dancing so she didn't go. It took us a long time to get inside, but it was really great. Tons of people, but not so many that you didn't have room to dance, since it was outdoors at a stadium. Although it was too many if you wanted to try and get in and out of the stadium. I wasn't really dressed the part. I should have worn shorts and a T-shirt instead of slacks and a clubbing shirt. A lot of people were wearing a lot less clothing.

The best set was definitely by BT. His music is upbeat enough to keep the body moving, but intricate and beautiful at the same time instead of just being a bunch of drum 'n bass, jungle, or house. Paul Oakenfold was also there, but I thought his set was just okay. I also remember Paul Van Dyk's set, because he was last and probably the most heavily promoted of the artists. He included a strong laser light show, and it was probably good to place him last because his music is more trance and ambient so it slowed things down a bit. But that also meant it wasn't really the most exciting music to listen to in this party environment.

There was one scary incident during the carnival, when a girl collapsed. I ran to find the local paramedics, but by the time I actually found them someone had already called it in. I'm not sure what ended up happening to her, but I think she was okay when they found her.

Overall a really fun time. I danced pretty much non-stop for around four hours. Massive leg cramps but I danced through those too. :)

Posted by josuah at 6:56 AM UTC+00:00 | Comments (0) | TrackBack

August 24, 2007

Netflix Blogs

It seems like Netflix is jumping onto the blog bandwagon. There are employees who have personal blogs, such as myself and Michael Rubin. Personal blogs, of course, do not represent Netflix even if the company may be mentioned in it. But there is now an official Netflix Community Blog, which is moderated by Rubin, that was created as a way of communicating with customers and the rest of the world in general. Earlier this week, Reed Hastings, CEO of Netflix, started a personal blog as well; but he's quite aware that his blog reflects upon his position as CEO.

Posted by josuah at 7:15 PM UTC+00:00 | Comments (0) | TrackBack

October 31, 2006

I'm On Hacking Netflix

So anyone who delves into Netflix has probably heard about Hacking Netflix. It's not affiliated with the company though; it's run by an individual whom I don't know anything about. My previous reply to Netflix Fan is now featured on Hacking Netflix, although no one seems to care enough to post any comments. :P

I find it a bit amusing that Becky first wrote a blog post about blogging, and then I wrote a blog post about her writing a blog post about blogging, and now Hacking Netflix has a blog post about my blog post about her writing a blog post about blogging. And now I'm writing a blog post about a blog post about my blog post about her writing a blog post about blogging.

I stopped by Netflix Fan again today, and noticed Becky put up really big disclaimers saying none of the links she listed are corporate blogs. Maybe I made her feel a little guilty or something since I kept mentioning that in my reply post, but I didn't mean to. I think anyone would be extremely hard pressed to consider my blog as anything other than a personal site.

Hm. It seems that the TypePad developers don't think URLs should begin with https:// either. So no TrackBack ping for Hacking Netflix either. I should probably check my own ping URL regexp.

Posted by josuah at 5:45 AM UTC+00:00 | Comments (0) | TrackBack

October 25, 2006

A Netflix Blog?

Gary showed me today a post on Netflix Fan asking Why no corporate blog for Netflix? and, as it happens, including a link to my Work & Research blog category. I'm not particularly surprised, as I've received a few emails at my work address from Netflix customers asking various questions. The poster at Netflix Fan, Becky, asks if there is a company policy regarding our blogs.

Well, for starters, this is my personal blog, where I happen to have a work category. So anything I post on here is not something said on behalf of Netflix. My blog also includes posts from when I worked at IBM and also lots of entries concerning research I conducted while attending UNC Chapel Hill. If I worked for company XYZ, you'd also see posts on there that are my personal statements and opinions (e.g. I had fun at the company celebration) and don't in any way represent the company in an official capacity.

With that in mind....

When you ask about official policy, I don't think it's any secret that Netflix is a company that heavily trusts its employees.

At IBM, there were corporate policies that I didn't like because I felt they had a negative impact on the culture. I won't go into details because I don't know if those policies are considered secret or not, although I don't think they are. Other people I talked to about those policies found them perfectly reasonable, especially given the large size of the company where you don't know everyone else and where there are a fair number of contractors moving in and out.

In comparison, Netflix is a small company. A few hundred employees, and each of us can pretty much recognize by sight the other people that work at headquarters (maybe not by name). It's a company that emphasizes personal judgement and mutual trust. You are trusted to exercise good judgement in what you do. So I can have this blog, and it's my responsibility not to reveal secrets or information that might be harmful to the company (assuming it is ethical to withhold that information), and to make it clear that this is my personal opinion and not a company statement.

What I think is really important is that I can trust everyone else at Netflix. I'm not worrying about leaving my iPod out on my desk while I attend meetings, or that someone might try to get ahead at my expense. And it's equally important that this trust extends between all employees, from the CEO to the code monkey (me).

On a technical note, it appears the trackback server used by Netflix Fan doesn't support pings from sites that begin with https:// instead of http://. I couldn't find any contact info for Becky, and didn't want to sign up for an account, and their comment system didn't like my Google account even though it claims to support it. So maybe someone who reads this can let them know all those things aren't working. :P

Posted by josuah at 3:15 AM UTC+00:00 | Comments (2) | TrackBack

June 10, 2006

Netflix 5M Celebration

Today Netflix had a company party to celebrate reaching 5 million subscribers. Shuttle service was provided to Nestldown, a private park-like area in the Santa Cruz mountains. The theme of the party was a carnival, so there were some carnies, carnival games, and food like hot dogs and cotton candy.

I learned how to play Bocce Ball from William, and played against him, his wife, Donna, Tod, and one of the event people (who was also a Netflix subscriber, going through the top 250 movies off IMDB). I did okay, especially at the beginning, but later on I wasn't thinking about it as much so didn't do as well.

Afterwards, Tod and I went down to the pond and listened to some live music from a quartet playing a violin, two guitars, and an accordion. All of the instruments were amped though. I saw Lin's daughter Alexandria there so went up to talk to them. Then Alex wanted me to go with her into the forest so I went along. We walked around in there for a while and were going to go back, but then she changed her mind and started hiking some more. She ended up seeing the train and running after it, and I couldn't get her to stop so we could tell Lin we were going after the train.

We followed the train for a while then stopped in the games field to play some volleyball and frisbee and use the swings. Then we'd been gone for a while so I told her we had to go back, and followed the train tracks back to the pond where she met back up with her dad. Turns out Lin didn't realize we had run off and so went looking for us back at the cabin area.

Afterwards, I met up with Samir and Jamie and we rode the shuttle back down to Netflix HQ and then I drove home.

Posted by josuah at 3:53 AM UTC+00:00 | Comments (0) | TrackBack

February 1, 2006

Sundance Film Festival 06

I got back last night from spending the weekend in Park City, Utah at the end of the Sundance Film Festival 2006. Each year, Netflix gives its employees some money to subsidize a trip to Sundance. I left Friday morning and got back late last night.

On its own, Park City is a sleepy ski resort town. For one week each year, there is an extra inrush of people and things become very crowded and busy. And the atmosphere changes a whole lot too, I imagine, with celebrities and sponsor events drawing a unique type of person. I felt like there were a lot of wannabes and phonies. At least Park City makes a bunch of money with their super-inflated prices for the week.

There are really only two things to do while you're there. You can either watch lots of films (which requires either a significant financial commitment, both for lodging and tickets, or a lot of patience to stand in line for extra seats) or go skiing (which also requires a large financial commitment since lift tickets are a bit expensive). I didn't really want to spend a lot of money, so I didn't do much of either. And as a result, I was kind of bored most of the time.

I didn't see any celebrities, but FOX hosted a party for Netflix that I went to on Saturday. Unfortunately, it was quite loud and my ears were ringing a lot afterwards. That's not good. The party was also a little boring because I didn't know many people, and since it was so loud my throat was strained whenever I spoke. I also didn't much like the music. It was standard fare, but nothing I really like to listen to.

The restaurants are supposedly also not that great, and a bit expensive. Samir and Jamie spent $50 one night on dinner, and said last year they didn't find any good restaurants. I ended up buying groceries and cooking in the hotel room. I also ended up spending time in the hotel room watching stuff on TV and also the first three discs of Samurai Champloo.

The two screenings that I did go to watch were the animation spotlight and the documentary award winner. The first featured about ten short animation films. Most of them were horrible. One guy spent six years camped outside a studio with his wife to make a two minute animated poem. And it wasn't very interesting. I did like, however, the following: Jasper Morello, Gopher Broke, Fumi and the Bad Luck Foot, and Los ABC's ¡Que Vivan Los Muertos!.

The documentary winner was God Grew Tired of Us. Apparently, a number of documentaries have been made about the Lost Boys of Sudan. Maybe they keep making documentaries until someone will pick one up for mass distribution. I remember watching some stuff about them on the news before, so about half of its content wasn't new to me. This latest one was clearly directed by a non-Sudanese person, because a large part of it contains ethnocentric humor.

Posted by josuah at 6:11 AM UTC+00:00 | Comments (0) | TrackBack

January 14, 2006

Netflix Swag (Sundance 2006)

Netflix gave us our preparation kits for this year's Sundance Film Festival, including some show tickets and maps and Park City informational pamphlets. But they also gave us a cool OGIO backpack (Street Sector-Z, with Netflix embroidered on it), a hat and scarf with the Netflix logo, and also a Netflix fleece pullover.

Today was also an open house where friends and family of Netflix employees were welcome to check out the new building we moved to. A lot of family members and children showed up, but not as many as I think they planned for. There is no doubt a lot of leftover champagne and apple cider and sweets.

Samir and I checked out the new theater, because I wanted to see where the speakers would be. The speakers are located in the walls, or within alcoves in the walls, and hidden by the fabric. The projector is above the ceiling acoustic treatment panels. There's a lot of equipment in the rack, from Crestron and other manufacturers, including gear from Lexicon: MC-12, LX-7 and LX-5, and RT-20.

Posted by josuah at 6:44 AM UTC+00:00 | Comments (1) | TrackBack

November 13, 2005

My Foot Hurts

I came over last night to visit Shannon and Yvonne, and we went to a plaza near New Park Mall to spend some time looking around a Chinese store. There were a few things I liked, but they were all more expensive than I wanted to pay. We watched the rest of Azumanga Daioh. Shannon thought the ending wasn't good enough; Yvonne thought it wasn't angsty enough. Then we watched Chungking Express. Sometime during the movie, my foot started hurting. It was late and my foot hurt so I slept over. Now my foot feels swollen and hurts a lot.

So Mei-Ling called someone she knows that is supposed to be able to fix these sorts of injuries. Whatever it is, since it just started hurting while I was sitting there. We're going to that guy soon. He is like her grandma's grand-nephew or something. If it doesn't get fixed, then I'll have to stop at Netflix on my way home to get my laptop and probably have to work from home tomorrow.

Posted by josuah at 9:44 PM UTC+00:00 | Comments (0) | TrackBack

October 29, 2005

iCal Support for PHP iCalendar

It has been an extremely long time since I did any work for PHP iCalendar. During that time, the web site was cracked through a PHP exploit and was down for quite a while. Apple's iCal application also underwent a calendar repository redesign, causing an incompatibility between PHP iCalendar and the native repository. Anyway, long story short: I've got changes pending to support the new iCal repository structure, as well as fixes for a couple of bugs. I'll check them into CVS once they've been peer-reviewed.

Posted by josuah at 8:23 PM UTC+00:00 | Comments (0) | TrackBack

October 27, 2005

Does Visual Studio Rot the Mind?

I discovered a very interesting publication by Charles Petzold entitled Does Visual Studio Rot the Mind? Petzold is a Microsoft-oriented software developer, and this publication was a talk he delivered to the NYC .NET Developers Group on October 20. I think it is pretty insightful reading that all software developers should read. The only error is that I believe many of the features discussed by Petzold were first introduced by IDEs other than Visual Studio.

Posted by josuah at 4:27 PM UTC+00:00 | Comments (0) | TrackBack

October 22, 2005

My First Company Meeting

Seems that at Netflix a company meeting is held each quarter in the Los Gatos Theatre. And the tradition is for all employees that were hired that quarter to dress up for some sort of movie-themed entertainment. Previous themes have been Monty Python and the Holy Grail and Kill Bill. This quarter's theme was The Little Mermaid. I was a bluefish. There were also starfish, clams, and clown fish. I get to keep the costume. The rest of it was free pizza and soda, business information, and a funny internal movie titled Gemini.

Posted by josuah at 7:02 AM UTC+00:00 | Comments (0) | TrackBack

October 10, 2005

Potential ID Theft via Blockbuster

Since I work at Netflix, this little tidbit was of some interest today: Blockbuster paperwork left on sidewalk. Apparently, a closing Blockbuster store in New York trashed their customer applications without doing anything to ensure confidential information like social security numbers and credit card numbers were destroyed. Some employee was obviously quite ignorant of the potential consequences.

Posted by josuah at 11:47 PM UTC+00:00 | Comments (0) | TrackBack

September 27, 2005

Agile Software Development with Scrum

My team at Netflix uses a development process that would smack of rebellion at some corporations. The process is called Scrum and I just finished reading the introductory book one of my teammates gave me when I got here: Agile Software Development with Scrum.

I found there is a lot about the Scrum process that I can identify with, and I think that makes it very easy for me to believe that Scrum is an excellent approach towards software development. I do think anyone who is concerned with or has a professional interest in software development processes should look into Scrum. There are several books available on the subject. The challenge is to get management acceptance of this process, as it will get rid of unnecessary people, and many corporations are top-heavy.

Posted by josuah at 7:47 PM UTC+00:00 | Comments (0) | TrackBack

September 17, 2005

To Sir, With Love

A few people came over tonight to watch To Sir, With Love. Alla and George showed up pretty late; Jeannie showed up on time but had to take a break to drive Leslie to the airport. It took her longer than she had thought to come back because there was traffic at SJO. Scott showed up around 10pm and Ellen cancelled. We had Chinese food from Golden House Chinese.

We talked a little bit about how things are going at IBM. Apparently Scott ended up in charge of my code after leaving, although Kavita had been the person I transitioned it to. And there were a few defects like NullPointerExceptions and a class cast exception. At least nothing major unless he didn't want to tell me my code sucks.

The movie started off kind of slow, and sort of boring. But it got better about half-way through once Sidney Poitier figured out how to react to the students. Then things got interesting, and followed a course similar to Dangerous Minds, a more recent film built around the same theme.

Alla fell asleep during the movie. Which doesn't surprise me since I'm sure she found it pretty boring and was a little tired. Scott kept making fun of Jeannie during the movie. Afterwards, Scott told us some stories about funny but stupid/cruel things he's done in the past. You'll have to ask him about that.

Posted by josuah at 8:14 AM UTC+00:00 | Comments (1) | TrackBack

800 Pound Gorilla Has No Coordination

I ran across an article today that I think is very important and very interesting. The information presented mirrors my own sentiments towards IBM, although the company in question is Microsoft. The article was published in reaction to some of the information revealed in the recent Kai-Fu Lee case involving Google. I think everyone should take a look at it to understand how things can go bad at a large company.

Posted by josuah at 1:00 AM UTC+00:00 | Comments (2) | TrackBack

September 6, 2005

Netflix BBQ

Spencer, one of the people on my team at Netflix, hosted a Labor Day barbecue at his house today. I went with Alla after we chaperoned her youth group's fund-raising car wash. Christian showed up with his wife, second daughter, and son. Marc showed up with one of his friends from LA. Michael came but his wife was out of town visiting family. And Samir came with his wife. We basically hung out talking and had some good food.

Posted by josuah at 5:02 AM UTC+00:00 | Comments (0) | TrackBack

August 20, 2005

IBM Farewell Movie Night

I had a farewell movie night yesterday, in recognition of me leaving IBM to join Netflix. Quite a few people showed up, including Jean, who hasn't shown up for anything in a very long time. It was the first time that Cindy, Chris, Stef, and Jeannie had been to my place too. Most of the time was spent making fun of people.

The movie we watched was the 1967 Wait Until Dark, starring Audrey Hepburn. Alan Arkin played the bad guy, and although the voice is the same, he looked very different back then. I thought it was a pretty good movie, although some of the "younger" crowd thought it was really corny at times. Mostly because of the gender-specific roles that were portrayed, consistent with a 1967 movie. There was one point where a bunch of people screamed because of a scary shock.

There was also a ton of food. People brought too much food. Or not enough people ate enough food. I've got a lot of leftovers, but at least some people took food home. I have enough food to last me a while.

Posted by josuah at 6:46 PM UTC+00:00 | Comments (0) | TrackBack

August 19, 2005

Going to Netflix

It's official. I have accepted a software development position at Netflix, working on the "digital distribution" agreement announced late last year with TiVo. In other words, downloading movies instead of receiving them in your mailbox. This will be really cool and fun stuff to work on, especially since my research background in school was in multimedia networking. Plus, a lot of the things I don't like about working at IBM shouldn't be an issue at Netflix. My first day will be August 29.

Posted by josuah at 4:05 AM UTC+00:00 | Comments (4) | TrackBack

July 11, 2005

Random Weekend

Yesterday, I was woken up by my manager because there was a problem and they needed someone to investigate and come up with a fix. So I spent a few hours on Saturday working on that. Then I picked up Yvonne from her volunteer work and took her to Ohlone College to watch Vivian play tennis. Shannon and I had some Subway for dinner, but then Yvonne and Shannon got hungry again later. We went to McDonald's to get a happy meal, so Shannon could get a Neopets plushie, then ended up at Denny's for food.

We got back around 11pm. Mei-Ling had tried to call us earlier, but my cell phone had run out of battery. She had not taken a house key with her, and Shannon had locked the garage door, so she was stuck in the garage by herself with no way to get inside until we got back from eating.

We ended up watching the first part of some sort of golfing movie, then we watched Johnny English which was very funny. A lot funnier than the previous Bean movie which wasn't good at all. After that, we watched some movie that had an English title translation of I Not Stupid. This is a comedy that is also a social satire about Singapore. There are some pretty funny parts, but also some really emotional parts. And the whole thing is filled with anecdotes and personalities that a lot of Chinese people can relate with. A good movie.

This afternoon (I woke up at 1pm) we all went up to some sort of regional park in the Sunol area. Shannon and I splashed around in the creek, while Yvonne reverted to her obnoxious, annoy-others-in-retaliation mode for having been dragged out there. She kept carrying her book around even though, if she had dropped it, it would have been ruined and she would have been mad at everyone else despite it being her own fault. We tried finding a place called Little Yosemite but ended up going the wrong way twice. And then we drove back.

Posted by josuah at 1:28 AM UTC+00:00 | Comments (0) | TrackBack

April 30, 2005

Visiting UNC

I finished working with the IBM Director people this morning, so after doing some work I left the IBM RTP site and drove over to Chapel Hill to visit some of my former professors: Ketan, Kevin, Jan, and Sanjoy. I also got a chance to visit with Josh, who was my carpool buddy during Extreme Blue. We did some catching up and talked about what's going on. Later on, I went over to the UNC Student Store and picked up a copy of Mac OS X Tiger.

John Siracusa over at Ars Technica has been providing in-depth technical reviews on Mac OS X for years now. He put together a really great review of Mac OS 10.4 along with his usual rants about what could be better. The review does make it quite clear that picking up Tiger is worth it, and that he believes Apple is making some progress in redefining (or implementing, depending on how you look at it) the modern OS for the general public. Apple is starting to bring back some of the things that made the original Mac OS so much more powerful and useful than the alternatives.

Posted by josuah at 1:11 AM UTC+00:00 | Comments (0) | TrackBack

April 29, 2005

Back in North Carolina

I got back from Las Vegas Tuesday night, and Wednesday morning I had to fly out to North Carolina to meet with some IBM Director developers. So I was able to meet up with some of the people I haven't seen in a couple of years, like Peter, Shari, Marcel, John, and Kevin. Keri wasn't at the Extreme Blue lab today though.

Peter and I had lunch at a local BBQ place, where they serve "real" BBQ: shredded pork with vinegar BBQ sauce. Peter had shredded pork, but I had chicken. Carolina BBQ is an acquired taste. I had Papa John's pizza for dinner; I haven't eaten Papa John's since leaving here because there isn't one near me in San Jose. Their pizza is very good for a chain restaurant.

Posted by josuah at 2:44 AM UTC+00:00 | Comments (0) | TrackBack

April 17, 2005

Ford Motor Company

IBM sent me to Ford Motor Company for all of last week to help close a business deal. I didn't want to fly over to Dearborn, MI for a week, but the deal is pretty important. Especially given the sharp plunge IBM stock took over recent earnings reports. Most of my time there was spent working on-site to resolve issues and address Ford's solution requirements. But I did get a few hours to do my own thing.

I arrived in Dearborn, MI at 3am Monday morning. My flight was delayed in Phoenix because the airplane wasn't taking on water like it was supposed to. The town of Dearborn is a Ford-town. It's relatively rural except for the money that Ford has poured into the economy. There are a number of expensive hotels and restaurants that no doubt survive off business travel. Other than that, Dearborn seems like your typical small town. I'll go through the food situation all at once, since spending money on food was about all I did outside of working in the Ford building.

Lunches I ate at the Ford Credit building cafeteria. It's a pretty decent cafeteria, but the salad bar is expensive. You could get a good two-item meal for around $4, but if you wanted fruit, salad, or vegetables it was $6 a pound. So eating right meant I had to spend quite a bit on lunch.

My Sunday night dinner was at the Phoenix airport. On Monday I went to Red Robin with Bill Leonard. Bill works in IBM Global Services and was in the TotalStorage Productivity Center class I was part of a couple of weeks ago. They have really good burgers at Red Robin.

Tuesday we went to Kiernan's Steak House. It is a really nice bar & grill restaurant, but also pretty expensive. Frank Chodacki was also there on Tuesday. He works Level 3 support for TPC for Data and also knows Microsoft Active Directory. Frank told us his stories about destroying hardware, both personally and professionally as an employee of Trellisoft before it was purchased by IBM.

Wednesday Bill and I went to Benihana. I don't think Benihana is such a great Japanese restaurant anymore, although their commercials are entertaining. I only saw maybe one Japanese person in the restaurant, and she was the waitress. The chefs were not Japanese. Compared to regular Japanese restaurants, the food was expensive for what you got and not of the highest quality. I tried to order something at the sushi bar by talking to the chef, but it turns out you actually have to fill out an order sheet.

Thursday I wanted to get some time to look around and buy last-minute souvenirs so I didn't go with the rest of the gang to dinner. Instead I ended up buying some food from a local organic grocery store. Wasn't super-cheap since I was buying all prepared food, and it was something of a gourmet store. But the food was very good.

Besides eating, I did stop at a couple of shops to buy souvenirs, but nothing Dearborn-specific. So not really anything to mention about that. I left Friday afternoon and ate my last business-trip dinner at the Phoenix airport again.

Unfortunately, there were mechanical problems with the airplane in Phoenix again, and our flight ended up getting cancelled because it took so long to fix that we could no longer land at San Jose International Airport before it closed at midnight. So I had to wait in a long line to get a hotel voucher and get booked on a Saturday morning flight at 10am. I stayed at a pretty nice place called Prime Hotels & Suites. Only got six hours of sleep though before I had to wake up to get to my flight. I did eat breakfast at their buffet in the morning. Finally I got back home at noon on Saturday.

Anyway, the people in Dearborn, MI were really nice and I enjoyed meeting them, but I really didn't much enjoy the trip as a whole. A lot of time spent working at Ford, and then in my hotel room to write things up for IBM. Work, eat, and sleep was about all I could do, and I wasn't getting enough sleep either. I just made up for a lot of lost sleep last night. I also don't think I'm ever going to fly through Phoenix again.

Posted by josuah at 5:45 PM UTC+00:00 | Comments (2) | TrackBack

April 2, 2005

Business Trip to SF

IBM sent me on a business trip this past week up to San Francisco. IBM held an information class for business partners about TotalStorage Productivity Center. I was sent up last-minute to help out with any technical problems that might occur and to provide answers for any technical questions the business partners might have. IBM is going to reimburse me for the hotel, parking, and food. I went up on Sunday night and got back yesterday evening.

On Monday, I had to buy a belt before the class because I forgot mine and I needed to dress up for the business partners. Although I don't think they would have cared that much whether I was dressed up or not, since one of the IBM sales reps was wearing jeans and sneakers. And one of the coordinators was wearing a T-shirt. That night, I went to JapanTown to eat at a restaurant there called On The Bridge. I also stopped by the Kinokuniya bookstore, although I didn't buy anything.

Tuesday night the IBM coordinator took a bunch of us out to eat at Grand Palace Restaurant on IBM's tab. Most of the business partners didn't know what they were going to be eating, as this was an authentic Chinese restaurant rather than a touristy one. One of the business partners knows a decent amount about Chinese food and ordered a beef tendon clay pot, jellyfish, and also sea cucumbers. I don't like any of those foods myself.

Before dinner, I walked around Chinatown looking for anything to buy. I ended up getting a copy of Boa - Listen to my Heart. I also wanted to get a copy of 2009 Lost Memories but I wasn't sure if it was a pirated version or not. The girl who worked at the store said it was because it was All-Region, but I think all Korean DVDs are All-Region.

Wednesday night I had dim sum with one of the IBM sales representatives. Originally, there were going to be a handful of people, but two of them were too hungry to wait, and the other people we were going to meet up with ended up having dinner plans with other people. We didn't know where to find a place that would serve dim sum for dinner, but after walking around for a while we found a restaurant that did: Four Seasons Restaurant on Grant Avenue.

Thursday night, Szu-Huey drove up. To avoid parking fees, she left her car at a Target in Daly City and I drove out to pick her up. By the time we got back into the city, it was kind of late so we walked around and ate a nice Italian dinner at a restaurant called Mangia Tucci. It was located off the main streets and almost empty for dinner. Apparently they are busy during lunch.

Friday, the class ended around noon. I stuck around a little longer until all the other people left, as one or two of them still had some questions for me. I left for San Jose around 3pm and got back home around 5pm.

Posted by josuah at 7:35 PM UTC+00:00 | Comments (0) | TrackBack

November 4, 2003

Another Security Exception

Visual Studio .NET, with C#, was installed on one of the public workstations in Sitterson yesterday, so I tried to compile my Video Descriptor application for the first time. I ran into some trouble with library references, but fixed that.

Today, I got the application to compile and create a .exe. However, I seem to be unable to run/debug VideoDescriptor.exe from within Visual Studio .NET; it complains that I am not an administrator or in the debug group. What kind of stupid development requirement is that? I don't have to be root or be in some special group to write programs on any other platform.

So, I tried executing VideoDescriptor.exe from the command line. In this case, I ran into a new security exception different from the supposed unsafe code one I was running into with DotGNU. This one is:

Unhandled Exception: System.Security.SecurityException: Request for the permission of type System.Security.Permissions.FileIOPermission, mscorlib, Version=1.0.3300.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 failed.

Is this because the files are over an AFS share? Visual Studio .NET seemed to imply that earlier. Unfortunately I can't write anything onto the local drive...

Posted by josuah at 10:22 PM UTC+00:00 | Comments (2) | TrackBack

October 30, 2003

Visual Studio .NET Installation

I contacted the Computer Science help desk yesterday asking about a Visual Studio .NET installation. I can get a copy of the software and install it on my own machine, but only if I reimage my office workstation to Windows. Right now it is running Linux because that provides me with remote access.

The other alternative was to use the one department system that was supposed to have Visual Studio .NET installed on it, but I found out that it was not installed after all. The original system at that location was replaced with one of the new small form factor boxes, which did not have Visual Studio .NET installed. The help desk has filed a trouble ticket for this and should get it installed soon.

Until I can access Visual Studio .NET, I'm kind of stuck as far as the Adaptable Video work goes.

Posted by josuah at 7:23 PM UTC+00:00 | Comments (0) | TrackBack

October 28, 2003

leppie

I haven't been able to get much of anywhere on the System.Security.VerificationException error I'm running into with the MPEG-2 code. I do know that it has nothing to do with the Adaptable Video code, or the specific MPEG video file I'm using, as it still occurs with SimpleMPEGParser.exe and multiple video files. I also found that it appears to be a result of C# not being able to verify the type returned by MPEG2Event.IntraQuantiserMatrix.getDefault(), based on the code at verify_call.c:1088.

I logged onto #dotgnu on irc.freenode.net and ended up talking to someone named leppie. At his/her request, I sent a copy of the SimpleMPEGParser.exe file and somehow the error is related to a T/D436 struct (???).

Anyway, Ketan is probably right in suggesting I just switch completely over to Visual Studio .NET because that will probably avoid all these problems I'm running into with Portable .NET.

Posted by josuah at 8:42 PM UTC+00:00 | Comments (0) | TrackBack

October 22, 2003

Security Verification Failure

For some reason, I cannot get the file attributes or length using DotGNU and Portable .NET. So, I am just bypassing that problem for now, which means no progress bar in the AdaptableVideo class.

However, now that I'm working around that, I've run into a different problem with C#'s security model. I suppose it's good for there to be a security model, but I have no idea why it works the way it does, other than that external data cannot be trusted from anywhere, given all the stupid kinds of security holes that Microsoft products suffer from. I'm getting this exception when the parser tries to return a static matrix:

Uncaught exception: System.Security.VerificationException: Could not verify the code
    at MPEG2Event.IntraQuantiserMatrix.getDefault()
    at MPEG2Event.Macroblock.getNext(BitStream, SequenceHeader, SequenceExtension, PictureHeader, PictureCodingExtension, QuantMatrixExtension, DCPredictor, DCPredictor, DCPredictor, Int32, IntPredictor, Int32&) in ./src/Macroblock.cs:170

I'll have to read up more on the unsafe code "feature" of C#.
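
For my own reference, here is a minimal sketch of what unsafe code looks like. This is a hypothetical example, not the MPEG2Event code, but code like this is exactly the kind of thing a strict verifier can refuse to run.

    using System;

    class UnsafeDemo
    {
        // Pointer operations are only allowed inside unsafe code, and unsafe
        // code is not verifiable by the runtime.
        static unsafe void Main()
        {
            int value = 42;
            int* p = &value;        // taking the address of a local requires unsafe
            *p = 7;                 // writing through the pointer
            Console.WriteLine(value);
        }
    }

    // Compile with the "allow unsafe" switch, e.g. csc /unsafe unsafedemo.cs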

Posted by josuah at 7:45 PM UTC+00:00 | Comments (0) | TrackBack

Runtime Errors

I've got a call scheduled with Ketan tomorrow morning instead of our usual Thursday afternoon, since Fall Break begins tomorrow at 5pm. I have managed to compile the AdaptableVideo C# code, but am now running into runtime errors. Kind of weird because they are IO errors and I'm not sure what is causing the problem.

Posted by josuah at 3:41 AM UTC+00:00 | Comments (0) | TrackBack

October 17, 2003

New Graphs + More Debugging

Ketan and I had our weekly phone call this Thursday afternoon. He sent me an updated version of the PVR paper for submission to SPIE. I need to redo the graphs so they look nicer and find an updated reference for the bibliography. Since the submission date is the 27th, I need to get this to Ketan by the middle of next week.

Ketan also asked me to start looking at putting together an API to the Adaptable Video code so that other applications can make use of it. For example, an API to perform descriptor comparisons and reconstruct one video from two.
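
Nothing is designed yet, but as a starting point I am imagining an interface along these lines (all of the names below are hypothetical placeholders):

    using System;

    // Hypothetical sketch of an Adaptable Video API; none of these types exist yet.
    public interface IAdaptableVideo
    {
        // Load the descriptor that was generated alongside a partitioned video file.
        VideoDescriptor LoadDescriptor(string path);

        // Compare two descriptors and report whether merging the corresponding
        // videos is likely to produce a higher quality version.
        bool MergeIsWorthwhile(VideoDescriptor a, VideoDescriptor b);

        // Reconstruct one video from two partitioned versions.
        void Reconstruct(string videoA, string videoB, string outputPath);
    }

    // Placeholder for the descriptor type (frame/coefficient bit patterns, etc.).
    public class VideoDescriptor { }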

Other than that I've been doing some more debugging on my current Adaptable Video code.

Posted by josuah at 6:05 AM UTC+00:00 | Comments (0) | TrackBack

October 1, 2003

Brief Discussion

Ketan and I had a brief conversation today via phone.

After reading through the current revision of our PVR paper, Ketan thinks we've got enough simulation data and variable coverage. He's going to work out what needs to be done next for SPIE.

As far as my research work goes, I think reachable goals for this semester are frame and coefficient separation and reconstruction, but I'm not confident about completing more than that. Ketan thinks we could write up a hook into some P2P distribution system like Kazaa.

I've also decided that I'm going to try and finish my IP first and then concentrate on the Adaptable Video research. The IP is more important for my graduation, and it also means I can work on one thing at a time, instead of having to multitask so much.

Posted by josuah at 2:21 AM UTC+00:00 | Comments (0) | TrackBack

September 29, 2003

EOF Delegate

I created an EOF class in the VideoSequence.cs file; an instance is created whenever the parse_picture() function encounters EOF. When an EOF instance is created, any registered delegates are called, which alerts the AdaptableVideo class of the EOF.

I've implemented it this way, but if EOF were derived from CodingElement or AtomicCodingElement, then making use of the existing CodingElement.Handler delegate would make more sense. I guess I could have set the bit address to one more than the last bit, and the number of bits to zero. Ketan hasn't gotten back to me about this, so I'll ask him about it tomorrow during our phone call.
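
For the record, a simplified sketch of the idea (not the actual VideoSequence.cs code; the names are approximate):

    using System;

    // Simplified sketch of the EOF notification scheme.
    public delegate void EofHandler(Eof eof);

    public class Eof
    {
        // Delegates registered here are invoked whenever an Eof is constructed.
        public static event EofHandler Reached;

        public Eof()
        {
            if (Reached != null)
                Reached(this);      // alert listeners (e.g. AdaptableVideo) of the EOF
        }
    }

    public class AdaptableVideo
    {
        public AdaptableVideo()
        {
            Eof.Reached += new EofHandler(OnEof);   // register as a delegate
        }

        private void OnEof(Eof eof)
        {
            Console.WriteLine("Parser reached end of file.");
        }
    }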

Posted by josuah at 8:54 PM UTC+00:00 | Comments (0) | TrackBack

September 28, 2003

C# AdaptableVideo

I basically finished converting the AdaptableVideo class from Java to C#. Only issue I have left with that is how to handle the EOF. I asked Ketan if he has any opinions on making double-use of the existing CodingElement delegate, or if it would be better to create a separate VideoParser delegate. I'm leaning towards the latter.

Posted by josuah at 10:49 PM UTC+00:00 | Comments (0) | TrackBack

September 25, 2003

VideoDescriptor and Delegates

I worked on the VideoDescriptor class, trying to port to C# and make use of the delegate system instead of registering subscribers. It basically works the same way. I think I may need to add a special EOF implementation to the VideoSequence C# class. I'm not sure if this can be done easily through the existing delegate handlers, or if I need to set up another delegate handler list for that.

The general subscribers concept is more flexible because you can specify multiple communication methods (i.e. multiple function callbacks) while only requiring you to register your subscriber once. I think the problem is that in C# you don't become a delegate of an object, but instead a delegate of a function call.

In Java, subscribers must conform to an interface. There is no language construct explicitly supporting the subscriber-publisher model. C# goes farther to explicitly support this model, but does so from a functional approach. Objective-C has support for both the subscriber-publisher model (through notification in Foundation) and also an object-based delegation model that follows the chain-of-responsibility design pattern.
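
To illustrate the difference I mean, here are two simplified, hypothetical snippets: in the subscriber model you register an object that implements an interface, while with C# delegates you attach individual methods to a delegate or event.

    // Java-style subscriber model: register an object, get all its callbacks.
    public interface IParserSubscriber
    {
        void OnPicture(int pictureNumber);
        void OnEndOfFile();
    }

    // C#-style delegates: register one method per kind of callback.
    public delegate void PictureHandler(int pictureNumber);
    public delegate void EndOfFileHandler();

    public class Parser
    {
        public event PictureHandler PictureParsed;
        public event EndOfFileHandler EndOfFile;

        public void EmitPicture(int pictureNumber)
        {
            if (PictureParsed != null)
                PictureParsed(pictureNumber);
        }

        public void Finish()
        {
            if (EndOfFile != null)
                EndOfFile();
        }
    }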

Posted by josuah at 2:09 AM UTC+00:00 | Comments (0) | TrackBack

September 24, 2003

Remote Ketan

Ketan and I spoke this afternoon about several things. Since the PVR simulation paper we wrote was accepted to SPIE, we need to make improvements and possibly run some additional simulations to get more data. We also talked about the Integrative Paper I am writing as part of the M.S. requirement of the UNC CS Department. Ketan is also interested in setting up a SourceForge server in the DiRT lab so people working on stuff will have a central repository that can be selectively shared.

Of course we also talked about my progress with the Adaptable Video research. The plan is to come up with goals next week when we talk again, in the hope of possibly putting together a paper. I'm not entirely sure how feasible this is, but we'll see.

I read up some on how C# makes use of delegates. Its implementation of delegates is somewhat convoluted, simply because the naming scheme and definition syntax don't exactly make things match up. It is kind of the same way properties are defined like functions but treated as fields: when being used they look like fields even though they are defined like functions. The C# implementation of delegates is not as elegant as the Objective-C implementation. It does technically follow the delegate design pattern, but coding it does not follow a developer's natural thought process.

Ketan also heard about the SIGCOMM paper on Low-Rate TCP-Targeted Denial of Service Attacks. The paper describes how to exploit what could be called either a design flaw or a design win of TCP to cause a DoS without using a whole lot of bandwidth. Ketan thinks there might be some possible research applications of this technique, but so far nothing that is actually new or that provides extra knowledge.

Posted by josuah at 2:39 AM UTC+00:00 | Comments (0) | TrackBack

September 22, 2003

Finished C# AdaptableVideoDescriptor Class

I just finished writing (but not compiling or debugging) the C# port of the AdaptableVideoDescriptor class. I streamlined some of the code to make it easier to maintain, and I think I fixed some more of those off-by-one type bugs.

Posted by josuah at 7:14 PM UTC+00:00 | Comments (0) | TrackBack

AdaptableVideoDescriptor C# Skeleton

I spent more time porting the AdaptableVideoDescriptor class over to C#. Almost done, although I do have to make sure it compiles. Since I am new to C#, I am not as confident about compile-time success based on visual inspection as I am with other languages. I also think I may have found some more bugs in the original implementation. I think I've got about 60%-70% of it ported to C#; the rest is empty function signatures.

Posted by josuah at 12:20 AM UTC+00:00 | Comments (0) | TrackBack

September 18, 2003

Continued AVD Porting

I did some more work porting the AdaptableVideoDescriptor class over to C#. I may or may not have found some bugs in the Java code; it's hard to say since it's been so long since I worked on it, but a first look at the logic would seem to indicate they were bugs. Little things like off-by-one errors.

Associating properties, which are basically getters and setters for fields, with fields in a C# class tends to make things look a little cluttered. This is because the C# coding style is to place each property definition right beneath its field definition. So it's not very easy to visually separate fields from properties, and then it only makes sense to place any non-trivial getters and setters in the same area of the file, further disorganizing things to the eye.
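
A hypothetical class shows what I mean about the layout:

    // Hypothetical example of the field/property layout described above.
    public class Frame
    {
        private int frameNumber;            // backing field...

        public int FrameNumber              // ...immediately followed by its property
        {
            get { return frameNumber; }
            set { frameNumber = value; }
        }

        private int quantizerScale;

        public int QuantizerScale
        {
            get { return quantizerScale; }
            set
            {
                if (value < 1)              // any non-trivial logic ends up here too
                    value = 1;
                quantizerScale = value;
            }
        }
    }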

Posted by josuah at 3:49 AM UTC+00:00 | Comments (0) | TrackBack

September 17, 2003

Little Progress

I've done a little bit more to convert the AdaptableVideoDescriptor class from Java to C#, but not much. Things are very busy for me every day, and all those lectures don't help. Anyway, there are differences in the C# implementation of the MPEG-2 parsing classes, so that's also something I need to deal with. That and learning C#.

Posted by josuah at 1:17 AM UTC+00:00 | Comments (0) | TrackBack

September 15, 2003

C# in a Nutshell

I started working some more on porting the Adaptable Video code from Java to C#, but it was just too difficult trying to find the information I needed on MSDN. So I went to Barnes & Noble and got a copy of C# in a Nutshell. The copy I got is the 1st Edition, for version 1 of the .NET framework. The 2nd edition came out last month and covers version 1.1 of the framework, but it doesn't make much difference for what I'm doing.

Posted by josuah at 1:13 AM UTC+00:00 | Comments (0) | TrackBack

September 9, 2003

SimpleMPEGParser Compiles

Ketan got back to me and informed me that I should use the BitStream class, and not the PushbackInputBitStream. So I'm using that and now everything compiles successfully. I am able to execute the SimpleMPEGParser.exe program and it outputs the video information.

It still outputs warnings about unnecessary use of the new keyword. However, this member modifier is actually necessary to hide the same variable in the superclass, so I think the DotGNU C# compiler is just confused. If I remove the new keywords, it correctly complains that the variables hide the superclass variables without declaring it. But when the keyword is left in, it doesn't seem to realize the hiding is happening and flags the new as unnecessary. ???
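
For reference, the situation is basically the following (simplified, with hypothetical class names rather than the actual MPEG2Event code):

    public class CodingElementBase
    {
        protected int length = 0;
    }

    public class SpecificElement : CodingElementBase
    {
        // Without "new", a compiler should warn that this field hides the
        // inherited one; with "new", the hiding is declared intentional and
        // the warning should go away -- which is why the DotGNU warning
        // looks wrong to me.
        protected new int length = 16;
    }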

Posted by josuah at 1:36 AM UTC+00:00 | Comments (0) | TrackBack

September 7, 2003

Learning C#

I've started the Adaptable Video port from Java to C#. I found the C# Language Specification, which is where I'm learning the language. Microsoft needs to work on the navigation for the specification; without direct links to the previous, next, and enclosing sections, the reader has to click too many times to progress through it.

I am having trouble finding specific things that I'm looking for in the specification because there is no index. You also can't search only the specification because the MSDN search tool applies to the entire library. So although I've read about the readonly and internal keywords, I have no idea how they differ from const and private.

Posted by josuah at 10:30 PM UTC+00:00 | Comments (0) | TrackBack

C# Compile-Time Errors

I started putting together a Makefile for the C# code Ketan sent to me, but I was having trouble getting it to work the way I wanted. So instead I put together an Ant build file for the C# code, and that is working quite well. I can see the attraction of Apache Ant more clearly now, as I've never used an Ant-based build system before.

So I tried building the SimpleMPEGParser program included with the MPEG2Event code, but I ran into a few errors that seem to be related to some conflict between the BitStream and PushbackInputBitStream classes. They appear to define the same class in the same namespace. I've emailed Ketan about this problem.

I haven't yet started porting over my Adaptable Video Java classes to C#, but that is the next thing I have to do for this.

Posted by josuah at 8:56 PM UTC+00:00 | Comments (0) | TrackBack

September 5, 2003

MPEG2Event

Ketan just emailed me the C# code for the MPEG-2 parser. So I will start porting over the Java code I wrote for COMP 249.

Posted by josuah at 8:41 PM UTC+00:00 | Comments (0) | TrackBack

September 1, 2003

Hello World!

Well, I got the Hello World example for DotGNU to work. The README in the pnet/samples directory pointed me to the pnetlib/samples directory, where I ran make and then was able to execute programs using ilrun.sh. So hopefully I won't have to boot up into Windows or install and use Visual C# to continue the adaptable video project. Ketan will be sending me his new code sometime this week.
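
The program itself is just the standard sample; building and running it with Portable .NET goes roughly like this (the exact compiler and script names may differ depending on the install):

    // hello.cs -- the standard Hello World sample.
    using System;

    class Hello
    {
        static void Main()
        {
            Console.WriteLine("Hello World!");
        }
    }

    // Roughly (names may differ by install):
    //   cscc -o hello.exe hello.cs
    //   ilrun.sh hello.exe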

Posted by josuah at 6:12 PM UTC+00:00 | Comments (0) | TrackBack

August 27, 2003

Moving to C#

This semester, my final semester since I'm graduating early, I am going to continue work on the MPEG-2 super-adaptable video descriptor project I started but only got half-done in my Multimedia Networking class last semester.

Ketan has been working on some stuff at Microsoft over the summer and will still be working there this semester. While there he had reason to use the MPEG-2 parser I used on my project, and he ported it to Visual C#. So I'm going to move over to his new C# library since it's cleaned up and more functional.

The only issue might be how I actually make use of C#, since I don't really do Windows development. Looks like I'll use Mono, an open-source implementation of .NET that includes a C# compiler and a Common Language Infrastructure runtime.

C# and .NET are Microsoft's response to Java and enterprise-grade development. Some people have picked it up, but Java and its Enterprise JavaBeans have pretty much become the dominant solution. I'll be learning a bit more about that when I take the Enterprise Computing class offered this Fall. It's co-taught by IBM Fellow Diane Pozefsky.

Posted by josuah at 4:44 AM UTC+00:00 | Comments (0) | TrackBack

June 22, 2003

Switching Over Server

I went into Sitterson today in order to get the serial number of the Oracle server for Mike Carter. He will be getting the machine transferred over to Phillips Hall. I also changed sendmail.cf to point at smtp.unc.edu, and changed the hostname to oracle.video.unc.edu with an appropriate IP address and default route. So, that should just about wrap things up.

Posted by josuah at 11:14 PM UTC+00:00 | Comments (0) | TrackBack

May 31, 2003

Oracle Server

It's been a while since I last posted because not much has happened since I put together the VQM Oracle package. The IBM server ordered by Joel Dunn did arrive and was set up by Murray Anderegg with Red Hat 8.0. Since then, it's been temporarily installed in the DiRT colab and I've installed the VQM Oracle package. Right now, Vinay Chandrasekhar and I are running tests on it to make sure everything is configured correctly. Once I'm satisfied that it is, I'll switch the setup over to whatever it needs for installation wherever it ends up, and send it over.

Posted by josuah at 10:29 AM UTC+00:00 | Comments (0) | TrackBack

April 10, 2003

VQM Package

I just finished packaging up the software for the VQM Oracle. I cannot include any of the library files used by our server, or the VQM Software, and I've only included the relevant Open Mash binaries. I finished the watcher script that will make sure the server and oracle processes stay up, and created README and NOTES documentation.

At this point, I'm basically done with everything. Unless reports of erroneous behavior come back from Vinay Chandrasekar at NC State, the only thing left to do is transfer these files to the NCNI server when it arrives.

Posted by josuah at 4:04 PM UTC+00:00 | Comments (0) | TrackBack

April 8, 2003

New Dumps

Yesterday I re-dumped all the NCNI sequences in H.261 format. I didn't do them in M-JPEG this time, since the NC State client program is only sending H.261 requests. So now the VQM Oracle is up and running just fine.

I started packaging the VQM Oracle software up as well. I need to finish up some more documentation and create the cronjob watcher script that will keep the server and oracle processes up and running, but once that's done everything should be complete.

Posted by josuah at 6:33 PM UTC+00:00 | Comments (0) | TrackBack

April 6, 2003

Corrupt Frame Dumps & MPEG-2 Playback

I tracked down the problem with my VQM Oracle setup. A whole bunch of frames in the frame dumps were corrupted because I had used fprintf with a "%s" format string, which stops writing at the first zero byte; since zero values show up in the middle of a frame, the entire frame never got dumped. I changed it to fwrite and everything works. But now I need to re-dump the NCNI sequences that Tyler Johnson got for me.

I also tried to run an MPEG-2 video clip through the Java MPEG-2 parsing classes that Ketan sent me, but I couldn't figure out how to play back the video clips I found on the Internet. I want to make sure I can do that so I know what I'm parsing.

Posted by josuah at 8:36 PM UTC+00:00 | Comments (0) | TrackBack

April 5, 2003

Oracle Running; VQM Error

I've spent the past couple of days putting together the automatic processing script for the VQM Oracle. The server and oracle scripts I've written work fine; however, the VQM Software itself is choking on the data files. I'm getting the following error:

ERROR: UNDEFINED
Set By: cntl_file scan_line
Message: Reading Data File
File Format corrupted, line missing
Temporal Calibration:

I need to track down the problem and fix it.

Posted by josuah at 3:40 AM UTC+00:00 | Comments (0) | TrackBack

March 31, 2003

Completed Recaster Modifications

I finished making my modifications to the RTP recaster so that it has bandwidth restrictions, jitter, and loss. Seems to work well and correctly.

Posted by josuah at 12:18 AM UTC+00:00 | Comments (0) | TrackBack

March 24, 2003

Extending RTP Recast

I worked on my RTP recast program a little bit today, to add the three new arguments it needs to take: actual bandwidth, loss percentage, and jitter. Still working on completing this addition and testing it.

Posted by josuah at 10:08 PM UTC+00:00 | Comments (0) | TrackBack

March 23, 2003

VQM Oracle Server

I just finished up the Perl server I started working on yesterday. This server will be run on the VQM Oracle to handle simulation requests. It just parses key/value pairs out of an HTTP-like encoded string and creates a request file on disk in the VQM queue directory. The last part to work on is the tool that runs periodically to process all the queued requests.

Posted by josuah at 1:34 AM UTC+00:00 | Comments (0) | TrackBack

March 17, 2003

copybytes

I wrote one of the remaining parts to the VQM Oracle: copybytes. Since the receiver might not receive and dump all transmitted frames, and the VQM Software requires the two frame dumps to be of equal length (i.e. equal number of frames) I need to truncate the original dump to match the reconstructed dump. This program will do that for me.
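
The real copybytes is a separate little program, but the logic amounts to something like this sketch (written in C# purely for illustration, assuming fixed-size raw frames; the argument handling is a placeholder):

    using System;
    using System.IO;

    // Sketch of the copybytes idea, for illustration only.
    class CopyBytes
    {
        static void Main(string[] args)
        {
            // usage: copybytes <original-dump> <reconstructed-dump> <output>
            string original = args[0];
            string reconstructed = args[1];
            string output = args[2];

            // Truncate the original dump to the reconstructed dump's length so
            // both files contain the same number of (fixed-size) frames.
            long limit = new FileInfo(reconstructed).Length;

            using (FileStream src = File.OpenRead(original))
            using (FileStream dst = File.Create(output))
            {
                byte[] buffer = new byte[64 * 1024];
                long copied = 0;
                while (copied < limit)
                {
                    int want = (int)Math.Min((long)buffer.Length, limit - copied);
                    int got = src.Read(buffer, 0, want);
                    if (got <= 0)
                        break;
                    dst.Write(buffer, 0, got);
                    copied += got;
                }
            }
        }
    }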

What's left is to improve my RTP recast program to accept the link characteristics, put together a tool that actually runs through a simulation, and also a server that listens for requests and appends those requests to a queue used by the simulation tool.

Posted by josuah at 5:51 PM UTC+00:00 | Comments (0) | TrackBack

March 12, 2003

NCSU + UNC Integration

I met with Vinay over at NC State. We talked about interfacing the VSAT and VQM Oracle. Basically, we matched up our data parameters and decided to use a simple text stream server that accepts key/value pairs in HTTP encoded format.

The only problem we had is that Vinay has data on inter-frame and intra-frame jitter. It seems that inter-frame jitter is larger than intra-frame, which is why he is measuring these separately. But I only want one jitter value, especially since I cannot necessarily identify frame boundaries in an RTP dump. So we've sent an email out asking if anyone has a good idea on what formula to use so that both jitter measurements are accounted for but only one jitter value is used.

Posted by josuah at 10:45 PM UTC+00:00 | Comments (0) | TrackBack

March 7, 2003

Not My iBook

I wiped my iBook and reinstalled everything. With Apple's X11 and version 1.0.5r2 of IOXperts' FireWire WebCam driver everything works fine. But the same code with version 1.0.6 only returns the first frame.

Taylor Toussaint from OrangeMicro explained that in Mac OS 10.2.x, their FireWire IIDC driver just uses Apple's supplied video digitizer. But I got the error deviceCantMeetRequest = -9408 when I tried VDGetDigitizerInfo, a required video digitizer function. So I've decided to just not worry about the OrangeWare driver for now.

I did discover that calling VDResetCompressSequence does not start compressing another frame. You still have to call VDCompressOneFrameAsync afterwards. But I removed the check for more than one queued frame since I am using VDSetCompression to turn off any temporal compression and I shouldn't have to request key frames. Maybe that is the problem. Perhaps I should not call VDSetCompression at all (the IOXperts driver defaults to kComponentVideoCodecType = yuv2) and use VDResetCompressSequence to insert key frames.

Posted by josuah at 12:13 AM UTC+00:00 | Comments (0) | TrackBack

March 4, 2003

My iBook To Blame?

Well, I just confirmed that I cannot get video capture to work using the older video-macosx.cc code and version 1.0.5r2 of the IOXperts FireWire WebCam driver on my iBook. But Claudio Allocchio and Gregory Denis both report that the old binaries and 1.0.5r2 work on their fully updated machines. So at this point, it looks like the problem might be just my machine. To test this, I've sent my new code to Claudio and Gregory to see if they can capture video on their machines.

Also, I got a response to my email from Taylor Toussaint at OrangeMicro. Maybe he has some information on why I am having problems opening the OrangeWare FireWire IIDC driver using OpenComponent, but not OpenDefaultComponent. Of course, all of these problems might just be my machine. If that's the case, I will wipe the iBook and reinstall everything.

Posted by josuah at 7:58 PM UTC+00:00 | Comments (0) | TrackBack

March 3, 2003

QuickTime 6 or My Code?

Just as a verification thing, I just tried my Mac OS X video digitizer capturing code for Open Mash with the older 1.0.5r2 version of IOXperts' FireWire WebCam driver and it turns out that doesn't work either. That means the problem is more likely to be a QuickTime 6 or Mac OS 10.2.4 issue, and not driver-related. Especially since the problem also happens under the OrangeWare FireWire IIDC driver. So, either something is wrong with my code, or something changed in QuickTime 6 or Mac OS 10.2.4 that is preventing me from using the Video Digitizer API successfully.

Posted by josuah at 8:24 PM UTC+00:00 | Comments (0) | TrackBack

NCNI Meeting & MPEG-2 Java Package

Ketan and I attended today's NCNI meeting via phone. The gist of the meeting was that both NC State and our part are ready to be integrated together. So I have to arrange a meeting for next week with Vinay Chandrasekhar over there to figure out how our two components will talk to each other.

Other things I need to take care of include writing up the current specifications for vidcap-3 for Tyler Johnson and putting together some documentation on the individual components of the VQM Oracle so future developers can work on it.

I also talked to Ketan about my Adaptable Video project. Since my descriptor class is complete, I need his MPEG-2 parsing classes to build descriptors. He should have something packaged up for me by the end of the week.

Posted by josuah at 4:53 PM UTC+00:00 | Comments (0) | TrackBack

March 2, 2003

Open Mash Mac OS X Progress

I spent several hours working on the Mac OS X video capture support for Open Mash today. Unfortunately, I didn't make much progress.

The OrangeWare driver still fails to open when I call OpenComponent. And the IOXperts driver still fails to capture more than one frame using VDCompressOneFrameAsync and VDCompressDone. I've sent email to both companies and hopefully they can provide me with some answers.

Posted by josuah at 3:29 AM UTC+00:00 | Comments (0) | TrackBack

February 27, 2003

Converted Library Files

All I did today was to convert the H.261 4:2:0 YUV dumps in the VQM Oracle library to BigYUV format. I wrote a pair of batch Perl scripts to do the conversion and renaming.

Posted by josuah at 11:23 PM UTC+00:00 | Comments (0) | TrackBack

4:2:0 Planar -> BigYUV Converter

I finished my 4:2:0 planar to BigYUV conversion program today. So I can convert all of the H.261 dumps in the VQM Oracle library, and also H.261 dumps from vicdump. The chroma lines are doubled, not interlaced.

Posted by josuah at 1:22 AM UTC+00:00 | Comments (0) | TrackBack

February 24, 2003

One Frame Only

I did some poking around into my Open Mash video capture code for Mac OS X and discovered that VDCompressDone only returns true the first time. After that, since it always returns false, vic stops rendering the frame. That's why no image is ever displayed.

At the same time, the OrangeWare driver, for use with the Orange Micro iBot, doesn't open when I try to use it specifically with OpenComponent. I am going to try OpenDefaultComponent instead, without the IOXperts driver installed. Maybe that will work.

Posted by josuah at 10:50 PM UTC+00:00 | Comments (0) | TrackBack

Disk Renderer & Adaptable Video Q-Factors

I tested the disk renderer today and it works great. But I discovered some other problems which I wasn't previously aware of.

While verifying my frame dumps in Matlab, I saw that the H.261 dumps didn't parse correctly. Turns out the frame dumping code wasn't dumping out 4:2:0 and 4:1:1 codecs in BigYUV format. So M-JPEG came out fine, but H.261 didn't. I am in the process of writing a converter to change all of the existing H.261 dumps from YCbCr planar to BigYUV.

There is also a problem where vicdump does not receive every frame transmitted. Either my RTP recast program is reporting the incorrect number of frames, or Open Mash is not getting every frame. Of course, the VQM Software needs both dumps to have the same number of frames. Also, for the number of frames that are getting dumped, the total file size is way too small compared to the original YUV dump. I think this may be related to the YCbCr planar instead of BigYUV issue mentioned above.

Ketan and I also had our weekly meeting today. We talked about the q-factors for the super adaptable video descriptor. There is no easy way to skip to a specific q-factor. So no matter what, if you are going to make a change, you need to parse through the entire video file. Unless there is some addressing scheme in the descriptor. But that could easily make the descriptor extremely large, since the q-factor can change on a macroblock basis. So Ketan and I came up with an idea to store in the descriptor a global mapping that translates q-factors in the original high quality version to q-factors in the compressed version. This way, you can very quickly identify which of two versions has better quality, and merge along q-factors by parsing the entire file. So if you compress by scaling the q-factors, you are likely to get better compression than by simply removing coefficients. But it takes longer because you have to do some recoding.
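
A rough sketch of what that global mapping might look like in the descriptor (purely illustrative, with a made-up heuristic for the quick comparison; this is not an agreed design):

    using System.Collections;

    // Illustrative sketch only.
    public class QFactorMap
    {
        // Maps a q-factor in the original high quality version to the q-factor
        // used for the same blocks in this compressed version.
        private Hashtable originalToCompressed = new Hashtable();

        public void Add(int originalQ, int compressedQ)
        {
            originalToCompressed[originalQ] = compressedQ;
        }

        // One possible quick comparison: the version whose mapping inflates
        // q-factors the least is probably the better starting point.
        public static bool FirstIsBetter(QFactorMap a, QFactorMap b)
        {
            return a.TotalInflation() <= b.TotalInflation();
        }

        private int TotalInflation()
        {
            int total = 0;
            foreach (DictionaryEntry entry in originalToCompressed)
                total += (int)entry.Value - (int)entry.Key;
            return total;
        }
    }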

Posted by josuah at 9:49 PM UTC+00:00 | Comments (0) | TrackBack

February 20, 2003

Renderer/Disk

At Andrew Swan's suggestion, instead of using the existing renderers and colormodels, I created a new disk renderer based on the null renderer class. This disk renderer is very simple and just creates the YUV frame and dumps it to disk. I haven't tested it yet, but it compiles into mash and smash, so hopefully vicdump will be able to use it just fine.

Posted by josuah at 6:12 PM UTC+00:00 | Comments (0) | TrackBack

February 10, 2003

Super Adaptive Video

Ketan and I spent most of our meeting today talking about the super adaptive video project that I will be working on for COMP 249. I had gone through and written down all of the possible data partitioning components of MPEG-2 and we went through them to figure out which ones make sense and which ones don't.

We decided to leave out fields and keep video progressive. There are the I, P, and B frames, which need to be sequentially numbered in the original and remembered in partitioned versions. A bit pattern in the descriptor would work, and could be kept small using a Fourier analysis plus run-length encoded supplemental bits instead of the complete bit pattern. Although even for a two-hour 30fps video, the bit pattern remains pretty manageable.

Which DCT coefficients are kept also needs a bit pattern. Since the idea is to let different versions be put together to form a higher quality version, the two versions need to pick different coefficients to keep. The DCT coefficient bit pattern indicates which ones are kept.

For the motion vectors, we decided to not throw them out since reconstructed video probably wouldn't be very valuable without them. However, we require no residual motion vectors, and since portions of the video plane may be cropped out, sometimes external block data will need to be carried over since motion vectors are differentially encoded.

There is also the q-scale, which changes on a block by block basis. Choosing which block is better requires going through both versions of a video and picking those with the lower q-scale. However, we want to be able to determine if there is any real value to be gained by this pass without actually doing the pass. One way would be to generate a hash or checksum of the q-scale values to identify if two versions with otherwise identical descriptors may result in a higher quality reconstructed version. But this still requires you to go through the entire video to find what might simply be a single possible point of improvement. So something better would be much more useful.
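
Just to keep the pieces straight, the descriptor we discussed boils down to something like this sketch (hypothetical names; nothing is implemented yet):

    using System.Collections;

    // Sketch of the super adaptable video descriptor components discussed above.
    public class SuperAdaptableDescriptor
    {
        // Which I/P/B frames of the original sequence are present in this
        // version: one bit per frame, possibly run-length encoded.
        public BitArray FramesKept;

        // Which DCT coefficients are kept; two versions should keep different
        // coefficients so they can be merged into a higher quality version.
        public BitArray CoefficientsKept;

        // Motion vectors are always kept, so no bit pattern is needed for them.

        // Hash or checksum of the per-block q-scale values: if two otherwise
        // identical descriptors differ here, a merge along q-scales might still
        // improve quality (the open question is deciding that without a full pass).
        public int QScaleChecksum;
    }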

Posted by josuah at 10:45 PM UTC+00:00 | Comments (0) | TrackBack

February 4, 2003

Continued Work on Vic Dumper

I continued working on the Tcl necessary for the Vic Dumper application I need for the NCNI video project. Still a lot to do here.

Posted by josuah at 4:21 AM UTC+00:00 | Comments (0) | TrackBack

February 3, 2003

Verified NCNI Dumps

Before I return the DV deck and tape to Tyler Johnson, I wanted to make sure that my video dumps came out okay. So I did a quick run recasting the dumps to make sure there wasn't anything weird, like empty frames. Everything seemed fine.

Posted by josuah at 6:47 PM UTC+00:00 | Comments (0) | TrackBack

January 30, 2003

Oracle Slide & More

I was asked by Mladen Vouk at NC State to prepare a few PowerPoint slides about the VQM Oracle for John Streck to maybe use when he goes down to Miami for a meeting. It took me a while to put this together because PowerPoint is really not well designed when it comes to handling complicated animations. But eventually I got everything how I wanted it on a single slide.

At our meeting today, Ketan and I talked a little bit about the project I can work on for COMP 249 (Multimedia Networking): a new type of video codec description that lets you easily take a video apart and put it back together. I got the MPEG-2 specification from Ketan and need to read through it to figure out what parts of an MPEG datastream can be isolated and removed, based on the type of information encoded.

Also, this Friday I am leading the Netlunch discussion on the paper Analysis of a Campus-wide Wireless Network that was done at Dartmouth.

Posted by josuah at 2:54 AM UTC+00:00 | Comments (0) | TrackBack

January 29, 2003

NCNI Dumps & IOXperts Compression Types

Today I dumped three different sequences of approximately 8 seconds from the DV tape given to me by Tyler Johnson. The first is just video focused on a lecturer. The second switches from the lecturer to part of the audience. And the third switches from a projection to the lecturer. It's strange though, that the playback time calculations seem to indicate up to 12 seconds of video. However, that might be because I'm printing information to STDOUT while recasting the RTP packets.

I also worked a little bit on Open Mash and checked the compression types available in the IOXperts FireWire WebCam driver, but the only value returned was the default component YUV format (unless I am traversing the returned list incorrectly). The video digitizer doesn't return an error code when I set it to that format either. But for some reason I'm still not getting frame grabs.

Posted by josuah at 1:03 AM UTC+00:00 | Comments (0) | TrackBack

January 27, 2003

Got NCNI Footage

Alisa Haggard dropped off a DV deck and tape from Tyler Johnson today. The tape has footage from lectures, and Tyler has indicated a few time slices that are most interesting to him. The DV deck has composite out, so I can dump some video soon. I still need to finish the vic dumper application.

I also downloaded the Linux version of the VQM Software and will have to play around with that a little bit.

Posted by josuah at 5:52 PM UTC+00:00 | Comments (0) | TrackBack

January 24, 2003

IE2003 Trip

I got back from my trip to the Electronic Imaging conference in Santa Clara. I didn't pay for any of the presentation sessions, so I didn't get to see anything except posters and company exhibitors. It's a relatively small conference, but some of the posters were kind of interesting.

I think the most interesting were a set of three posters about image database searching and browsing. All of them would classify images according to some criteria. The browsing system lets you move through the database contents according to something like luminance or color. The searching system would find images similar to a sample image. Combining this type of quick search by example with a browsing system could be very useful.

Posted by josuah at 10:13 PM UTC+00:00 | Comments (0) | TrackBack

January 20, 2003

IOXperts FireWire WebCam Driver v1.0.6

IOXperts just released version 1.0.6 final of their FireWire WebCam driver for Mac OS X. Open Mash only supports 1.0.5r3, which is no longer available for download. I got my iBot on Friday, and have been digging into adding more robust support for DCAM capture with both the IOXperts driver and also the one by OrangeWare which is free for the iBot.

There are actually two video digitizers in the IOXperts package, with 'IOx0' and 'IOx1' for the manufacturer, 'DCam' for the subtype. The OrangeWare digitizer, on the other hand, has 'DCAM' for its subtype, 'OPCA' for the manufacturer. For some reason, opening with OpenDefaultComponent works, but OpenComponent doesn't. Of course, the correct way to do this is with OpenComponent after checking for a digitizer that has the capabilities required by Open Mash.

Posted by josuah at 3:24 AM UTC+00:00 | Comments (0) | TrackBack

January 17, 2003

Finished Poster

I finished my Electronic Imaging poster today. I did get around to remaking the figures as EPS files in Adobe Illustrator. It didn't take as long as I thought it might, since I ended up only using two figures. I really need to upgrade to Illustrator 10, since version 7.0 only runs under Classic. Karbon, a vector drawing program by the KDE people isn't ready for real use. I should also get a copy of Adobe InDesign; LaTeX is good for some things, but InDesign is still a lot better for other things. LaTeX is more like Adobe FrameMaker, and my copy of Adobe PageMaker is also limited to Classic.

Anyway, tomorrow I need to see about printing out my poster. I'm not sure what I'm supposed to do to get a nice background on it. The best way would be to get some patterned paper, but it might be that I'm supposed to print out the background. That's something I still don't know how to do in LaTeX.

Posted by josuah at 3:22 AM UTC+00:00 | Comments (0) | TrackBack

January 15, 2003

Minor Updates

I've continued my work on the Electronic Imaging poster. The original TIFF images make huge EPS files; I don't know if I'll have time to create original EPS files instead. If the final PostScript document is too large, the printer may be unable to process it.

I also got an email from Stephen Wolf today. The Linux version of the VQM Software was made available today.

Posted by josuah at 2:10 AM UTC+00:00 | Comments (0) | TrackBack

January 14, 2003

IE2003 Poster

I will be presenting a poster on my paper Implementation of a Real-Time Software-only Image Smoothing Filter for a Block-transform Video Codec at this year's IS&T/SPIE Electronic Imaging conference. The UNC CS department has a large printer with PowerPoint templates, but of course since I don't use PowerPoint I'm making my poster using LaTeX (Adobe PageMaker does not support such large document sizes). It's been slow going but I'm making some good progress, and learning a lot more about LaTeX along the way.

Posted by josuah at 4:57 AM UTC+00:00 | Comments (0) | TrackBack

January 8, 2003

Vic Dumper

With my previous modifications to Open Mash, I can have vic dump RTP packets and YUV frames to disk. The rtp-recast program will recast those RTP packets with the correct interframe delay. But for this to be fully automated, I need a non-GUI version of vic that can be launched (or kept running) and automatically dump the received video as YUV frames to disk for comparison by the VQM Software.

So I started working on an Open Mash application I'm calling Vic Dumper (vicdump) that will do this. I'm basing the Tcl code, which will use smash instead of mash, on a combination of the vic, rvic, and rvic-cl code. That should let vicdump handle the receipt of video streams. The one thing I'm not sure about is whether or not Open Mash will stop dumping frames to disk once it has stopped receiving frames. I think it would, since the decoder would no longer be called and the image buffer would no longer be updated.

Since it looks like I'll have to wait a while on the Linux VQM Software and the stock video footage, this is what I can work on for now.

Posted by josuah at 3:11 AM UTC+00:00 | Comments (0) | TrackBack

January 6, 2003

VQM Batch Processing & Non-blocking Video Digitizer Polling

I just got an email back from Stephen Wolf about batch processing with the VQM Software. They have support for that under the UNIX versions, but not the Windows version. Not that surprising, since Windows does not do the command line or remote/automatic program execution very well. (Something Microsoft doesn't like to admit, but they know about. Recent leaked documents show this.) UNC doesn't have any of the UNIX machines needed to run the UNIX versions, but Stephen did mention there is a Linux version in the works. I've got a little bit of time before I really need to automate the VQM Software, since NCNI is still working on getting the stock video footage for me.

Unrelated, Claudio Allocchio is going to run some audio and video tests under Mac OS 10.1.5 tomorrow. The audio response time should be much better as I described in an earlier entry. But there are problems with the IOXperts beta FireWire WebCam driver. I've sent Claudio a version of the video capture code that switches from blocking to non-blocking polling of the video digitizer. This lets the Tk thread get back to handling other events. I don't think this will fix the capture problem, but it should stop vic from getting stuck in the polling loop and freezing for the user. Hopefully some error messages will get printed out or something which might explain what's wrong with the video code with the beta driver.

Posted by josuah at 10:54 PM UTC+00:00 | Comments (0) | TrackBack

PC VQM Software

I downloaded and tried out the VQM Software on my Windows 98SE system. The software launched fine, even though it's only supported on Windows 2000. However, it looks like we may not be able to use it. The software is designed to be used through a GUI, and there wasn't any mention of a command line interface. I'm guessing that I'm going to have to implement the VQM algorithm myself to compare two Big YUV files. I've emailed Stephen Wolf just in case he has any ideas.

Posted by josuah at 9:29 PM UTC+00:00 | Comments (0) | TrackBack

January 4, 2003

Improved Mac OS X Audio Performance

Yesterday, Claudio confirmed that my change to the Mac OS X audio code in Open Mash does noticeably improve the response of audio input on 10.2.2. Given that, it should no doubt have an even more drastic improvement under 10.1.5. So I committed the change.

However, Claudio reports that my changed video code only makes vic freeze on transmit with the 1.0.6bX drivers from IOXperts. I will need to get a FireWire camera and try to debug this myself, as it will be too difficult to keep going back and forth with Claudio or other testers on this problem.

Posted by josuah at 5:30 PM UTC+00:00 | Comments (0) | TrackBack

December 30, 2002

PVR Paper Submitted

I just submitted "Evaluating the Effectiveness of Automatic PVR Management" to ICME 2003 and FAXed the copyright form to them. Turns out our paper number was 1442 and not 1448 as I thought. So, everything should be all set. Now I'm going to put the paper up here as well.

Posted by josuah at 6:08 PM UTC+00:00 | Comments (0) | TrackBack

December 28, 2002

Evaluating the Effectiveness of Automatic PVR Management

That's the long title of the paper Ketan and I are submitting to ICME 2003. Ketan emailed me earlier today with his revisions and formatting changes, and I did some spellchecking and minor clean-up. Some parts of the paper look kind of squished, but it fits in at 4 pages now.

Unfortunately, I'm having trouble submitting it to ICME 2003; it looks like our submission was deleted since we hadn't uploaded a paper yet. I'm emailing Ketan about this now because I don't know the review category he wanted to submit this to.

Posted by josuah at 7:47 PM UTC+00:00 | Comments (0) | TrackBack

December 24, 2002

ICME 2003 Submission

I just finished up the figures and writing up the model and results sections for the TiVo Nation paper Ketan and I have been working on for the past couple of months. We are submitting it to ICME 2003. The deadline is December 31, so we still have some time to polish it up. Ketan is out of town right now, but he's going to look it over and let me know if there's anything else I need to add or change. If not, we're pretty much all set to go. It's five pages right now and the limit is four pages, so Ketan is going to have to cut some of his introduction fluff out. There's not much I can do to cut down on the parts I wrote. I'll add a link to this paper on my home page and list it on my resume after everything is finalized.

The basic conclusion is that if you are picky enough or don't know about enough of the shows, then an automatic PVR can do much better than you could on your own.

Posted by josuah at 10:42 PM UTC+00:00 | Comments (0) | TrackBack

December 22, 2002

XFIG Plots

With the correct simulation data, I created a 3-D graph of content utility distribution (CUD) 10, decay 0.975, and consumption 8 and the corresponding human-automatic intersection using Gnuplot. I dumped this out to XFIG format. I then made graphs of the CUDs with decay 0.975 and consumption 8, the decays with CUD 10 and consumption 8, and consumptions (2, 4, 6, 8, 12) with CUD 10 and decay 0.975. I'm going to email these off to Ketan to see what his thoughts are.

Posted by josuah at 10:01 PM UTC+00:00 | Comments (0) | TrackBack

Another Error Calculation Error

I found another big problem in the automatic policy utility error calculations. Instead of returning an error value between -1.0 and +1.0 (e.g. +/- 0.05 for 5% error), I was returning error values one hundred times greater. I was coding the error margins incorrectly as integers instead of the fractions they should be. So my +/- 50% error was adding or subtracting up to 50 from the utility value which is only supposed to range from 0.0 to 1.0 (or -0.5 to +1.5 with a 50% error range). So once again, I'm rerunning the simulation.
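
For my own future reference, the corrected calculation boils down to something like this (names invented; the real simulator code is structured differently):

    #include <cstdlib>
    #include <iostream>

    // Sketch of the corrected error handling: the margin is a fraction, so a
    // 50% margin perturbs a utility (which ranges from 0.0 to 1.0) by at most
    // 0.5 in either direction, and it is added to the utility, not multiplied.
    double perceivedUtility(double utility, double errorMargin)
    {
        // random fraction in [-errorMargin, +errorMargin]
        double error = ((2.0 * std::rand() / RAND_MAX) - 1.0) * errorMargin;
        return utility + error;
    }

    int main()
    {
        std::srand(42);
        std::cout << perceivedUtility(0.8, 0.5) << std::endl; // somewhere in [0.3, 1.3]
        return 0;
    }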

Posted by josuah at 12:05 AM UTC+00:00 | Comments (0) | TrackBack

December 20, 2002

Rerunning Simulation

Turns out I accidentally had the error utility calculations equal to "utility * error" instead of "utility + error". That might explain why the automatic policy always had the same average real utility despite the error range. So I'm rerunning the simulation with that fix. Once that's done I'll pass things through my extract and split programs, and then the intersect program to find the nicest CUD x decay pair for the 3-D graph and consumption rates.

Posted by josuah at 3:57 AM UTC+00:00 | Comments (0) | TrackBack

December 19, 2002

TiVo Intersect Tool

I just finished writing a quick and dirty Perl script to spit out the (awareness x error) points from a set of human and automatic policy utility data points. Now I'm running a larger set of simulations with +/- 5%-50% error in 5% increments and 1%-15% awareness in 1% increments with the existing content utility distributions (CUDs) and decay rates for a consumption rate of 8. Once that's done, I'll pick a good pair of CUD and decay to run with other consumption rates. I haven't decided what other four consumption rates to use, but I think 2, 4, 12, and 16 might be good. Those correspond to a person watching 1, 2, 6, and 8 hours of television a day.

Posted by josuah at 7:30 PM UTC+00:00 | Comments (0) | TrackBack

December 18, 2002

Mac OS X Open Mash Testing

I got around to making some possible bug fixes addressing the performance and driver issues reported by some people regarding Open Mash on Mac OS X.

I made a one-line change to audio-macosx.cc to increment the available bytes counter as each input byte is stored in the ring buffer instead of updating the available bytes all at once after all the bytes have been dumped into the ring buffer. This may or may not improve performance under Mac OS 10.1, but I hope it does. Perhaps the input buffer was really large so all this playing with memory was taking a while. But if it's the resampling and filtering that is causing the slowdown, then this isn't going to make much difference. In that case, I'll have to break down the resampling and ring buffer population into two or more iterations. That could be the solution. I've sent my changed file off to Claudio Allocchio and Denis DeLaRoca to see if they notice any improvement under Mac OS 10.1.5.
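
Conceptually the change is just this (a sketch, not the actual audio-macosx.cc code):

    #include <cstddef>

    // Sketch of the idea: the input callback copies captured bytes into a
    // ring buffer shared with the reader, and bumping the available count as
    // each byte lands lets the reader start draining before the whole block
    // has been copied, instead of waiting for one big update at the end.
    struct RingBuffer {
        static const std::size_t kSize = 16 * 320; // e.g. 16 blocks of 20ms audio
        unsigned char data[kSize];
        std::size_t writePos;
        volatile std::size_t available;            // bytes the reader may consume
    };

    void storeInput(RingBuffer& rb, const unsigned char* in, std::size_t len)
    {
        for (std::size_t i = 0; i < len; ++i) {
            rb.data[rb.writePos] = in[i];
            rb.writePos = (rb.writePos + 1) % RingBuffer::kSize;
            ++rb.available;   // previously: available += len; once, after the loop
        }
    }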

I also added a VDSetCompression call to video-macosx.cc to hopefully address the change in the IOXperts FireWire webcam driver. I've sent this to Claudio to try out, but also to Paolo Barbato because it might make the IOXperts USB webcam driver work as well. Apple's API and abstraction are very nice, in that I don't even have to care what's hooked up because everything uses a Video Digitizer. I just have to make sure my API calls are correct. If this change doesn't fix things, I'll have to ask on the IOXperts beta discussion lists.

Posted by josuah at 11:33 PM UTC+00:00 | Comments (0) | TrackBack

TiVo Utility Graphs

I just got back from a meeting with Ketan where we talked about where to go from here, now that we have some good results. The basic idea is to compare human awareness (X-axis) to automatic error (Y-axis) as a measure of utility (Z-axis). The intersection of the human and automatic curves should create a line that indicates the point where a human does better than a heuristic policy (and vice versa). The idea now is to make three 2-D graphs of human versus automatic for five content utility distribution (CUD) functions, five time decay values, and five consumption rates. And also a 3-D graph providing an example of the X-Y-Z graph I described above.

Ketan's already made quite a bit of progress on the write up. My job now is to create a program that given the two data sets computes the intersection to create the 2-D graphs. The five existing CUDs are good, but we want to get more data points under 10% human awareness, since that appears to be the point where the human policy always does better. So maybe 1%-15% will be our awareness range. And push up error to as much as +/-50% since +/-25% doesn't always seem to show much difference. I also need to find a good fixed point of CUD and decay for the 3-D graph, which will also be the fixed point used for the five consumption rates.

Posted by josuah at 7:56 PM UTC+00:00 | Comments (0) | TrackBack

December 14, 2002

Power Restored

It's been a long time since I last put in an entry because I just got power restored. An ice storm hit Wednesday night last week and I was out of power until now. The short of things is that I haven't worked on the NCNI project since my last entry, but Ketan and I have done a lot of rethinking and remodeling of the TiVo Nation simulator.

Our first and second attempts ran into problems because we didn't accurately consider several things, and used parameters that really don't matter at all (although they seemed somewhat important before). Right now, I'm changing the model so that we don't care about compression but instead are only interested in at what error level a fully aware system performs better than a semi-aware human. We also did not include a utility decay factor in my previous simulations, although I remember we had talked about that a long time ago; that's why I have always had a timestamp attached to objects. Without this decay making old shows less useful, the cache simply becomes saturated at some point, since a user can never watch as many shows as could be recorded.
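
The decay itself is simple; something along these lines (a sketch with invented names):

    #include <cmath>

    // Sketch of the utility decay: each stored show's utility gets scaled by
    // decay^age, so with a decay just under 1.0 (something like 0.975 per
    // half-hour slot) old recordings gradually lose out to new arrivals and
    // the cache no longer just saturates.
    double decayedUtility(double baseUtility, double decay, int ageInSlots)
    {
        return baseUtility * std::pow(decay, ageInSlots);
    }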

So, I'm going to do some runs this weekend and meet with Ketan on Monday to discuss things. I think we'll get some good results this time. The submission date has been pushed back to December 31, so we have some time left to do this right.

Posted by josuah at 7:37 AM UTC+00:00 | Comments (0) | TrackBack

November 27, 2002

The Open Video Project

Well, I found the web site Ketan was talking about: The Open Video Project. Unfortunately, the majority of its files are in one of MPEG-1, MPEG-2, or MPEG-4, and there is no raw data available. Probably in large part due to the capacity requirements that would be associated with storing the raw frames in full detail. It did surprise me that MPEG-4 versions of the same clip were approximately 10% of the MPEG-2 size. Regardless, this is a dead end.

Posted by josuah at 3:21 PM UTC+00:00 | Comments (0) | TrackBack

Larger TiVo Simulation

I added a bunch more policies to the existing TiVo Nation simulator and am running that right now. I'm guessing this will generate about 25,000 to 30,000 data points. Somehow I have to get all those data points onto graphs. Unfortunately there are about 7 independent variables that are combining to generate those tens of thousands of data points (and that's after taking only average, variance, and standard deviation). That either means a whole lot of graphs, or a single 7-dimensional graph. The latter would be a lot nicer to work with, but that seems impossible.

Posted by josuah at 3:14 PM UTC+00:00 | Comments (0) | TrackBack

November 26, 2002

NCNI Meeting

Ketan and I attended another NCNI meeting today. Most of the time was spent discussing the path characteristics program a different student is working on. The information gathered by that program would eventually be used by our oracle to model the transmission link.

As for my project, we told people where we are right now. Tyler said he would get us some real video footage in digitized format (hopefully) which we can use to generate our actual test library. Ketan has seen mention of some sort of open video test library which I will look for. That might provide us with some good test footage, assuming it has the video in raw form and not after it has been encoded. Otherwise we would be degrading an already degraded stream. I also need to test out the Win32 version of the VQM Software. This will need to be automated somehow, and that might prove very difficult if the software is entirely GUI based.

Regarding my other project, the TiVo Nation simulator, I need to get a few more graphs, including one where the TiVo-Human cache policy becomes increasingly aware of the total object space. Looks like I'll be generating another several thousand data points. By the time Ketan gets back from his trip/vacation, I should have all those plots finished up so we can just put together the paper for submission.

Posted by josuah at 2:25 AM UTC+00:00 | Comments (0) | TrackBack

November 24, 2002

TiVo Simulator Data

The TiVo Nation simulator finished all the tests in about 10-15 minutes. It generated about 9MB of data. I processed the raw output utility values, dropping the first 24 hours (48 iterations) of the data to allow the cache to somewhat stabilize. I then calculated the average, variance, and standard deviation. At first glance, it seems that the average utility is very close between the two different cache policies tested (TiVo-human and fully-aware approximating) but the variance and standard deviation are much lower with the fully-aware approximating policy. I need to plot some comparative graphs, and also try a third fully-aware approximating compressing cache policy. This policy should show some real gains over the TiVo-human policy.

Posted by josuah at 10:12 PM UTC+00:00 | Comments (0) | TrackBack

November 22, 2002

TiVo Simulator Plugged

I found the memory leak in the C++ version of the TiVo Nation simulator. In Perl I didn't have to worry about the objects that weren't added to the cache, since they would no longer be referenced and automatically garbage collected. Not so, of course, under C++. I had forgotten to check whether the object was actually added to the cache and, if not, to explicitly delete it. No memory leaks anymore. The simulator is running right now and seems to be using a fixed 672 bytes of memory. In just under 6 minutes, I've gotten over 3MB of data. It would have taken the Perl script an hour or more to generate the same amount.
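
The fix amounts to something like this (simplified; the real cache class and replacement policy are more involved):

    #include <cstddef>
    #include <vector>

    // Simplified sketch of the leak and the fix. In Perl an object the cache
    // declines simply loses its last reference and gets garbage collected;
    // in C++ it has to be deleted explicitly or it leaks.
    struct Object { double utility; };

    class Cache {
    public:
        explicit Cache(std::size_t capacity) : capacity_(capacity) {}
        ~Cache() { for (std::size_t i = 0; i < objects_.size(); ++i) delete objects_[i]; }
        // Returns true if the cache took ownership of obj.
        bool add(Object* obj) {
            if (objects_.size() >= capacity_) return false; // stand-in for the real policy
            objects_.push_back(obj);
            return true;
        }
    private:
        std::size_t capacity_;
        std::vector<Object*> objects_;
    };

    void offerToCache(Cache& cache, Object* obj)
    {
        if (!cache.add(obj))
            delete obj; // the check I had forgotten: rejected objects must be freed
    }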

Posted by josuah at 10:10 PM UTC+00:00 | Comments (0) | TrackBack

C++ TiVo Simulator Leak

I completed the conversion of the TiVo Nation simulator from Perl to C++. But there's a massive memory leak which I'm trying to track down, and I also need to actually verify that the logic is correct. That's the problem with a low-level language like C++; in a high-level language like Perl things just work and it's a lot easier to express ideas. On the up side, it's running at least an order of magnitude faster. (As would be expected by losing all of those eval statements in Perl.)

Posted by josuah at 8:01 AM UTC+00:00 | Comments (0) | TrackBack

November 20, 2002

Mac OS X Open Mash Issues

I've been corresponding with Claudio Allocchio, an Open Mash on Mac OS X user, for the past couple of weeks. He's run into a few problems with vic and vat.

When transmitting audio using vat from his iBook the loopback is delayed 0.5 to 0.75 seconds, and a receiving vat running under Linux is delayed up to 3 seconds. That's much larger than the 16 * 20ms ring buffer used, so I'm not sure where that's coming from. I've asked him to print out some ring buffer indices while testing to see if that shows anything.

He's also had a problem with vic crashing with the following error messages when he tries to use the "Mac OS X" source option. I'm guessing this occurs when he doesn't actually have a device attached; I need to add the appropriate checks to the code to ensure choosing "Mac OS X" does not do anything when there is no capture device.

QuickTime error [-32767]: VDCompressOneFrameAsync
QuickTime error [0]: OpenDefaultComponent
Segementation Fault
(crash)

Posted by josuah at 6:07 AM UTC+00:00 | Comments (0) | TrackBack

Running TiVo Nation Simulator

I just finished making my latest revisions to the TiVo Nation simulator and am now running a test for two weeks (672 half-hour iterations) on a box that can hold up to 60 full-size objects. It's dumping the results to what will be a very large text file.

Posted by josuah at 5:53 AM UTC+00:00 | Comments (0) | TrackBack

November 19, 2002

Some TiVo Simulator Curves

I made some modifications to the TiVo Nation simulator code as described in my previous journal entry. I now have three curves each for space and utility, but a large number of error curves, because what we want to find out is at what point the different policy behaviors cross. I still need to implement the new policies described and fix the cache utility versus actual utility calculations, which I will do tomorrow, and then run the simulator to get some utility plots.

Posted by josuah at 3:28 AM UTC+00:00 | Comments (0) | TrackBack

November 14, 2002

TiVo Nation Discussion

At our meeting today, Ketan and I talked a bit more about what tests to run on the TiVo Nation simulator. To keep the total number of tests down to a minimum, we decided to use one low, average, and high curve for each input parameter. This corresponds to standard, broad, and narrow distributions and linear, fast growth, and slow growth curves. Margins of error will be zero, moderate, and large. He also pointed out that the quality versus space curves should not have any margin of error applied, since it's fairly predictable what size will result from a particular codec given a quality value. I also need to fix the simulator so that although the cache policy looks at the utility with the margin of error, the actual utility values output should not include the margin of error (since the viewer knows exactly what a particular object's utility is).

We also discussed some cache policies. To model an existing TiVo user, we came up with a scheme where the user only has awareness of a certain percentage of the object space. Anything outside this portion of the space will be ignored. Anything inside this portion of the space will be placed into the cache, removing those objects of lesser utility. This models the current TiVo replacement policy which requires the user, who only knows about a certain number of shows, to delete cached shows to make room for a show the user would like to watch. In contrast, a simple automatic cache policy would know about the entire object space, and remove objects of lesser utility to make room in the cache, but would have some margin of error for determining object utility. Comparing the average utility curves as margin of error ranges to the average utility curve as viewer awareness ranges will provide an interesting view of when the automatic cache policy becomes useful. A natural extension of the automatic cache policy is one where objects are compressed to try and maximize the number of objects and overall utility.
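
To make the two policies concrete, here is roughly how I picture them (a sketch only; names and structure are invented and the simulator code is organized differently):

    #include <algorithm>
    #include <cstddef>
    #include <cstdlib>
    #include <vector>

    // Rough sketch of the two replacement policies.
    struct Show { double utility; };

    static bool lowerUtility(const Show& a, const Show& b)
    {
        return a.utility < b.utility;
    }

    // Keep the candidate by evicting the lowest-utility show when full.
    static void insertByUtility(std::vector<Show>& cache, std::size_t capacity,
                                const Show& candidate)
    {
        if (cache.size() < capacity) { cache.push_back(candidate); return; }
        std::vector<Show>::iterator worst =
            std::min_element(cache.begin(), cache.end(), lowerUtility);
        if (candidate.utility > worst->utility)
            *worst = candidate;
    }

    // Human-like policy: the viewer only ever hears about a fraction
    // (awareness) of the object space; everything else is ignored.
    void humanPolicy(std::vector<Show>& cache, std::size_t capacity,
                     const Show& candidate, double awareness)
    {
        if ((double)std::rand() / RAND_MAX > awareness) return;
        insertByUtility(cache, capacity, candidate);
    }

    // Automatic policy: sees every show, but judges it by a perceived utility
    // that carries a fractional margin of error; the true utility is what
    // gets reported in the results.
    void automaticPolicy(std::vector<Show>& cache, std::size_t capacity,
                         const Show& candidateAsPerceived)
    {
        insertByUtility(cache, capacity, candidateAsPerceived);
    }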

Unfortunately, neither of us has any idea of how to even start looking at what sort of distribution, consumption, or utility curves would accurately model real television viewing behavior. The object arrival rate is relatively constant since there are a set number of channels broadcasting at any given time. We can sort of guess as to what sort of curves are appropriate for the other things, but there's nothing we can point at to support our choice of curves. Mark Lindsey had looked into this a little bit, so Ketan suggested I email him to see if he had any idea.

Posted by josuah at 12:02 AM UTC+00:00 | Comments (0) | TrackBack

November 12, 2002

TiVo Nation Simulation Framework Complete

I just fixed the problem I was having with the anonymous subroutines from an external file. So the framework is complete and now all that's left is to actually run some simulation tests.

Posted by josuah at 6:12 AM UTC+00:00 | Comments (0) | TrackBack

TiVo Nation Framework

I just finished up the basic framework for the TiVo Nation simulation script. I figured out some good mathematical equations for representing the exponential curves and found some for the distribution curves, and verified them using Graphing Calculator by Pacific Tech. By using Perl's eval function it's very easy to simply write these equations out in a separate file and then include them in the simulation.

Unfortunately I've run into a little problem trying to incorporate anonymous subroutines from an external file into the program. I can't seem to assign an array of anonymous subroutines. I'll have to figure out what's going on there, otherwise it'll be harder to include arbitrary policies in the tests.

Posted by josuah at 3:14 AM UTC+00:00 | Comments (0) | TrackBack

November 6, 2002

TiVo Nation

I just got done with my weekly meeting with Ketan. Tyler Johnson did reply to my email but his only response was to schedule another meeting for November 25th at MCNC. So I don't have anything to do between now and then unless the VQM Software is made available for Win32.

So we started talking about what else I might be able to do between now and then. Ketan's had a noticeable obsession with TiVo and a new cache and distribution scheme he calls TiVo Nation. The basic idea is to use a combination of different caching policies based on local degradation with possibilities of reconstruction or P2P caching. In other words, store more things in your cache at lower quality, with the possibility of grabbing stuff out of your neighbors' caches and maybe restoring things to the original quality by combining what you get from other people with what you have.

Anyway, ICME 2003 has put out a call for papers with a December 15 deadline. The paper is only supposed to be four pages. So Ketan's idea was to do some cache policy versus user utility comparisons, given a particular quality/space tradeoff, in an effort to figure out at what point it makes sense to use different caching policies. This is regardless of the P2P or reconstruction aspects described above. This could be simulated with fairly simple mathematical models in a relatively quick manner, and produce enough content for a four page paper.

There are a few different input parameters to consider when putting this model together. There is a quality versus space curve that represents the compression capabilities of an arbitrary codec. There is a quality versus utility curve that represents how much an object is worth to the user at a given quality. There is a content utility distribution (think bell curve) that represents how much an object is worth to the user based on its content. Two additional parameters are needed which are object arrival rate and object consumption rate, representing the object broadcast rate and user's object deletion rate. Some additional variance is added by having the content and/or quality versus utility curves have some margin of error, and by using non-uniform consumption rates.
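
One way to picture those inputs (names invented; the real script reads its curves out of a separate text file so different shapes can be swapped in per run):

    // Each curve is just a function, so different shapes can be plugged in.
    typedef double (*Curve)(double);

    struct ModelParams {
        Curve  spaceForQuality;    // codec behavior: quality -> relative size
        Curve  utilityForQuality;  // viewer: quality -> scaling of utility
        Curve  contentUtility;     // content utility distribution (bell-ish)
        double arrivalsPerSlot;    // broadcast rate, objects per half-hour slot
        double consumptionPerDay;  // half-hour shows watched per day (e.g. 8 = 4 hours)
        double utilityErrorMargin; // fractional error on perceived utility
    };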

Anyway, I like Perl and that's what I figure I'll use to put this together. If that proves too slow, then I'll use embedded Perl to compile a C executable that will still let me parse those input parameters extremely easily from plain text files. At least, that's the plan for now.

Posted by josuah at 9:17 PM UTC+00:00 | Comments (0) | TrackBack

November 1, 2002

No Chord Test Collaborator

Well, I asked all the Duke students who did attend today's TONIC meeting if any of them would be interested in running some Chord tests on the Modelnet running at Duke. Not too surprisingly, none of them were interested.

Posted by josuah at 8:20 PM UTC+00:00 | Comments (0) | TrackBack

October 31, 2002

Emailed Tyler

As I mentioned in my previous post, we need to figure out the best way to gather link characteristics from endpoints. Since I thought ICMP packets would be the best way to do this and would let endpoints initiate traces by themselves, I emailed Tyler Johnson and asked if the NCNI routers would handle ICMP packets differently from UDP packets. Hopefully the answer will be no. I also asked him what sort of systems clients will be using, since our link probing software will have to run on those systems. Once he gets back to me on that I will be able to start writing up a new tool or choosing an existing tool to do this probing.

Posted by josuah at 7:41 PM UTC+00:00 | Comments (0) | TrackBack

More Dumps and More

I got a bunch more video dumps off vidcap4-cs from the cable TV hookup. I used FXTV to change the channel and tuner settings. I grabbed some from The Weather Channel, FOXNews, and Nickelodeon, among others. I also wanted to get some sports and music videos off ESPN and MTV, but they weren't showing anything suitable for several dumps over a period of minutes. All in all that would give me a good spectrum of video stream types.

At today's meeting with Ketan, we talked about what's necessary to get link characteristics. I think a great way to go about this would be to get ICMP traces instead of TCP or UDP because ICMP does not require both endpoints to cooperate. This way, a single endpoint can gather link statistics without having to worry about setting anything up with the other end. However, this raises the interesting question of whether or not routers will treat ICMP packets differently than UDP packets (which is what RTP is encapsulated in). Ketan suggested I email Tyler Johnson with a status update to find out how well ICMP packets will work, and also what sort of environment we can expect on the enduser systems. Tyler can answer this because many of those clients will be using the NCNI infrastructure to communicate, so the intermediate routers and client environments are known.

On another topic, Ketan also suggested I look into collaborating with a Duke student to run some Chord stress tests in either the DiRT lab or Modelnet at Duke. Using Modelnet makes more sense because it can better emulate a WAN while the DiRT lab is pretty much a heterogeneous, isolated, and dedicated network. I'll look into this at this week's TONIC.

Posted by josuah at 6:17 AM UTC+00:00 | Comments (0) | TrackBack

October 28, 2002

Cable TV

Since Ketan suggested grabbing some video sequences off cable TV (there is a cable drop into just about every room in Sitterson), I went about trying to get vic to pick up the cable signal and use it. Didn't have any luck at first, so I poked around for sample code of how to interface with the TV tuner on a Bt878; the card in vidcap4-cs is one of the Hauppauge WinTV models. I found FXTV which does everything necessary. I'm going to take a look at the source for FXTV and see how I might go about adding Bt8x8 TV tuner capabilities to Open Mash. This may or may not be something I actually get around to doing, as this isn't really necessary for the NCNI project.

Posted by josuah at 10:49 PM UTC+00:00 | Comments (0) | TrackBack

October 25, 2002

Some Sitting Captures

I didn't come in to NetLunch today but I did grab several frame sequences from home via vidcap4-cs, which is hooked up to the fourth video camera in 155 Sitterson. I grabbed video of a few people sitting in chairs using h.261 with qualities 10 and 1 at CIF and QCIF sizes, and using MJPEG with qualities 30, 60, and 90 at 640x480, 320x240, and 160x112 sizes.

One annoyance with the current dump code is that the RTP dump filename does not match up with the YUV dump. Unfortunately, the base encoder class does not guarantee knowledge of the frame characteristics, so I'm not sure how I can go about fixing this without more tightly coupling the frame with the encoder. The base encoder is only guaranteed to know about the RTP packets.

Posted by josuah at 4:10 PM UTC+00:00 | Comments (0) | TrackBack

October 24, 2002

NCNI Progress Update

It's been a while since I last posted anything about the NCNI video quality project. This is because we've been waiting on the VQM Software. It was made available today to people who sign the electronic license agreement mentioned earlier, but only for HPUX, IRIX, and Sun OS on non-x86 processors. The x86 Windows NT/2k version is not yet available and won't be available for a couple more weeks.

I thought the VQM Software would be available for Windows NT/2k at the beginning of this week, but Stephen Wolf has stated it will be 2-4 more weeks. So, Ketan and I briefly talked about what sort of image sequences I should dump to disk. I'm going to try and capture some video conferencing sequences tomorrow during the weekly NetLunch discussion. Ketan also suggested grabbing some dumps from the internal cable TV drops; I need to see if there's a TV tuner in Open Mash for that. We didn't get a chance to discuss what tools would be best suited for characterizing links or the best way to use that information.

Posted by josuah at 6:34 PM UTC+00:00 | Comments (0) | TrackBack

October 20, 2002

Mac OS X Video Capture Complete

After several hours of work, I finished support for Open Mash video capture in Mac OS X with the IOXperts Universal FireWire WebCam driver.

Video capture is currently done with asynchronous compressed frame grabs. It may be faster to do this with a sequence grabber instead, but I can't tell right now as capturing is CPU bound on my iBook (clamshell G3-466MHz Special Edition). The IOXperts driver always returns a 640x480 kComponentVideoCodecType (yuv2) image and this is cropped and converted from packed to planar.
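
The packed-to-planar step is conceptually something like this (a sketch only; I'm assuming a simple Y0 Cb Y1 Cr packed order, and the real code also has to crop from 640x480 and deal with yuv2's actual byte layout):

    #include <cstddef>

    // Sketch of packed 4:2:2 to planar 4:2:2 conversion, assuming a packed
    // byte order of Y0 Cb Y1 Cr per pixel pair and an even width.
    void packedToPlanar422(const unsigned char* packed, int width, int height,
                           unsigned char* yPlane, unsigned char* cbPlane,
                           unsigned char* crPlane)
    {
        for (int row = 0; row < height; ++row) {
            const unsigned char* src = packed + (std::size_t)row * width * 2;
            unsigned char* y  = yPlane  + (std::size_t)row * width;
            unsigned char* cb = cbPlane + (std::size_t)row * (width / 2);
            unsigned char* cr = crPlane + (std::size_t)row * (width / 2);
            for (int col = 0; col < width; col += 2) {
                *y++  = src[0];   // Y for the first pixel
                *cb++ = src[1];   // Cb shared by the pair
                *y++  = src[2];   // Y for the second pixel
                *cr++ = src[3];   // Cr shared by the pair
                src += 4;
            }
        }
    }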

So finally the Mac OS X port of Open Mash is complete. The first and only set of MBone applications for Mac OS. After the next official binary release of Open Mash, I'll try to get it posted on Apple's Mac OS X Downloads site.

Posted by josuah at 9:57 AM UTC+00:00 | Comments (0) | TrackBack

October 14, 2002

Replayed Dumped Video

Chema got back to me and informed me that vic actually listens for RTP streams on even ports only, and the RTCP stream should show up on the odd ports. This is probably mentioned somewhere in RFC 1889 but all I knew was that the RTP and RTCP port numbers were right next to each other. So I was able to use 'netstat -f inet' to figure out exactly which even numbered port vic was listening on for the RTP stream.

Once I got that figured out, I was able to get the receiving vic to acknowledge my recast RTP packets without sending the RTCP stream. However, my recast program was sending them back out way too fast for vic to display the frames; all it would show was the gray thumbnail. So I calculated the frame delay by taking the difference between RTP packet timestamps and reversing the formula used to generate the timestamps to get a delay time in microseconds.
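
Spelled out, the calculation is roughly this (a sketch, assuming the standard 90 kHz RTP timestamp clock that the video payload formats use):

    // Sketch of the delay calculation; unsigned arithmetic handles 32-bit
    // wraparound (unsigned int stands in for the 32-bit RTP timestamp here).
    unsigned long interPacketDelayMicros(unsigned int prevTimestamp,
                                         unsigned int timestamp)
    {
        const double kRtpVideoClockHz = 90000.0;
        unsigned int ticks = timestamp - prevTimestamp;
        return (unsigned long)(ticks * (1000000.0 / kRtpVideoClockHz));
    }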

With RTP packets going out to the correct UDP port and delayed to match the original send pattern as best as possible (as the timestamps are approximations themselves), the receiving vic plays back the recorded frames perfectly. Looks like this part of the project is complete. All I need now is the VQM Software.

Posted by josuah at 11:24 PM UTC+00:00 | Comments (0) | TrackBack

Dumped RTP Headers

I wanted to check out the RTP headers of the RTP packet dump I generated last week, specifically to look at the timestamp value and also to see what the header fields look like over the sequence of packets. I can see that the marker bit is set in the last packet of each frame. The sequence number is monotonically increasing. The synchronization source ID is identical. But the 32-bit timestamp, which is supposed to be based on hardware ticks, is either wrapping around quite quickly or not monotonically increasing. This timestamp is based on the UNIX clock if not coming from the capture device.
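
For reference, the fixed RTP header fields line up like this (per RFC 1889); this is a sketch of the parsing, not vic's actual code:

    // Pull the fixed RTP header fields out of a dumped packet: the marker
    // bit, sequence number, timestamp, and SSRC. Multi-byte fields are in
    // network byte order.
    struct RtpHeaderFields {
        unsigned      version;
        bool          marker;
        unsigned      payloadType;
        unsigned      sequence;
        unsigned long timestamp;
        unsigned long ssrc;
    };

    RtpHeaderFields parseRtpHeader(const unsigned char* p)
    {
        RtpHeaderFields h;
        h.version     = p[0] >> 6;
        h.marker      = (p[1] & 0x80) != 0;
        h.payloadType = p[1] & 0x7f;
        h.sequence    = ((unsigned)p[2] << 8) | p[3];
        h.timestamp   = ((unsigned long)p[4] << 24) | ((unsigned long)p[5] << 16)
                      | ((unsigned long)p[6] << 8)  |  (unsigned long)p[7];
        h.ssrc        = ((unsigned long)p[8] << 24) | ((unsigned long)p[9] << 16)
                      | ((unsigned long)p[10] << 8) |  (unsigned long)p[11];
        return h;
    }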

So, I need to find out first if the RTCP stream is required for Open Mash to acknowledge and process an RTP stream. And second if the RTP packets must be delayed to somewhat match up with the timestamps.

Posted by josuah at 6:55 PM UTC+00:00 | Comments (0) | TrackBack

October 12, 2002

ImageDescriptionHandle OK

So the problem was I needed to allocate memory for the ImageDescriptionHandle, which is of type **ImageDescription. Basically, it needed 4 bytes allocated with NewHandle() to act as the pointer; VDGetImageDescription() was not assigning an ImageDescriptionHandle but instead an ImageDescriptionPtr (which is of type *ImageDescription). So now that I got that straightened out, the actual frame grabbing seems to work.
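
In code, the fix boils down to roughly this (error handling simplified; the 4-byte initial allocation is just what worked for me):

    #include <QuickTime/QuickTime.h>

    // A Handle is a pointer to a master pointer, so the handle itself has to
    // be allocated with NewHandle() before VDGetImageDescription() can fill
    // it in. vd is the video digitizer component instance opened elsewhere.
    static ImageDescriptionHandle getImageDescription(ComponentInstance vd)
    {
        ImageDescriptionHandle desc = (ImageDescriptionHandle)NewHandle(4);
        if (desc == NULL)
            return NULL;
        if (VDGetImageDescription(vd, desc) != noErr) {
            DisposeHandle((Handle)desc);
            return NULL;
        }
        return desc;
    }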

I've set the Video Digitizer compression to 16-bit kComponentVideoUnsigned (yuvs) with a bounding rect (0,0,352,288). But the image data returned by VDCompressDone is 614400 bytes and appears to be in 16-bit 640x480 format instead of CIF size 352x288. Regardless, I have a pointer to the image now, so I just need to figure out how to write the VideoCaptureMacOSX suppress() and saveblks() methods and vic should be able to transmit something, even if it looks like junk because I got the internal formats wrong.

Posted by josuah at 12:28 AM UTC+00:00 | Comments (0) | TrackBack

October 11, 2002

VideoDigitizer Call Sequence

I did some more work on adding video capture support to Open Mash but some things aren't working right. NewTimeBase() is choking with a Bus Error, so I commented that out, which means I couldn't call VDSetTimeBase(). I also should be using VDGetCompressionTypes() to choose a correct compression type for VDSetCompression(), but since Mash only supports YUV I'm just going ahead and setting it to that explicitly. That's not throwing an error, so I'm guessing that's okay. But VDGetImageDescription() is returning -50 (bad parameter) and I don't know why.

I also emailed Wei about the weird DELAY define which causes parts of degas to be included only on Mac OS X. Apparently DELAY is defined in Apple's gcc for some reason. Wei suggested I check on this by passing the -E argument to gcc. If it turns out DELAY is defined anyway, I'll get around to changing DELAY to DELAY_DEGAS or something like that so it won't cause this problem on Mac OS X.

Posted by josuah at 2:57 AM UTC+00:00 | Comments (0) | TrackBack

October 10, 2002

VQM Software Available Soon

I just got an email back from Stephen Wolf over at the Institute for Telecommunication Sciences about the VQM Software. They've finished drafting the no-cost evaluation license agreement, and after we read it over, sign it, and send it back we should be able to download a copy of the software for research purposes sometime in the next couple of weeks. The timing is just about perfect. I've forwarded the license agreement to Ketan for review but I don't see any real problems with it.

Posted by josuah at 2:55 PM UTC+00:00 | Comments (0) | TrackBack

October 9, 2002

More Dumping to Disk

Since I don't want to dump frames to disk all the time, I made some further modifications to Open Mash and specifically vic. Now you can type in a filename and click on a checkbox to start capturing frames to disk right after they are grabbed by the capture device. This is important because it lets me choose when to start and stop recording. Without this, the dump file would have to be post-processed to only include the portion of video I'm interested in. I'm thinking about doing the same thing for reconstructed video, but since I don't really need to do that, it may not happen. I haven't decided whether or not to commit my UI changes back into CVS. Maybe I'll just comment out the Tcl/Tk code that draws the UI for this feature and commit everything that way. I'm going to commit my changes to codec/device.[h|cc] no matter what, though, because that append function will be incredibly useful for someone else later on.

I also met with Ketan for our weekly meeting and gave him a progress report. The two of us tracked down where to dump the RTP packet data in rtp/session-rtp.cc. So I'll add that shortly. I'm going to try and link the dumping of the YUV frames to the dumping of the RTP packet data, which will make it much easier to output synchronized RTP packets and the YUV frames being stuffed into those packets. If they are out of sync, then any comparisons will obviously be inaccurate.

No news yet on the licensing for the VQM Software. I've emailed them again to see how that's coming along.

Posted by josuah at 10:34 PM UTC+00:00 | Comments (0) | TrackBack

October 8, 2002

Dumping BigYUV to Disk

Turns out the cameras in 155 Sitterson are just fine. I ran around the entire building today looking for a composite video cable to try and test the cameras directly with the projectors, but when I got back to 156 Sitterson, David Ott was there and explained that the little light on the front needs to be green and not red. Red is some sort of standby mode. Once we found the camera remote I was able to activate all of the cameras and the signal came over the S-Video just fine.

I then modified video/video-meteor.cc (the device used for Brooktree cards) and codec/device.[h|cc] so that I could append frames onto a file. When using YUV 4:2:2 (e.g. JPEG) I dump in BigYUV format--the format expected by the VQM Software. I tested vic with JPEG large, highest-quality, maximum bandwidth/fps and was able to get up to 27fps on vidcap4-cs. So dumping to disk is not putting a significant strain on the process; 27fps is approximately where vic got before I made these changes. vidcap4-cs is a Pentium III 1GHz with 256MB PC-133 SDRAM and 9GB hard disk. I'm not sure what the specifications of the hard disk are, but at 27fps the dump file quickly approached 100MB in just a few seconds.
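
The append path itself is nothing fancy; roughly this (not the actual device.cc code, and it assumes the capture buffer is already packed 4:2:2 in the byte order the VQM Software's Big YUV reader expects):

    #include <cstddef>
    #include <cstdio>

    // Sketch of the frame-append path: open the dump file in append mode and
    // write each captured 4:2:2 frame verbatim. Two bytes per pixel at
    // 640x480 is 614400 bytes a frame, which is why the file blows past
    // 100MB within seconds at 27fps.
    bool appendFrame(const char* path, const unsigned char* frame,
                     int width, int height)
    {
        std::FILE* f = std::fopen(path, "ab");
        if (!f)
            return false;
        std::size_t bytes = (std::size_t)width * height * 2;
        std::size_t written = std::fwrite(frame, 1, bytes, f);
        std::fclose(f);
        return written == bytes;
    }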

Posted by josuah at 8:09 PM UTC+00:00 | Comments (0) | TrackBack

October 7, 2002

vic on Vidcap Machines

I compiled the Brooktree 878 device into the kernels on vidcap3-cs and vidcap4-cs, and created the device file /dev/bktr0. I then grabbed the latest Open Mash out of CVS and compiled it into my Vidcap local user account. vic is running fine, but I can't find a signal on the S-Video port of the Bt878 cards where the cameras in 155 Sitterson are plugged in. All I'm getting is a deep blue screen. There's some other signal going into Port-1 of the Bt878 which I can view as a bunch of multicolored snow, so I know the card is working and vic is talking to it. I must need to do something special to either turn on the cameras or activate the signal.

Posted by josuah at 8:44 PM UTC+00:00 | Comments (0) | TrackBack

October 2, 2002

Vidcap(3|4) Installed

Yesterday the ghosted vidcap3-cs and vidcap4-cs machines were returned to the DiRT lab. Today I installed the FreeBSD partition with 4.6.2-RELEASE and a DiRT kernel. So those machines are pretty much ready for some video capture testing with the cameras in 155 Sitterson.

I also had my meeting with Ketan today. We talked a little bit about the data path for this system. After I install Open Mash on these vidcap machines, my first objective will be to start capturing video from the cameras and dump the raw YUV frames to disk. Afterwards, those raw frames need to be read back into vic as a source and encapsulated in RTP packets which will also be dumped to disk. Those packets can then be sent out for test purposes without going through the encoding process again. Finally, a receiving vic needs to dump the raw YUV frames of the reconstructed video out to disk for comparison by the VQM Software.

Posted by josuah at 10:27 PM UTC+00:00 | Comments (0) | TrackBack

October 1, 2002

Using dummynet

I downloaded the latest Open Mash source from CVS and compiled it on Effie. It actually compiled pretty fast over NFS, probably because home directories are on the new WD1200JB hard disk. Effie is a PII-266 with 256MB SDRAM, and I think it compiled faster than when this machine was running Solaris 8 Intel. Of course, Solaris creates much larger binaries of Open Mash, so that might be the reason. But the compile time seemed pretty fast to me regardless. I wonder what performance will be like when Binibik is running nightly tasks, since / and /home share a single IDE controller.

After compiling, I got dummynet going and saw frame rates of MJPEG drop very quickly as packet loss increased. Unfortunately, ipfw also spit out a bunch of error messages, which got quite annoying, and on a slow machine this may actually reduce the amount of CPU available to vic.

Posted by josuah at 4:18 AM UTC+00:00 | Comments (0) | TrackBack

September 28, 2002

TONIC & Vidcap Machines

Today I presented Chord at this week's TONIC meeting. If you're interested, you can view my slides; it's just regular HTML and GIF images. You will need basic JavaScript support in your browser when you view the slides. A lot of the discussion about Chord revolved around whether or not Chord made sense given today's applications, or if there was anything missing from Chord which would make it more useful or appropriate.

Ketan noted that it's very difficult to do range lookups or searches, since Chord is a distributed hash table and that's not what hash tables are good for. But as with databases, you can usually address this problem by adding a supplemental lookup table which lets you traverse the keys in some sorted order. Someone at Duke who was at TONIC today is actually working on this.

Amyn argued that having everything distributed everywhere doesn't make much sense, and that it is more productive to specifically choose nodes to provide your service based on the capabilities of that node. I don't completely disagree (ref. William Gibson's Idoru), but I do think there are real applications which would directly benefit from Chord. Namely, it would be great to have my home running off a DHT like Chord in conjunction with Zeroconf. That way, plugging in my new WD1200JB would add 120GB to every single system on my network, instead of just Binibik. I bought this drive because I needed more home directory space. But I am also running out of local disk space on my workstations. Adobe Photoshop could really benefit from something like this. The question of what happens when a disk goes down can already be addressed with RAID principles. A harder question is what happens when you take a machine somewhere else where it no longer has access to the "local network disk" (I will coin this term to refer to a local disk actually built on a network; i.e. not really local but also not remote). I think you would have to be able to specify specific blocks of data as guaranteed local, and then this would work.

In other news, the Vidcap machines were taken out of the DiRT lab to get partitioned for Windows 2000 and FreeBSD. I will be installing FreeBSD onto a second partition after those machines are returned to the lab.

Posted by josuah at 2:36 AM UTC+00:00 | Comments (0) | TrackBack

September 25, 2002

Dual-boot Vidcap(3|4)

I had my weekly meeting with Ketan this afternoon. Not much to talk about since I haven't done much this past week with everything else that's been going on. But an interesting idea came up while we were talking about measuring quality with different frame rates. Originally I was wondering whether or not there would be a problem dumping 30fps to disk. But Ketan started wondering how things would look if we dumped frames at varying frame rates, and how that would affect the quality.

Anyway, I have to start setting up the test environment for video capture in the DiRT lab. We've got two machines, Vidcap3 and Vidcap4, attached to cameras in Sitterson 155. I'm going to install Windows and FreeBSD on these as dual-boot systems. We need the FreeBSD side for dummynet, and the Windows side both for the VQM Software and because David Stotts wants to run some experiments that currently look like they require Windows.

Posted by josuah at 10:41 PM UTC+00:00 | Comments (0) | TrackBack

September 23, 2002

Presenting Chord at TONIC

Ketan's asked me to present Chord this Friday when TONIC will be held at UNC. The Chord project was developed by Ion Stoica, et al. and is described in Chord: A Scalable Peer-to-peer Lookup Protocol for Internet Applications. (Note that this version of the paper is different from the one listed in CiteSeer and on the TONIC home page--I like this version better.)

Since I need to make a few PowerPoint slides to describe Chord, I just downloaded OpenOffice. Unfortunately, OpenOffice is pretty large and the new file server hard disk (WD1200JB) I ordered has not arrived yet, so I had to install OpenOffice on Orlando instead of Binibik. Since the WD1200JB will actually end up being larger than my current 80GB backup drive, sometime in the future I will have to purchase a new backup drive around 320GB in size. If you're interested in hard disk information, visit StorageReview.com. They have a WD1200JB review, which places this drive on the leaderboard for a 7200RPM IDE drive.

Posted by josuah at 11:04 PM UTC+00:00 | Comments (2) | TrackBack

September 22, 2002

Continue as Planned

Ketan and I talked a little bit yesterday while going to the TONIC meeting at Duke. Since it's quite possible that the VQM Software will become available in the near future, and we still need to setup our research infrastructure, we're going to go ahead as planned with the hope the software will be made available by the time we are ready to use it.

The first thing I am going to do is to install Open Mash vic on Effie and transmit test video through dummynet to make sure I fully understand how to use dummynet. I've already read through the simple documentation and it's basically delay and drop targets in ipfw. If I use dummynet correctly, I should see horrible video on a vic receiver.

Posted by josuah at 1:06 AM UTC+00:00 | Comments (0) | TrackBack

September 19, 2002

VQM Software Part III

I got a response from Stephen Wolf just now and unfortunately he can't say with any degree of certainty what sort of delay we're looking at regarding the no-cost evaluation agreement for the VQM Software. It's up to the lawyers, however long they will take. When the software is available, he will get back to me. But we may have to pursue alternative means of video quality evaluation for now. It would be bad if we worked on something for a while and then the VQM Software was made available, though.

Posted by josuah at 9:08 PM UTC+00:00 | Comments (0) | TrackBack

September 18, 2002

VQM Software Part II

At Ketan's suggestion, I fired off another email to Stephen Wolf asking when he thinks the evaluation agreement for the VQM Software will be ready. If the lawyers are going to take too long, then we will probably have to look at an alternative evaluation solution.

In the meantime, I will try to get familiar with dummynet since we will probably use it to simulate end-user links during the video quality evaluation process on the Oracle.

Posted by josuah at 8:38 PM UTC+00:00 | Comments (0) | TrackBack

VQM Software (pending)

Yesterday I emailed Stephen Wolf at the Institute for Telecommunication Sciences about getting access to their VQM Software for my video quality prediction research. I received a response this morning, and unfortunately their lawyers are still working on the no-cost evaluation agreement which we need to sign before we can download the software. Stephen says they should be done soon, so hopefully this won't present too much of a delay. I need to get the cameras in 150 Sitterson ready for capture first anyway, but we do need the VQM Software in order to conduct the actual experiments.

Posted by josuah at 4:41 PM UTC+00:00 | Comments (0) | TrackBack

September 17, 2002

Non-DMA Video Digitizing

Turns out the reason VDSetPlayThruDestination() was returning -2208 (noDMA) is that when using an external device like a FireWire camera, you can't do DMA. Obviously. VDSetPlayThruDestination() should be used for video capture cards on the PCI bus, but a different sequence of video digitizer API calls is needed for interfacing with USB and FireWire cameras.

I found a post in Apple's quicktime-api list archives by Steve Sisak from back on June 16, 2000 which explains this and also provides the correct video digitizer API call sequence. These are API calls I hadn't looked at before, so I'm reading through them as I try to add them to the Mac OS X video capture code in Open Mash. I'm thinking about the best way to do this, because the frames need to be grabbed asynchronously; vic shouldn't eat up CPU cycles, and it should still let the user do things while grabbing a frame, which means I can't just sleep until the frame is grabbed. I think it will work if VDCompressOneFrameAsync() is called once when starting the capture device and then re-called after each successful frame grab.

Posted by josuah at 2:29 AM UTC+00:00 | Comments (0) | TrackBack

September 16, 2002

NCNI at MCNC

Ketan and I went to the NCNI meeting this morning at MCNC. Met Tyler Johnson and a few other people. We presented our suggestions for implementing this video quality prediction system.

One thing which was suggested was to pipe reconstructed video through a video card's TV-out port, then re-digitize it for analysis. The reasoning is that this would let you run the oracle on any codec or application without needing source code to dump the frames to disk. On one hand, you'd essentially be taking the reconstructed video and converting it to NTSC before analysis, which would introduce additional artifacts. However, it is true that a number of participants will be displaying received video on television or projector systems anyway.

First step now is to try and get some YUV data to run through the free VQM Software and get some idea of how long it takes to actually run a simulation. Ketan and I were thinking it would be possible to grab YUV frames out of Open Mash vic quite easily.

At the moment, I'm installing FreeBSD on Effie.

Posted by josuah at 7:13 PM UTC+00:00 | Comments (0) | TrackBack

September 15, 2002

VDSetPlayThruDestination -2008

Well, I've put all the basic video digitizer commands into the Mac OS X video capture class for Open Mash. The initial setup appears to be working, but I'm getting a -2008 error message when I try to set the video digitizer's capture buffer via VDSetPlayThruDestination. This indicates that the location of the destination PixMap is wrong for what I'm trying to do. I'll have to dig around and figure out why. I recall seeing somewhere what situations would return -2008 (noDMA) but I can't find where that is anymore. So many Inside Macintosh PDFs open.

Posted by josuah at 1:51 AM UTC+00:00 | Comments (0) | TrackBack

September 13, 2002

Finished Imaging With QuickDraw

Just finished reading through the parts of Imaging With QuickDraw that I think are relevant to the data structures and subroutines I will need to use with the Video Digitizers to access the actual frame buffer of a captured frame. It's still a bit unclear to me how to actually use the Video Digitizer API in a correct sequence of commands that will let me set up the frame buffers, for example. Hopefully that will be fairly straight-forward with minimal API calls.

Posted by josuah at 7:15 PM UTC+00:00 | Comments (0) | TrackBack

September 11, 2002

Preparing for NCNI

I met with Ketan just a little while ago and we talked about the NCNI meeting that's coming up this Monday at 9am. We need to present our findings and propose a solution for predicting video quality based on network measurements. I need to write up a one or two page hand-out for the meeting that does this.

Ketan agrees with me that the best video quality model (VQM) to go with is the one detailed in NTIA Report 02-392. This model is the result of years of research by the Institute for Telecommunication Sciences and has several advantages over the other two models I came across:

  • The model is fully specified in the 120 page report. All of the mathematics and formulas are described in detail.
  • The document provides specific quality evaluation formulas for broadcast and video conferencing video streams.
  • VQM Software is available free for research purposes.

The other half involves implementing what Ketan has named the <music type="spooky">Oracle</music>. This is the deployment architecture he thinks would make a killer impact on the use of our research. Basically, what this would do is let end-users use a very lightweight application to make link measurements between themselves. They can then send this information to the <hands movement="wavy">Oracle</hands> along with information about the type of video they will be transmitting, and then receive a prediction that says if the video quality will be good or bad. The <sound source="thunder">Oracle</sound> machine would be a central server in our control which would use the provided link data to determine video quality, based on a specific test stream(s). One way to do this would be to actually transmit the video to itself while conforming to the provided network data and running the reconstructed video through the VQM software.

Posted by josuah at 8:02 PM UTC+00:00 | Comments (0) | TrackBack

Click to Meet Architecture

Yesterday John Lachance, a Systems Engineer for First Virtual Communications, called me back to talk about how their Click to Meet product works. I had contacted FVC about which H.263+ annexes are used in Click to Meet, and was not sure if the response I received from Steve Barth also applied to their web clients. John was able to answer this for me.

FVC uses a client/server architecture for their Click to Meet product family. How it works is that you install their server software on a single machine which is your network's H.323 point of access. All of your video conferencing data will go through this server. End-users point Internet Explorer (5.5+) at the conference and download an ActiveX Control which will talk to the Click to Meet server using a version of the Cu-SeeMe protocol.

So basically the answer is that the Click to Meet web clients will end up using the F and P.5 annexes used by the server.

Posted by josuah at 7:40 PM UTC+00:00 | Comments (0) | TrackBack

September 10, 2002

On QuickDraw

The past few days I've been trying to read through all the documentation related to QuickTime and Video Digitizers. I can get through the Video Digitizer API right now, but there are too many parts of it which I don't understand since it's a low-level API and there are no examples for using Video Digitizers to capture images. So right now I'm concentrating on QuickDraw, which is the legacy drawing API used by QuickTime.

Posted by josuah at 12:17 PM UTC+00:00 | Comments (0) | TrackBack

September 6, 2002

VQM Evaluations

I've read through the three visual quality model (VQM) papers I mentioned before, in more detail this time, to try and figure out exactly what needs to be measured and computed, and also how useful each model will be.

The Perceptual Image Distortion model doesn't appear to be particularly useful. From what I gather, they developed a model which closely matches empirical data gathered by other researchers, but did not actually conduct any human perceptual studies on images to determine the real-world usefulness of this model. Nor is there really any rating scale which could be applied to describe an image's perceived quality, unless you just want to throw around their calculated numbers. This model doesn't seem very promising, and they do state in their conclusion that future work needs to be done to correlate their model with some perceived quality.

The JNDmetrix, however, conducted real-person studies with trained image analysts and other people to compare how well JND values correlate to perceived quality, and also how badly root-mean-square error fares in the same tests. In short, the JNDmetrix VQM works extremely well at providing easy to understand ratings of an image's quality. They've won an Emmy for their work and the JNDmetrix is in use by the broadcast industry. Unfortunately, their description of how the JND values are computed comes without hard numbers or formulas. So I'll either need to contact them to see if they are willing to reveal their algorithms, or implement my own interpretation of what is discussed in their paper. Or purchase their JNDmetrix products. I'm not very optimistic about them giving me the algorithms, since they sell JNDmetrix through their products. At least there was no mention of a patent on this technology.

The third VQM is the one documented in NTIA Report 02-392. This paper is extremely detailed and provides all of the information you need to implement and use their model, including suggested parameter values, calculation descriptions, and equations specific to the type of video sequence involved. Their model also applies specifically to video sequences, and not individual images. All the equations appear to be linear, which is a good thing. It is specifically stated that this information is available for anyone to use anywhere for whatever. Like with the JNDmetrix, this model also has a specific number which can be used to compare the quality of two video sequences. Right now, this VQM looks like the best choice.

None of these models specifically tie any sort of network characteristics to the quality calculation, or deal with what type of artifacts were detected in the reconstructed frames. So, I'm guessing we'll have to go with actual calculations on sample video streams in a simulated network.

Posted by josuah at 10:32 PM UTC+00:00 | Comments (0) | TrackBack

DCT Quality for NCNI (Tyler Johnson)

I actually started thinking about this right off the bat, because I really don't like the idea of having to run actual video or even fake data with video characteristics every single time an end-user wants to test their connection. After all, what's the difference between that and just firing up a video conferencing session anyway? And it also requires both end-points to cooperate and maybe install something. I happen to be one of the few computer geeks who places a really high value on end-user convenience and usability. (I also use Mac OS X as my workstation of choice, which I guess reinforces that.)

What I am currently considering as the best option is to run tests on a simulated network we have control over (I understand there are tools for simulating network characteristics on FreeBSD routers/gateways) for some different video stream types (e.g. H.263+ with common annexes, MPEG-2, H.263, specific tools) and use that to generate a model which represents "quality" as a function of the network characteristics. Either that, or use probability and the known effects of loss, delay, or jitter on specific bits of data to generate a probabilistic model. The latter is more complex, though.
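
To make the first option concrete for myself, here is a rough sketch of what fitting such a model could look like, assuming we already have (loss, delay, jitter, quality) samples from runs over the simulated network. Everything here, including the numbers, is made up for illustration.

    # Hypothetical sketch: fit a simple linear model that maps network
    # characteristics to a measured quality score. The sample values are
    # invented; real ones would come from runs on the simulated network.
    import numpy as np

    # Each row: packet loss (fraction), one-way delay (ms), jitter (ms).
    network_stats = np.array([
        [0.00,  20,  2],
        [0.01,  40,  5],
        [0.02,  60, 10],
        [0.05,  80, 15],
        [0.10, 120, 25],
    ])
    # Quality scores from whichever VQM we settle on (higher is better).
    quality = np.array([0.95, 0.90, 0.82, 0.65, 0.40])

    # Least-squares fit: quality ~ a*loss + b*delay + c*jitter + d
    X = np.column_stack([network_stats, np.ones(len(network_stats))])
    coeffs, *_ = np.linalg.lstsq(X, quality, rcond=None)

    def predict_quality(loss, delay_ms, jitter_ms):
        """Predict a quality score for measured network characteristics."""
        return float(np.dot([loss, delay_ms, jitter_ms, 1.0], coeffs))

    print(predict_quality(0.03, 70, 12))

A linear fit is almost certainly too simple for real codecs, but it shows the shape of the pre-computed model the end-user tests would be matched against.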

Then, given those models, the end-user could run some simple ping-type tests with different packet sizes to figure out the current characteristics of their connection, and then match that up with the pre-computed model associated with their codec/application.
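
Something along these lines is what I have in mind for the client-side probe; this is just a sketch that shells out to the system ping command with Unix-style flags, and the host and payload sizes are placeholders.

    # Hypothetical probe: measure loss and average RTT for a few payload
    # sizes by running the system ping command (Unix-style -c/-s flags).
    import re
    import subprocess

    def probe(host, payload_bytes, count=10):
        """Return (loss fraction, average RTT in ms) for pings of one size."""
        out = subprocess.run(
            ["ping", "-c", str(count), "-s", str(payload_bytes), host],
            capture_output=True, text=True).stdout
        loss = re.search(r"([\d.]+)% packet loss", out)
        rtt = re.search(r"= [\d.]+/([\d.]+)/", out)  # min/avg/max summary line
        return (float(loss.group(1)) / 100 if loss else None,
                float(rtt.group(1)) if rtt else None)

    for size in (64, 512, 1024):
        print(size, probe("example.com", size))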

Ketan suggested an interesting approach to this where an end-user could point their web browser at a server set up to run these ping-type tests for them. This would not require the end-user to even know what pinging means. Of course, this only observes the connection between the end-user and that server, not between the two end-users who will be communicating. But it's better than nothing. It would also be possible to conduct this test several times each day over a few weeks to generate aggregate characteristics for a specific end-user and let them know what they can expect, average-case (maybe also best- and worst-case).

On Friday, September 6, 2002, at 07:14 AM, Tyler Johnson wrote:

Wesley I did a cursory review of the information you sent out regarding evaluating quality for compressed video. I suspect we are going to face an architectural decision and I would like to get you thinking about it now.

I see two possible approaches to predicting whether a particular network link will perform well for compressed video. The first is to relate network performance to perceptual quality. So we would say, for example, 5% packet loss causes enough tiling at target data rates/codecs that most people would regard it as unusable. This would be an extremely convenient thing to do. Perhaps there would be a plug-in architecture that lets you insert your own rating scales. The down side is of course that every codec implementation is different and deals with packet loss, jitter, and buffering in ways that could render those correlations meaningless.

The other approach would be to actually send sample video and analyze it on the other end. On the surface, this would seem to ensure dead accurate results. The downside is that the tool would be bigger with more components. I also wonder if we generate sample video, do we have the same problem as approach #1 in terms of different codecs behaving differently than the sample.

I would like for you to think about these issues and what the tradeoffs might be to going down either path. Perhaps there is some other approach?

Posted by josuah at 8:06 PM UTC+00:00 | Comments (0) | TrackBack

Click to Meet Annexes

Steven Barth from First Virtual Communications was good enough to get back to me with which annexes their conference server supports:

  • Annex F: Advanced Prediction Mode (reduces artifacts)
  • Annex P.5: Factor-of-4 resampling (allows resizing)

So, between Click to Meet and Polycom, Annexes F and P.5 are shared.

Posted by josuah at 6:14 PM UTC+00:00 | Comments (0) | TrackBack

Searching for H.263

I tried to find some information on the H.263+ options commonly used in video conferencing software. The most popular application looks to be Microsoft NetMeeting, no doubt in large part because it's bundled with every version of Windows. Cu-SeeMe doesn't look like it officially exists anymore, but I did find Click to Meet by First Virtual Communications. It looks like FVC used to be in charge of Cu-SeeMe but now sells Click to Meet. The free iVisit application also looks pretty popular, based on their discussion forum. It was developed by Tim Dorcey, the creator of Cu-SeeMe. Unfortunately, I could not find anything about which H.263+ options these applications support. I've emailed FVC and the iVisit people asking if they would be able to provide me with this information. I'm not even sure NetMeeting implements H.263+, since everything only refers to H.263.

But there's also Polycom, which I know about from my work at the Berkeley Multimedia Research Center. They are good enough to make their technology white papers available, and list the H.263+ annexes and options implemented in their iPower video codec:

  • Annex F: Advanced Prediction
  • Annex J: Deblocking
  • Annex N: Reference Picture Selection
  • Annex P: Reference Picture resampling by a factor of 4
  • Appendix I: Error Tracking

I also found two more papers. The first provides a bunch of statistical models for variable-bit-rate video. This might be useful later on when we need to develop our own video stream models for traffic analysis. The second compares the PSNR video quality of H.263 under different packet-loss protection schemes. Interestingly, they found that H.263 PSNR decreases linearly as packet loss increases.
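
Since PSNR keeps coming up as the baseline metric, here is the standard calculation for reference; this is a generic sketch with invented example data, not anything from that paper.

    # Peak signal-to-noise ratio between two 8-bit frames.
    import numpy as np

    def psnr(original, reconstructed, max_value=255.0):
        """PSNR in dB; higher means the reconstruction is closer to the original."""
        mse = np.mean((original.astype(np.float64) -
                       reconstructed.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")  # identical frames
        return 10.0 * np.log10(max_value ** 2 / mse)

    # Example: a CIF-sized luma frame and a noisy copy of it.
    frame = np.random.randint(0, 256, (288, 352), dtype=np.uint8)
    noisy = np.clip(frame + np.random.normal(0, 5, frame.shape), 0, 255).astype(np.uint8)
    print(psnr(frame, noisy))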

Posted by josuah at 2:43 AM UTC+00:00 | Comments (0) | TrackBack

August 29, 2002

Video Quality Models

So far I've skimmed over the following Video Quality Models (VQMs). Each model has its own metrics and measurement techniques that apply to specific types of video. I'm going to enumerate these models and try to explain a little bit about how they work and what types of video are applicable.

The Institute for Telecommunication Sciences has its own VQM developed by the System Performance Standards Group. Information about this model is available on their Video Quality Research web page. It's documented in NTIA Report 02-392. This model compares a reconstructed video stream against the original by looking at luminance, gain, spatial and temporal distortions, non-linear scaling and clipping, blocking, and blurring. It is specifically applicable to television (e.g. MPEG-2) and video-conferencing streams (e.g. MPEG-4, H.263), as well as general purpose video streams.

The Sarnoff JNDmetrix VQM works differently, but at first glance appears to be less complex. It may, however, be computationally more expensive because it performs a Gaussian blur and computes non-linear "busyness" values. Basically, this model looks at local distortions between the reconstructed and original images or video frames and then weights those distortions based on the busyness of the local area. The idea is that people don't notice errors in areas that are visually busy. Since this model applies to single images regardless of encoding, it is applicable to any video codec.
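
To make sure I understand the idea, here is a rough sketch of a busyness-weighted distortion measure; this is only my reading of the general approach, not the actual JNDmetrix algorithm, and the busyness definition (local variance after a Gaussian blur) is my own stand-in.

    # Sketch of a busyness-weighted distortion measure (not JNDmetrix itself):
    # errors in visually busy regions are weighted down.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def busyness_weighted_distortion(original, reconstructed, sigma=2.0):
        orig = original.astype(np.float64)
        recon = reconstructed.astype(np.float64)
        error = (orig - recon) ** 2

        # Stand-in "busyness": local variance of the original image.
        blurred = gaussian_filter(orig, sigma)
        local_var = gaussian_filter((orig - blurred) ** 2, sigma)

        # Down-weight errors where the local area is busy.
        weights = 1.0 / (1.0 + local_var)
        return float(np.mean(weights * error))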

The third VQM, Perceptual Image Distortion, is described in a paper by Patrick C. Teo and David J. Heeger. This model passes an image through a set of linear filters tuned to different spatial frequencies and orientations. Normalized local-energy calculations are then used to identify distortions in the transformed image. Since this calculation is also done only on images, it should be applicable to any video codec.
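
And for comparison, a very loose sketch of the normalized local-energy idea; the two toy derivative filters below stand in for a real spatial-frequency and orientation decomposition, so this only shows the structure of the calculation, not Teo and Heeger's actual model.

    # Loose sketch of normalized local-energy distortion (not Teo and
    # Heeger's actual filter bank or parameters).
    import numpy as np
    from scipy.ndimage import convolve

    # Toy horizontal and vertical derivative filters as stand-ins.
    KERNELS = [np.array([[1.0, -1.0]]), np.array([[1.0], [-1.0]])]

    def oriented_energies(image):
        """Squared responses of the image to each oriented filter."""
        img = image.astype(np.float64)
        return [convolve(img, k) ** 2 for k in KERNELS]

    def normalize(energies, sigma2=1.0):
        """Divisive normalization: each band divided by the pooled energy."""
        pooled = sum(energies)  # pixel-wise sum across bands
        return [e / (sigma2 + pooled) for e in energies]

    def perceptual_distortion(original, reconstructed, sigma2=1.0):
        n_o = normalize(oriented_energies(original), sigma2)
        n_r = normalize(oriented_energies(reconstructed), sigma2)
        return sum(float(np.sum((a - b) ** 2)) for a, b in zip(n_o, n_r))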

Posted by josuah at 4:31 PM UTC+00:00 | Comments (0) | TrackBack

August 28, 2002

QuickTime Documentation

Apple, although often accused of bad developer support, is still loads better than Microsoft. At least in my opinion. I downloaded a few QuickTime developer documents from the Apple Developer Connection to read up on Video Digitizers. All of the Inside Macintosh books are available for free download in PDF format. And a lot of it (if not all of it) is available for online browsing in HTML. I happen to have a copy of them on CD-ROM from several years ago, but I don't use it.

Their free suite of development tools is also excellent. Project Builder is the best IDE I've ever used, both from a functional and a user-interface standpoint. Interface Builder is also really cool. I've also used Symantec C++ (which I guess doesn't exist anymore), Metrowerks CodeWarrior, and Forte for Java (now Sun ONE Studio), and none of those are as good.

So I've read up on using Video Digitizers in Inside Macintosh: QuickTime Components. The API seems straightforward enough, but as I learned when adding audio support to vat, I'm going to run into problems I didn't know about. Or maybe that phase is over, since I've already tried to get vic to capture video a bunch of times using Sequence Grabbers.

Posted by josuah at 11:53 PM UTC+00:00 | Comments (0) | TrackBack

Perceptual Metric Proposals

Just finished up my meeting with Ketan. Showed him what I had so far in terms of the visual and network metrics papers and web sites. We talked about what's coming up and where to go from here.

On September 16th, we'll be going to the NCNI meeting to present a few different proposals as to how we can go about developing a tool to predict video quality based on network measurements. So between now and then, my job is to find perceptual image quality models which I think work well and also apply to H.263 and MPEG-2. Given these models, I need to figure out what sort of network measurements will let us predict the final video quality.

I've already got the Sarnoff JND model, which I do think works well. Notably, this model applies to DCT images in general rather than to particular video streams, so it should work for both H.263 and MPEG-2. We should be able to figure out the probabilities for different levels of image reconstruction quality given some network characteristics and traffic statistics. Then applying the JND model to the reconstructed images will give us probabilities for different video qualities under those network conditions.

So, other than looking for some additional perceptual image quality models, I also need to look up H.263 implementations to see which are the most commonly used features. Different features will affect the traffic characteristics and image reconstruction. There might be some papers which deal specifically with H.263.

One thing which will come later on is figuring out how we can generate traffic with video characteristics to gather link statistics and measurements. Ketan suggested a Java applet which generated the traffic, but we both have doubts as to how well that would work. A Java applet is not going to be very reliable for precise time measurements. But it would be easier to implement than Ketan's other suggestion. If we point a web browser at our own server which is modified to send back data mimicking video traffic, we can make measurements at the TCP/IP layer. I think this approach is both more accurate and more elegant from a user's point of view.
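
The server-side piece could be as small as something like this: a stub that pushes frame-sized bursts of dummy data at a video-like rate, with the receiving end timing their arrival. The port, bit rate, and frame rate below are invented, and since TCP doesn't preserve message boundaries this only approximates per-frame timing.

    # Hypothetical sketch: serve dummy data that mimics a constant-bit-rate
    # video stream, and time its arrival on the other end.
    import socket
    import time

    BIT_RATE = 384_000            # bits per second (made-up H.263-like rate)
    FRAME_RATE = 30               # frames per second
    FRAME_BYTES = BIT_RATE // 8 // FRAME_RATE

    def serve(port=9000, seconds=10):
        """Send frame-sized bursts at a fixed rate to one client."""
        with socket.create_server(("", port)) as srv:
            conn, _ = srv.accept()
            with conn:
                payload = b"\0" * FRAME_BYTES
                for _ in range(seconds * FRAME_RATE):
                    conn.sendall(payload)
                    time.sleep(1.0 / FRAME_RATE)

    def measure(host, port=9000):
        """Record gaps between received chunks (a rough jitter estimate)."""
        arrivals = []
        with socket.create_connection((host, port)) as sock:
            while True:
                chunk = sock.recv(FRAME_BYTES)
                if not chunk:
                    break
                arrivals.append(time.monotonic())
        return [b - a for a, b in zip(arrivals, arrivals[1:])]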

Posted by josuah at 7:46 PM UTC+00:00 | Comments (0) | TrackBack

August 27, 2002

Prioritized Papers

In preparation for my meeting with Ketan tomorrow afternoon, I've gone through the 25 or so papers I found so far and prioritized them. I placed them into four categories: video metrics, network metrics, video + network metrics, and miscellaneous. Under each category I separated papers into high, medium, and low relevance. Luckily, most of the papers are of high relevance. However, only one paper fit into the video + network metrics category and it deals with diff-serv networks instead of general networks. The two papers I filed under miscellaneous are Steve Smoot's dissertation and a paper titled VoIP Performance Monitoring.

One of the papers pointed me at a traffic analysis tool called NetScope that was developed by AT&T Labs-Research. I found a paper describing this tool at Papers on Internet Traffic Analisys (yes, analysis was spelled wrong). There's some good information at both of these sites which I'll probably want to follow up on later. But I don't know if AT&T Labs-Research will actually let outsiders access their tools or research data. Their web site seems more overview than details; perhaps contacting some of the people there directly would be fruitful.

However, I still haven't really found anything that specifically correlates traffic characteristics with video quality. I suppose we'll have to generate models and run simulations and tests to determine this ourselves, given a specific video system. From scratch.

Posted by josuah at 10:40 PM UTC+00:00 | Comments (0) | TrackBack

August 25, 2002

CiteSeer Results

So I used CiteSeer and the references or bibliography sections of those papers I found earlier about perceptual video quality metrics, and I now have about 25 papers in total. Some of them deal with techniques to measure video quality and some with measuring network traffic properties. I also tried to find papers specifically looking at the impact of network traffic properties on video quality, and while I think I found a few, those were much harder to come across.

Steve Smoot's dissertation is about maximizing video quality given bit-rate constraints, but just about everything he references deals with video signal processing and not the relationship between network traffic properties and video quality. A few of the references deal with how to measure perceptual video quality. So I don't think much of what he referenced will help. There were a couple which looked a little encouraging, but I wasn't able to actually find the papers themselves, and one or two are books, not papers. CiteSeer didn't have his paper in their database, so I couldn't find papers which cited it. Maybe I'm missing something here, since Ketan specifically pointed me at Smoot's dissertation.

Some links which I think will be helpful to me later on are:

  1. MPEG-2 Overview by Dr. Gorry Fairhurst.
  2. A Beginner's Guide for MPEG-2 Standard by Victor Lo.
  3. MPEG FAQs
  4. MPEG.ORG

I need to read up on MPEG so I know more about the codec.

Posted by josuah at 12:42 AM UTC+00:00 | Comments (0) | TrackBack

August 23, 2002

Got FireWire Cam

I got an ADS Pyro 1394 webcam from Herman Towles, a Senior Research Associate here at UNC Chapel Hill. This will let me finish up video capture for vic in Mash. I'm running it on my iBook with the 30-minute trial drivers from IOXperts.

So I connected it up and started poking around on Apple's developer site looking for those things I dumped into my QuickTime Video Capture Archive Note, particularly the Video Digitizer implementation. I have a starting point there, but I need to read through the Video Digitizer documentation more.

Posted by josuah at 8:18 PM UTC+00:00 | Comments (0) | TrackBack

August 22, 2002

Perceptual Video Quality Metrics Papers

At Ketan's suggestion, I've started collecting papers and information about perceptual video quality metrics. These papers will give us a starting point for figuring out how to associate network statistics like delay and jitter with perceived video quality.

He pointed me at Steve Smoot's dissertation. I also found the Video Quality Research site of the System Performance Standards Group, part of the Institute for Telecommunication Sciences. They have a bunch of papers there dealing with the objective measurement of audio and video.

I also found a couple of papers about measuring perceived visual quality based on the Sarnoff Just-Noticeable Difference (JND) vision model. This model looks interesting because it does a better job of calculating perceived differences than a simple difference map or signal-to-noise calculations.

Now I'm going to use CiteSeer to find more papers related to those I mentioned above. I've got about ten right now that seem relevant at first glance, and CiteSeer will probably give me several more.

Posted by josuah at 7:10 PM UTC+00:00 | Comments (0) | TrackBack
