Monday, March 29, 2010

3D Revolution

I was singing “Good Morning” to my daughter the other day, since she seems to love Broadway show tunes. The song is from “Singin’ in the Rain,” the movie starring Gene Kelly and Debbie Reynolds. It was the “Mamma Mia” or “Movin’ Out” of its time. I say that because it was not written as a cohesive musical; it was simply a collection of existing songs organized around a plot. The plot of the movie was the transition from silent movies to talking movies. It was a revolutionary phase of media.

If you look back to those days, there were a lot of naysayers and pessimists about talking movies. Everyone was so used to silent movies that talking movies seemed too strange to be appealing. At the time, talking movies were also more expensive and took longer to make. In the end, however, talking movies became mainstream and are now known simply as “movies”.

Now we are at the beginning of another revolution. With James Cameron’s Avatar, three-dimensional movies are coming into vogue. We should not forget that 3D movies were available three decades ago. However, the technology to create the 3D effect was still in its infancy then and relegated to those awful red-and-blue paper glasses. Since that time the technology has advanced significantly. In fact, Avatar was filmed with proprietary equipment designed just for 3D movies. The glasses have also advanced and can now cycle rapidly to deliver alternating images to each eye. All these advances mean that the 3D image you see is more realistic.

Of course, making a 3D movie is more expensive than making a regular movie. Not all theaters are able to show 3D movies, and only the newest televisions may even be capable of showing a 3D movie in your home. I may sound like a naysayer about 3D movies, and in part I am, a little bit. That is not to say I think 3D movies will fail as they did in the 1980s. With the technology today and the success of Avatar, there will certainly be a boom in 3D movies over the next five to ten years. But they will likely not take over from the regular old 2D movie.

I say this for several reasons. First, given how quickly most movies end up on DVD, coupled with the likely increasing availability of 3D-capable televisions, there will be less money to be made at the box office. I also do not believe that the difference will be made up in DVD sales. Thus 3D movies, which will always require far larger budgets to create, will run a far smaller profit margin overall. And given the difficulty of producing a blockbuster, there will be times when a 3D movie flops, costing the studio even more money.

The second reason is that 3D movies offer a different experience overall from regular movies. The difference between silent movies and talking movies was sound. That was not a great game changer unless you were hard of hearing, and there were always subtitles to fix that problem. The difference between 2D and 3D movies is obviously that third dimension. Now, I do not think it will be a tremendous issue, since Avatar has broken box office records, but I personally find 3D movies difficult to watch. If they are too long I start to get a headache. It is not necessarily any different from a movie filmed with a handheld camcorder without image stabilization, but it is still a headache, and it makes it impossible for me to enjoy the movie. I do not know if this is a common problem among moviegoers, but I would bet it happens to a number of them.

The last reason that 3D movies will likely not take over is that not everything looks better or needs to be filmed in 3D. Most if not all of your romantic comedies and love stories would look unimpressive in 3D. Can you imagine “When Harry Met Sally” in 3D? You did not give a very excited response, I am sure. It is far more likely that video games will go all 3D before movies do. That’s right. You heard it here first. Once those 3D televisions are more affordable, the first 3D video games will show up in stores. Imagine Grand Theft Auto or Casualties of War in 3D. Or even Mario Kart. It’s a guaranteed winner.

Sunday, March 21, 2010

Moore's Law No More

Gordon Moore was a co-founder of Intel. In the 1960s he wrote several papers on the topic of circuits and computing, and from these papers emerged what would become Moore’s law. It began with the prediction that the number of transistors that could be placed on an integrated circuit would double every two years. Subsequently, many corollaries evolved, one stating that processing speed would also double every two years. This should mean, of course, that your computer, your smartphone, and your DVR should work faster and faster as you upgrade over time, right?

Wrong.
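
Before getting into why, it is worth seeing just how aggressive that doubling prediction is. Here is a minimal sketch of the compounding, in Python; the 1971 Intel 4004 baseline of roughly 2,300 transistors is my own illustrative starting point, not a figure from Moore’s papers:

def transistors(year, base_year=1971, base_count=2300):
    # Project the transistor count assuming a strict two-year doubling period.
    return base_count * 2 ** ((year - base_year) / 2)

print(round(transistors(2010)))  # roughly 1.7 billion

Thirty-nine years of doubling every two years multiplies the starting count by a factor of about 740,000, which lands in the same ballpark as the billion-transistor chips of today.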

In fact, what will more than likely happen is that while computers will be faster, the user experience will be more and more variable. There are two explanations for this—a technical reason and an intuitive reason. The technical reason tells us that processing speed is not the only factor that determines the speed of a computer. Speed also depends on the speed of the RAM, the speed of the hard drive, and the order in which calculations are performed. This is why one of the easiest ways to speed up a sluggish computer is to increase the available RAM or get a faster-spinning boot hard drive. Unfortunately, while RAM can follow Moore’s law and its corollaries, hard drive speeds do not. This means that as processors continue to get faster, they will begin to be held back by hard drive seek times for data. Also, much software is not written for faster processors and so cannot fully take advantage of the speed. This means that calculations are done out of order or inefficiently, slowing the perceived speed of the program. In fact, while dual-core processors have been out for some time, software that truly takes advantage of both cores simultaneously has only recently become common.
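
As an aside, that last point about dual cores is easy to demonstrate. Here is a minimal sketch, assuming a made-up CPU-bound task rather than any real program, of the difference between running work on one core and spreading it across two:

import time
from concurrent.futures import ProcessPoolExecutor

def busy_work(n):
    # A deliberately CPU-bound loop with no I/O.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    jobs = [10_000_000, 10_000_000]

    start = time.perf_counter()
    for n in jobs:  # one core handles each job in turn
        busy_work(n)
    serial = time.perf_counter() - start

    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=2) as pool:  # two cores work at once
        list(pool.map(busy_work, jobs))
    parallel = time.perf_counter() - start

    print("serial: %.2fs, parallel: %.2fs" % (serial, parallel))

On a dual-core machine the parallel timing should come out close to half the serial one, minus some overhead for starting the worker processes; that gap is exactly the speed that software not written for multiple cores leaves on the table.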

The intuitive reason is slightly different, and it is what I believe to be the real reason computers do not seem much faster despite having faster processors. People’s wants always supersede their needs. There is no clearer example of this than the everyday occurrence of people spending more money on rock concerts or sports tickets or car upgrades than on healthcare. People want to be able to do bigger and showier things all the time. They want their games to have better graphics and more complex gameplay. They want to play ultra-high-definition videos. They want to run browsers with hundreds of add-ons. But in order to have these things, more calculations per unit time need to be done by the computer. Faster computers allow that to be done. But if you had that kind of speed, would you create a program that only uses eight percent of that power? Of course not. You would use all of it, and more if you could. The problem is that operating system developers are thinking the same thing—more background tasks, fancier desktops, and more widgets. So a lot of the computer’s processing power is already spoken for.

So you can see that as processing speed increases, OS needs increase in parallel with it. And because new software is written to take advantage of faster processing power, it becomes, well, a power struggle. In a very simplistic sense, with the increasing multitasking done by today’s computers, the processor time left over for the user’s own applications steadily diminishes. After all, a software designer tests his program on a computer running only the most basic OS components he needs, so that the application can use all the processing power it requires. This all suggests that as computers get faster, they will feel slower.

Sunday, March 14, 2010

The Reality of Reality TV

I have picked up a reality game show. I know, after all my railing against The Biggest Loser, I turn around and willingly watch a reality game show. I say reality game show because it is about real people being themselves and playing games or competing for monetary prizes. In The Biggest Loser it is a quarter of a million dollars; in Survivor it is a million. This is in contrast to true reality shows such as Real World or Keeping Up With The Kardashians, where there is no prize at the end of each show or season.

The show I have started watching is Chopped on the Food Network. The premise of the show is as follows: chefs must create dishes using a basketful of given ingredients within a certain amount of time. Four plates need to be completed for each course. There are three courses—appetizer, entrée, and dessert. After each course the dishes are judged and one chef is “chopped”. You begin with four chefs making appetizers, then three chefs making entrées, and then two chefs making desserts. The required combination of ingredients can vary widely, from instant grits to sea urchin. And usually the combinations are bizarre, such as scallops, instant grits, tamarind, and snow peas. With that sort of premise, the show has to be interesting. And it is. But it is also frustrating after you have seen about ten episodes.

The problem with Chopped is that the rules have not been well defined. In the beginning, Ted Allen, the host of the show, would say that the given ingredients must all be used, but could be used a little or a lot. And the chef contestants did just that. After all, would you expect a dish with equal parts hummus, cocoa powder, and celery root? But the judges always criticized the contestants for not making all the ingredients share the stage equally. In one episode a contestant actually called Ted Allen out on that point after getting berated by the judges. In the next episode the only stipulation mentioned was that all the given ingredients must be used. And in subsequent episodes the judges no longer commented on equal use of the ingredients.

Another problem with Chopped stems from the players of the show. Aside from Ted Allen you have three chef judges and four chef contestants. So you have seven chefs in a small room. And chefs are not humble people. It is the nature of the profession. There is a hierarchy in every kitchen and you do not climb up that ladder by being humble. Put seven of these egos in the same room and have three of them criticize the other four and it becomes a very awkward situation. The only way to really defuse that unbearable clash of egos is to have an episode where the chef judges compete as contestants so they can fully understand what it is like for the contestants.

The third problem with Chopped only became apparent after many episodes had been filmed. Contestants have only twenty minutes to create an appetizer, and thirty minutes each for the entrée and dessert. If you factor in prep time, that is not a lot of time for cooking and plating, so a lot of techniques are impossible to use. Instead, you see the same things over and over again: the ragout sauce, the Napoleon dessert, the salad with vinaigrette dressing. That means that after twenty episodes you watch the show more to see what ridiculous combination of ingredients they come up with next and how the chefs combine them. Every once in a while someone will be brave enough to try making gnocchi, or a granita, in half an hour, but that is not often. The show loses its educational aspect very quickly.

I do still enjoy the ridiculous ingredients and learning about new ones each time. I also know that I have never had a Napoleon, and thanks to Chopped I now have absolutely no interest in ever trying one.

Sunday, March 7, 2010

Right To Bear Arms

Colorado has been having a lot of problems lately. There was another school shooting, this time at a middle school, and by a random adult. You have probably heard the story on the news, since Deer Creek Middle School is just a few miles from the notorious Columbine High School; you can bet no news source ignored that fact. And now there is controversy at Colorado State University, where the board was considering revoking the students’ right to carry a concealed weapon. CSU was one of the few universities in the country without a concealed weapons ban. The board has now decided to ban all concealed weapons at the Fort Collins campus. Students at the Pueblo campus, however, will still be able to carry concealed tasers and stun guns. Given the timing of this decision relative to the Deer Creek Middle School attack, you have to wonder how much thought was given to the ban.

I am not a pro-gun advocate. Nor am I a nonviolent sit-in pacifist. I just do not like stupidity and irrationality. So you must ask: did the Deer Creek Middle School shooting, in the wake of the Columbine High School massacre, prompt the CSU board to ban concealed weapons on campus? Or was it just a timely coincidence? The board professes that it had been researching the idea for at least a year, and I think it is possible that it had indeed been looking at a concealed weapons ban for that long. But I also think that with the recent middle school attack, the public is very likely highly sensitized to the issue of guns. That sequence of events could easily lead to a political cascade resulting in a ban at CSU. A conspiracy theorist would say the middle school attack was orchestrated to force one.

That is a lot of politics, though, and that is not what I am writing about. Let us look at the reasoning of both sides. It is plainly simple to see the logic of the anti-gun group: a gun increases the risk of murder; more guns equals higher risk; any risk above zero is unacceptable. It has been shown that having more guns in an area does not increase safety, and may even reduce it. We already know that people tend to overestimate their skills out of conscious or subconscious arrogance; over fifty percent of drivers think they are above-average drivers. It would then logically follow that most gun-toting people believe they have the restraint not to use the gun to solve a dispute, and that when they get into a dispute they would find out they were wrong. So if there were fewer guns out there overall, there must surely be less killing overall.

On the other side, however, you have a different viewpoint. While some may believe that they have the restraint to keep from using a gun to settle disputes, that is not the main argument. The argument rests on the fact that Columbine and Deer Creek happened at all. Both incidents, as well as numerous others around the country, ended not because the authorities came and stopped the attack, but because the attackers committed suicide or were stopped by their victims. Now, it may still be true that more guns do not mean more safety for everyone. But if the authorities are unable to control a situation in time, people will not be worried about public safety; they will be worried about their individual safety, something that is not well studied because most places have concealed weapons bans. I listen to the interviews, and the students who support and carry concealed weapons all say they want their campus to be safe, exposing their ignorance of the current evidence and their superficial attempt to use the greater good as an arguing point. They are simply looking out for their own personal safety in a scenario where safeguards for the public cannot help them. It might be selfish, but I gather they would rather be selfish than dead.

I do not know what the right answer is here. What this tells us is that our world has grown so large that we are seeing a shift back toward the individual and away from the crowd. With so many recent scandals of greed, and with the government unable to adequately protect or help the multitudes of Americans suffering collateral damage, it is little wonder that more people are adopting a more “personalized” approach to life.