Wednesday, June 13, 2007

Book Review: Programming the Universe

It is often hard to enjoy a science text, even one of those pop-science books. However, when I was looking around the McGill bookstore, this book just jumped out at me. Some might say it is because I am a comp sci and physics major looking to go into quantum computing, but I think it was divine intervention: the sort of divine intervention that is only possible through the power of a computing universe.

Seth Lloyd makes the obscure subject of quantum computing very accessible in ‘Programming the Universe’. On its own the book will not teach you much about anything, but if you have some basic knowledge of computer science and physics and realize some of the implications of the computational paradigm, then the book is really stunning. It kept me captivated as I read almost all of it (except for a little of the start that I read in the bookstore) on the eight-hour bus ride to New York.

Seth Lloyd is a professor of mechanical engineering at MIT and likes to be known as a quantum masseur. He finds ways to convince atoms to compute for him and is credited with proposing the first plausible design for a quantum computer in 1993. In the book he presents his new way of looking at the physical world: in terms of information and computation. He rewrites the second law of thermodynamics to deal with information as opposed to energy, and with that law in hand, delves into the basics of quantum computing and qubits. He does not go too deeply into the inner workings of quantum computing but instead expands it to the universal stage, discussing the Universe as one huge quantum computer computing itself. To finish off, he covers the informational revolutions that have happened since the Big Bang and talks about the place of humans in the picture. Throughout the book, Lloyd keeps a good sense of humor to avoid making the material dry. My only issue with ‘Programming the Universe’ is that at times it is far too dumbed-down and spends too much time on the obvious or simple. Overall, it is a very good read and I recommend it.

Labels: , ,

Wednesday, February 28, 2007

Beauty Function

Three computer scientists from Tel Aviv University have developed a new tool for retouching images. If red-eye reduction was impressive, then this is stunning. The software, appropriately dubbed “Beauty Function”, takes an image of your face, calculates your current facial proportions and proposes a more optimal configuration, displaying the associated image. In effect, it makes you look more beautiful (about 79% of the time, according to the developers’ data). The developers provide some sample pictures that have been transformed. The pictures are probably the program’s best results, and some are not overly impressive; still, it is very good work for an automatic algorithm.

The definition of beauty and the ratios associated with it were derived experimentally. The scientists surveyed 300 men and women, asking them to rank pictures on an attractiveness scale of 1-7. The scores were tabulated and linked to the ratios of various facial features (such as eye size, facial shape, etc.). Around 250 measurement points were considered when developing the algorithm. In the end the scientists came up with a mathematical function that transforms an input set of ratio measurements into a more optimal configuration. The result is an image with a more beautiful face than the original, one that still carries the specific features of, and can be identified as, the original person.
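To make the idea concrete, here is a toy sketch in Python of the general pipeline described above: fit a simple least-squares model from facial-ratio measurements to survey scores, then nudge a face's ratios toward a higher predicted score while keeping the change small so the face stays recognizable. This is only my own illustration with made-up data, not the researchers' actual algorithm, and every name in it is hypothetical.

# Toy sketch of the "Beauty Function" concept, NOT the published algorithm:
# learn a score from ratio measurements, then move a face's ratios a little
# way up the score gradient so it still resembles the original person.
import numpy as np

def fit_beauty_model(ratios, scores):
    """Least-squares fit: predicted score = ratios . w + b."""
    X = np.hstack([ratios, np.ones((ratios.shape[0], 1))])  # add a bias column
    coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
    return coef[:-1], coef[-1]  # weights, bias

def beautify(ratios, w, step=0.05, max_shift=0.1):
    """Nudge the ratio vector up the score gradient, capped so the result
    stays recognizably close to the input face."""
    direction = w / np.linalg.norm(w)
    return ratios + np.clip(step * direction, -max_shift, max_shift)

# Hypothetical data: 300 rated faces, each described by 250 ratio measurements.
rng = np.random.default_rng(0)
faces = rng.normal(1.0, 0.1, size=(300, 250))
scores = rng.uniform(1, 7, size=300)           # the 1-7 attractiveness ratings
w, b = fit_beauty_model(faces, scores)
improved = beautify(faces[0], w)
print("predicted score before:", faces[0] @ w + b)
print("predicted score after: ", improved @ w + b)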

The developers see the biggest application in commercial products such as Photoshop and digital camera software. They hope that this facial modification catches on like red-eye reduction and becomes a common function used by both amateur and professional photographers.

Is digital face modification justified?

The developers try to justify the use of facial modification by saying “if magazines do it, why can’t we?” On the one hand, that is an acceptable argument. On the other hand, we are taking a bad principle and expanding it to everyone. It’s nice to see a slightly prettier Jennifer Lopez or Christina Aguilera on a magazine cover, but how far will it go? At first we modify images, then we learn how to alter video; soon we will be demanding virtual reality goggles that make everyone look prettier. A grand concept, but I think we should leave that to beer.

More useful applications

In my opinion, a more useful application for this sort of technology is facial recognition and indexing for image search. This technology already exists in some forms. I also think using the technology in plastic surgeons’ offices (as the designers suggest) is appropriate. I doubt the tech (in this form) will catch on widely in mainstream digital cameras. It will most likely end up as one of the countless features in Photoshop, and hopefully GIMP will quickly follow with an open-source version.

The eye of the beholder

One of the developers, Cohen-Or, states: “Beauty is not in the eye of the beholder. Beauty is merely a function of mathematical distances and ratios”. Is this really true? I agree up to a certain point. Just like the preferred 7:10 waist-to-hip ratio that men seek in women, I believe there are certain ratios and lengths in the face that a human looks for. These ratios reflect a healthy individual and help a person pick an optimal mate. Beyond that optimal selection, though, I think a lot depends on the beholder, and I don’t mean that in the cheesy classical sense.

I believe there is a more scientific reason behind “the eye of the beholder.” For the sense of smell, some experiments have shown that humans prefer the pheromones of their best genetic matches. For kissing, some studies have shown similar results, implying that a kiss “tastes” better from a more compatible genetic match. I think the same applies to faces. Your phenotype is definitely expressed in your face (that’s why you look like your parents), so why wouldn’t you subconsciously evaluate a person’s phenotype or genotype when looking at them? This means there are formulas, but the formulas vary between different genotypes. Thus a perfect “Beauty Function” cannot be created.

The overlap that many people share when searching for a healthy and compatible face can be modeled. This model can take us closer to beauty, but such a process can never hope to reach true beauty. Once you get beyond a certain basic template, the individual compatibility factors take over, and the highest end of beauty is in the eye of the beholder.

Conclusion

In the end I think this technology will mostly end up being used by dating and porn sites. Both need to modify mediocre pictures quickly and cheaply. Hopefully the system will expand to more useful applications, such as image search.

Currently, the developers have not offered a version to play around with. They have been promising a web-app version of the program since August 16th, but as of yet it has not been delivered. The only things we have access to are the sample pictures and the demo video.

On a personal note: wow, I was interrupted by two fire alarms while typing this post. As far as I understand, the cause was a fault in the system. My residence usually gets hit by a block of two or three fire alarms in a night. There was a real (minor) fire earlier today on the 4th floor (while I was in class), so hopefully this past alarm was the last of the night.

Labels: , ,

Monday, November 07, 2005

Rotting Our Analytical Minds

Over this past weekend, I downloaded a couple of the Visual Studio 2005 betas from Microsoft. I am trying to diversify my vocabulary, language base and thought processes in programming. I decided to finish learning C++ and to learn J#, so I downloaded those two as well as the Web development tool. I have looked briefly into Visual C++ and J#, but started off mostly in Web development, working on the browser-based Wevonger program, which works along the same lines as the Delphi-written Devonger I have been working on recently. The experience of working in three different languages simultaneously opened me up to a lot of thought and argument about which is better. I work in two radically different assisted languages: Pascal in Delphi and JavaScript/HTML/CSS in Web development. On the side, for my blog work, I write pure untouched code. The three different programming experiences have really raised questions about which is the most effective.
In assisted languages I can make much more complex programs, much faster. In pure code I am much more efficient from a resources point of view, and I retain the integrity of and full knowledge of my code. I came across similar dilemmas when I was experimenting with command-line Python, but I only used Python for relatively useless things before looking into GUI interfaces (assistance). In the end I cannot come to a good decision over which is more effective. I think it is essential for a programmer to know both and be adaptable. I think a programmer is marked by his skill at thinking in different ways and his ability to learn quickly, not by his knowledge of syntax. Even with my native tongue of Delphi I still frequent the F1 (help) key, because I see memorization of syntax as useless.
In my quest to enlighten myself on which sort of programming was more appropriate (and to avoid actual work) I scoured the World Wide Web. I soon came across an article with a hopeful title by Charles Petzold. I quickly turned to the man with 30 years of coding and programming for advice. After attentively reading the 20-page article and laughing at all the little jokes only a man who has spent 30 years coding could write, I started to form a better opinion of proper programming. Petzold had specialized in Windows Forms and C programming. Now he has moved over to C# and knows his Pascal and C++, but his mother tongue was different from mine. However, I was able to understand his words and feel the conflict that tore him apart. I was able to understand and sympathize with the uncertainty over which is better: human code or computer-assisted code. In the end both of us came to few conclusions. The only thing that was really established for me was the true addictiveness of the shortcuts computer assistance gives us.
I recalled the day I had to make a second form for the first time. Before then, Delphi had always manufactured my first form for me, and when I had to create another one from scratch I was stuck scratching my noggin. In about five minutes I had the code down, but the main thing was that initial hesitation and the lack of proper education in form creation. I realized that having Delphi create my form was bad for my programming mind, and yet I would never get rid of it.
As Petzold wrote in his article, “Does Visual Studio Rot the Mind?”:
It is very common for us to say about a piece of consumer technology that “we didn’t know how much we needed it until we had it,” and much of this technology seems targeted not to satisfy a particular need, but to get us hooked on something else we never knew we needed; not to make our lives better, but to tempt us with another designer drug. “I can’t live without my ___________” and you can fill in the blank. This week, I think, it’s the video iPod.

Technology has become a drug that we get hooked on and can never get off. This applies not only to IntelliSense in Visual Studio or auto-complete in Delphi and the VS predecessors. The curse of technological addiction also applies to non-programmer applications, like MSN, Winamp… spell check. The first two are constantly running on my computer and I do not know what I would do without them. I cannot imagine going through hundreds of CDs or records in my player, or even listening to the same artist twice in a row. My phone sees only bare-minimum use: when someone’s internet goes down, when my computer-illiterate mother calls, or when I am just too confused to type. My spelling is probably on the level of a grade-school child from around a century ago, because I can always just press F7 and have all the spelling mistakes in this post fixed automatically.
Technology has become both a dream and a curse for not only programmers, but all of its users. Soon we will be hovering around our school hallways saying emotionless ‘lol’s to each other.

Labels: ,

Wednesday, October 19, 2005

IP Chat

Introduction
Today during our time in the drafting lab, my friends and I tried to establish a way to chat with each other without talking. Dave and Oliver led the initiative and quickly discovered that most of the standard chat programs (Google Talk, MSN, etc.) were blocked. They tried Web MSN, but that was blocked as well. We found a LAN-based service and started using it after a while. However, the user interface was horrible and we decided the tool was too much of a pain to use. Oliver and I instantly started working on ways to make our own service. We both came to the decision of using the shared directory on our LAN (the “drop box”) to make a text-based chatting service. Oliver started working on it right away; I decided I would have a look at the DLL for the LAN service and see if I could salvage something more advanced. With more thought I was able to come up with an elaborate new way of establishing a chat service. After thinking for five periods I came up with the idea of a service based on many different IPs.

Outline
The basic idea behind my service is making it virtually independent of any one main server. Through independence it becomes impossible for organizations to block and does not require putting one server under strain. The system depends on many separate nexuses hosted by different people rather than one main server. The program would consist of two parts: a chat client and a hosting client. Through the chat client anyone would be able to connect, and through the hosting client anyone would be able to become a nexus.
Chat client:
The chat client and chat part of the system would still be based on logging on. “Key nexuses” would hold the usernames and passwords of different people. The profile itself (showing personal information, friends lists and all the other nexuses the person is connected to) would be essential but could be hosted from anywhere (even the person’s own C: drive). To make naming easier, you could register your own computer with a key nexus and it would attribute a name to your machine (since most home users have varying IP addresses). In other words, you could register with someone like “importantserver.com” or you could make your own name like “importantserver.com/user”. This way of making your own name would allow for more names as well as keeping password information on a trusted computer.
Each chat client and its profile (which could consist of more than one account) would communicate with other users through direct IP links. When you click on a user’s name, a message is sent to one of the nexuses you share in common and you are provided with the user’s current IP (it would be hidden deep inside the client so it is not too easy to retrieve). When you type up a message and send it, it goes straight to that person’s computer without having to take a detour through servers like msn.com. This direct link would make the connection slightly faster and much more efficient for things such as VoIP and video conferencing. When several users are online, each client decides which way of sending information will be more effective (through a nexus or directly to each individual IP) and takes the best course of action.
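For the curious, here is a rough sketch in Python of what that direct link could look like: ask a shared nexus for the peer's current IP, then open a TCP connection straight to their machine. The wire format and port numbers are made up for illustration; nothing here is final.

# Rough sketch of the chat client's direct link. The nexus protocol, message
# format and port numbers are hypothetical.
import json
import socket

CHAT_PORT = 52000  # port every chat client is assumed to listen on

def lookup_peer_ip(nexus_addr, username):
    """Ask a nexus we share with the peer for their current IP."""
    with socket.create_connection(nexus_addr, timeout=5) as s:
        s.sendall((json.dumps({"op": "lookup", "user": username}) + "\n").encode())
        return json.loads(s.recv(4096).decode())["ip"]

def send_direct(nexus_addr, username, text):
    """Send a chat message straight to the peer's computer, with no central relay."""
    peer_ip = lookup_peer_ip(nexus_addr, username)
    with socket.create_connection((peer_ip, CHAT_PORT), timeout=5) as s:
        s.sendall((json.dumps({"from": "me", "text": text}) + "\n").encode())

# Example: send_direct(("importantserver.com", 51000), "importantserver.com/user", "hi")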
Hosting Client:
The hosting client will allow any individual to turn their personal computer or server into a nexus. If a person has a server with a static IP, then they can become a full-fledged nexus that can give out its own usernames (although people with usernames from other servers will still be able to connect). If the host-to-be does not have their own domain name, then they will have to find a nexus that can provide them with a name of sorts. This local and easy-to-set-up server system will allow groups of friends to have their own chat server and sites to host effective chat rooms. This might seem a lot like IRC, but the big difference is that there is no site that you have to send all information through; you just need a site to get other people’s IPs from. Each hosting client will also offer the option of being free-access or password-protected.
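A bare-bones hosting client could be little more than a small server that remembers which IP each username last connected from and answers lookups, something like the sketch below (again, the message format and port are placeholders I made up, and a real nexus would add the password option):

# Minimal sketch of a hosting client ("nexus"): it records the IP each
# username registers from and answers lookup requests from chat clients.
import json
import socketserver

registry = {}  # username -> last known IP

class NexusHandler(socketserver.StreamRequestHandler):
    def handle(self):
        msg = json.loads(self.rfile.readline().decode())
        if msg["op"] == "register":            # a client announces its username
            registry[msg["user"]] = self.client_address[0]
            self.wfile.write(b'{"ok": true}\n')
        elif msg["op"] == "lookup":            # another client asks for an IP
            self.wfile.write(json.dumps({"ip": registry.get(msg["user"])}).encode() + b"\n")

if __name__ == "__main__":
    with socketserver.TCPServer(("0.0.0.0", 51000), NexusHandler) as server:
        server.serve_forever()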

String Connecting
With all these servers around it could become easy to get lost and confused. A special system is included in my plan to account for confusion. If you want to find some nexuses (let’s say your profile got lost), you just call a friend by their IP and you will automatically be given every nexus connection they have chosen to allow others to view. The same will happen whenever you engage in a chat. For every nexus on your list of connections you will be given the option of making it viewable by others or not. Whenever you connect to someone, their client will automatically tell your client all the other viewable nexuses they are connected to. This system will allow you to stay connected to someone even if one (or more) servers go down. As long as you can find one server in common you can connect (or as long as you do not close the window and keep a direct link). You will also be able to block users or nexuses, or disconnect from nexuses as you please. If you decide a certain server is giving out your IP to people who really should not have it, then you just disconnect from it and your client will not send your IP to it anymore.
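In code, the sharing step could be as simple as each client keeping a "viewable" flag per nexus and merging whatever a peer passes along; a quick sketch (the data shapes are illustrative only):

# Sketch of nexus sharing: each client marks which of its nexus connections
# may be shared, and merges whatever a peer passes along so a lost server
# can be rediscovered through any mutual contact.
my_nexuses = {
    ("importantserver.com", 51000): {"viewable": True},
    ("192.0.2.10", 51000): {"viewable": False},   # kept private, never shared
}

def viewable(nexuses):
    """The subset of our nexus connections we are willing to share with peers."""
    return [addr for addr, opts in nexuses.items() if opts["viewable"]]

def merge_from_peer(nexuses, shared_by_peer):
    """Record any nexus the peer shared that we did not already know about."""
    for addr in shared_by_peer:
        nexuses.setdefault(tuple(addr), {"viewable": False})  # private by default
    return nexuses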

Conclusion
The biggest thing going for my system is that it is completely decentralized. Decentralization makes the system almost impossible to shut down. Instead of running everything through one server as we do now, a more internet-like system is applied. My IP Chat program follows the philosophy that founded the internet: a series of separate networks (nexuses) interconnected with each other. I will start preliminary work on this project sometime next week and see how far I can get before I hit a dead end. If you have any questions, advice or other comments, feel free to post them.

Labels: ,

Sunday, October 16, 2005

Google OS

Introduction
For a while now, I have been talking to people like Kumaran and posting on forums about my ideas for a Google OS. I am not a big fan of some of the big ideas out there regarding Google running a super-cluster OS on which you have an account. Some people, such as Skrenta, phrase their arguments well:

Google is a company that has built a single very large, custom computer. It's running their own cluster operating system. They make their big computer even bigger and faster each month, while lowering the cost of CPU cycles. It's looking more like a general purpose platform than a cluster optimized for a single application.
While competitors are targeting the individual applications Google has deployed, Google is building a massive, general purpose computing platform for web-scale programming.


However, I still disagree. I believe Google will create a free, single-PC-based system that is linked to their servers. Centralization does not float my boat, and in my opinion it does not float Google’s boat either. So far, everything I have seen from Google has been very much decentralized. They run approximately 100,000 servers in different parts of the world in small server farms to optimize searching and keep themselves decentralized. I understand the basis for using all the Google servers as one supercomputer:

The Google server farm constitutes one of the most powerful supercomputers in the world. At 126-316 teraflops, it can perform at over one third the speed of the Blue Gene supercomputer, which is the most powerful computing machine available to humanity. (source)


This does lend itself toward creating some sort of centralized system. But when you think about all the people that would be part of the system and all the processes running on it, such a thing does not seem feasible. In my opinion it is more plausible to have each computer run a Google OS and be connected to the internet and a special Google network. Each computer would be responsible for its standard functions, but all the leftover processing power would be given over to Google to do with as they please (much like the program Folding@Home). The connection of Google OS users to a single network would also allow Google to study their surfing patterns (as they already do with their net mapping), but since they would control the Google OS network, they would be able to alter the connections to optimize speed. Google would be able to become a road planner for its own pseudo-net.

Free
Google would make its Google OS free because they have to, and because they can.
Have to:
Google would have to provide a free OS in order to compete well in the personal OS market. Right now there are only two major providers of paid-for OSes on the market, and only one of them can boast that it makes money from its OS. Both Microsoft and Apple provide their own priced OS; however, only Microsoft does so successfully. The Apple OS only exists because it is the only OS available on the computers Apple makes. Microsoft, on the other hand, does not manufacture computers and manages to make money on the OS alone. Microsoft’s monopoly over the market forces other operating systems (such as Linux) to be free or not exist at all. Google OS would need to be much like Linux and Mac OS to be successful. However, unlike Linux, Google would have no need to be open source, since they already have a wonderful team of computer scientists working for them. Unlike Apple, Google would have to be non-proprietary and compatible.
Can:
I have no clue how much Microsoft has patented, but Google would have to take as much as possible and function as much like Microsoft as possible. If Google makes an incompatible OS (even if it is a good operating system), it would not be able to achieve more success than something like Linux. Instead, Google would have to find a way to run on the same computers as Windows and use the same software as Windows. Like I mentioned, I have no clue how much Google could steal, but I am sure it could design some sort of operating system that could run all the basic third-party software for Windows.
The reason Google OS can be free is that the company is an advertising giant, not a software company. Google would be able to benefit from discreet ads implanted in its OS and from the spare processing power it would take from its users. The Google ads would have to be convenient and customizable enough that the system’s users would not find them a hassle or a bother. The spare CPU usage would have to be efficient enough that the person using the system does not notice it at all. With a good way of providing ads and of harnessing all that spare processing power, Google could easily make enough money to cover its development, maintenance and upgrading costs for the system, as well as some extra money to throw elsewhere.

Single PC based
The argument for having each operating system be a single entity is simple. The amount of bandwidth and processing power required on Google’s part to keep all the users running would be too great to handle effectively. Even with estimates of 126-316 teraflops in processing power, the Google super-cluster would not be able to keep everyone satisfied. To show how this works with basic math, let’s take the upper end of that range, 316 teraflops, and do our calculations with it. A standard PC runs in excess of 2 GHz, or a few gigaflops. The PlayStation 3 is rated at 2.005 teraflops (200th fastest in the world) and the Xbox at half that, but those figures come from the consoles’ graphics processors, and we only want to deal with CPU speeds. Hence we can say that our average PC runs at about 3.16 gigaflops (to make our calculations easier). Now we need an estimate for the user base. Currently Linux has an estimated 29 million users. Let us say that Google OS does not do as well and only attracts a third of Linux’s user base, or about 10 million people. 3.16*10^14 flops (316 teraflops) divided by 1*10^7 (10 million people) comes out to 3.16*10^7 flops. If we compare the 31.6 megaflops we got for an answer to the 3.16 gigaflops our computers run at, we can see the impossibility of the matter.
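(For anyone who wants to check the division, here it is spelled out in Python with the same assumed numbers:)

# The per-user arithmetic from the paragraph above, using the same assumptions.
google_capacity = 316e12            # upper estimate of Google's capacity, in flops
users = 10_000_000                  # assumed Google OS user base
typical_pc = 3.16e9                 # assumed average PC speed, in flops

per_user_share = google_capacity / users
print(per_user_share)               # 3.16e7 flops = 31.6 megaflops per user
print(typical_pc / per_user_share)  # each user would get about 1/100th of one PC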

Linked to Google servers
The link to the Google servers would allow Google to access users’ spare resources, to make a pseudo-internet, and to tailor advertisements.
Spare resources:
The spare-resource use would work much like Folding@Home or the project that started it all: SETI@Home. Most computers run at around 50% capacity. My computer right now, while running Google Desktop, Mozilla Firefox, MSN, Word, Winamp, Xfire and an assortment of other programs, is averaging around 20% CPU usage and 41% of my RAM. But we should stick with 50% just to be safe. Now let us come back to the same numbers: 10 million 3.16-gigaflop processors. Take half of 3.16*10^9 and multiply that by 1*10^7, and we come out with 1.58*10^16 flops. That is 15,800 teraflops, which is 50 times more than the high end of what Google already has. Of course, we cannot expect 100% efficiency and have to account for computers not always being on. So let us say the transfer is about 50% effective, that a standard computer is on for 1/5th of the day (less than 5 hours), and that we split each unit of work across 5 computers. Calculate 1.58*10^16 * 0.5 * 0.2 * 0.2 and we get 3.16*10^14, or 316 teraflops. In other words, by getting 10 million people to use their OS, Google could double its operating speed.
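(The same estimate as a quick Python check, with the discount factors from above:)

# The spare-capacity estimate from the paragraph above, as a quick check.
users = 10_000_000
typical_pc = 3.16e9                 # flops
spare_fraction = 0.5                # assume half of each PC sits idle
raw_spare = users * typical_pc * spare_fraction
print(raw_spare / 1e12)             # 15,800 teraflops of raw spare capacity

transfer_efficiency = 0.5           # losses in harvesting the cycles
uptime = 0.2                        # a PC is on roughly 1/5 of the day
redundancy = 0.2                    # each unit of work spread over 5 machines
usable = raw_spare * transfer_efficiency * uptime * redundancy
print(usable / 1e12)                # about 316 teraflops, roughly doubling Google's capacity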
Pseudo internet:
With complete knowledge of people’s surfing, Google would be able to customize its own internet (let us call it GoogleNet for now). With knowledge of which sites each individual wants to access, and with a couple of servers running a server version of Google OS, they could devise their own map of how to more effectively link all the Google OS users and all the Google OS servers. If you are trying to contact another person running Google OS directly, or are contacting a Google OS server, instead of going through the internet you would go through the Google sub-network. Since Google can customize how your packets fly around in their sub-network, they can optimize it for quicker speed. If you are trying to contact a non-Google computer, then Google can take you to one of its own computers virtually near the one you are trying to contact and then let your packets fly out onto the internet, again optimizing speed.
To keep this sort of subnet active and efficient, Google would need a lot of power. They would need to devise new software for fast net mapping and for evaluating people’s surfing. For that task they can use the same PhDs they have already hired. The power for the processing would come from the spare resources Google could draw from the users of its system.

Conclusion
In my mind there is no debate as to whether Google OS will come out. The only debate is when the operating system will arrive, what features it will have, and how everyone will react. What I am perplexed about is whether the alleged Google Grid will come before or after the Google operating system. The Grid and the OS are widely discussed across the internet, but there are no real comments from Google itself. I guess we will all just have to sit around and hope that Google does not get sidetracked or Netscaped.

Labels: , ,